Sample records for multimodal interactive systems

  1. A multimodal parallel architecture: A cognitive framework for multimodal interactions.

    PubMed

    Cohn, Neil

    2016-01-01

    Human communication is naturally multimodal, and substantial research has examined the semantic correspondences in speech-gesture and text-image relationships. However, visual narratives, like those in comics, provide an interesting challenge to multimodal communication because the words and/or images can guide the overall meaning, and both modalities can appear in complicated "grammatical" sequences: sentences use a syntactic structure and sequential images use a narrative structure. These dual structures create complexity beyond that typically addressed by theories of multimodality, where only a single form uses combinatorial structure, and also pose challenges for models of the linguistic system that focus on single modalities. This paper outlines a broad theoretical framework for multimodal interactions by expanding on Jackendoff's (2002) parallel architecture for language. Multimodal interactions are characterized in terms of their component cognitive structures: whether a particular modality (verbal, bodily, visual) is present, whether it uses a grammatical structure (syntax, narrative), and whether it "dominates" the semantics of the overall expression. Altogether, this approach integrates multimodal interactions into an existing framework of language and cognition, and characterizes interactions between varying complexity in the verbal, bodily, and graphic domains. The resulting theoretical model presents an expanded consideration of the boundaries of the "linguistic" system and its involvement in multimodal interactions, with a framework that can benefit research on corpus analyses, experimentation, and the educational benefits of multimodality.

  2. Investigation of protein selectivity in multimodal chromatography using in silico designed Fab fragment variants.

    PubMed

    Karkov, Hanne Sophie; Krogh, Berit Olsen; Woo, James; Parimal, Siddharth; Ahmadian, Haleh; Cramer, Steven M

    2015-11-01

    In this study, a unique set of antibody Fab fragments was designed in silico and produced to examine the relationship between protein surface properties and selectivity in multimodal chromatographic systems. We hypothesized that multimodal ligands containing both hydrophobic and charged moieties would interact strongly with protein surface regions where charged groups and hydrophobic patches were in close spatial proximity. Protein surface property characterization tools were employed to identify the potential multimodal ligand binding regions on the Fab fragment of a humanized antibody and to evaluate the impact of mutations on surface charge and hydrophobicity. Twenty Fab variants were generated by site-directed mutagenesis, recombinant expression, and affinity purification. Column gradient experiments were carried out with the Fab variants in multimodal, cation-exchange, and hydrophobic interaction chromatographic systems. The results clearly indicated that selectivity in the multimodal system was different from the other chromatographic modes examined. Column retention data for the reduced charge Fab variants identified a binding site comprising light chain CDR1 as the main electrostatic interaction site for the multimodal and cation-exchange ligands. Furthermore, the multimodal ligand binding was enhanced by additional hydrophobic contributions as evident from the results obtained with hydrophobic Fab variants. The use of in silico protein surface property analyses combined with molecular biology techniques, protein expression, and chromatographic evaluations represents a previously undescribed and powerful approach for investigating multimodal selectivity with complex biomolecules.

  3. Unraveling Students' Interaction around a Tangible Interface Using Multimodal Learning Analytics

    ERIC Educational Resources Information Center

    Schneider, Bertrand; Blikstein, Paulo

    2015-01-01

    In this paper, we describe multimodal learning analytics (MMLA) techniques to analyze data collected around an interactive learning environment. In a previous study (Schneider & Blikstein, submitted), we designed and evaluated a Tangible User Interface (TUI) where dyads of students were asked to learn about the human hearing system by…

  4. Novel design of interactive multimodal biofeedback system for neurorehabilitation.

    PubMed

    Huang, He; Chen, Y; Xu, W; Sundaram, H; Olson, L; Ingalls, T; Rikakis, T; He, Jiping

    2006-01-01

    A previous design of a biofeedback system for neurorehabilitation in an interactive multimodal environment demonstrated the potential of engaging stroke patients in task-oriented neuromotor rehabilitation. This report explores new concepts and alternative designs for multimedia-based biofeedback systems. In this system, the new interactive multimodal environment was constructed with an abstract presentation of movement parameters: scenery images or pictures, and their clarity and orientation, are used to reflect the arm movement and its position relative to the target, instead of an animated arm. The multiple biofeedback parameters were classified into hierarchical levels according to the importance of each movement parameter to performance. A new quantified measurement for these parameters was developed to assess the patient's performance both in real time and offline. These parameters were represented by combined visual and auditory presentations using various distinct musical instruments. Overall, the objective of the newly designed system is to explore what information to feed back, and how to present it, in an interactive virtual environment so as to enhance the sensorimotor integration that may facilitate the efficient design and application of virtual-environment-based therapeutic intervention.

  5. Multimodal Interaction with Speech, Gestures and Haptic Feedback in a Media Center Application

    NASA Astrophysics Data System (ADS)

    Turunen, Markku; Hakulinen, Jaakko; Hella, Juho; Rajaniemi, Juha-Pekka; Melto, Aleksi; Mäkinen, Erno; Rantala, Jussi; Heimonen, Tomi; Laivo, Tuuli; Soronen, Hannu; Hansen, Mervi; Valkama, Pellervo; Miettinen, Toni; Raisamo, Roope

    We demonstrate interaction with a multimodal media center application. The mobile phone-based interface includes speech and gesture input and haptic feedback. The setup resembles our long-term public pilot study, in which a living room environment containing the application was constructed inside a local media museum, allowing visitors to freely test the system.

  6. Multimodal user interfaces to improve social integration of elderly and mobility impaired.

    PubMed

    Dias, Miguel Sales; Pires, Carlos Galinho; Pinto, Fernando Miguel; Teixeira, Vítor Duarte; Freitas, João

    2012-01-01

    Technologies for Human-Computer Interaction (HCI) and communication have evolved tremendously over the past decades. However, citizens such as the mobility impaired or the elderly still face many difficulties interacting with communication services, either due to HCI issues or intrinsic design problems with the services. In this paper we start by presenting the results of two user studies, the first conducted with a group of mobility-impaired users, comprising paraplegic and quadriplegic individuals, and the second with elderly users. The study participants carried out a set of tasks with a multimodal (speech, touch, gesture, keyboard and mouse) and multi-platform (mobile, desktop) system offering integrated access to communication and entertainment services, such as email, agenda, conferencing, instant messaging and social media, referred to as LHC - Living Home Center. The system was designed to take into account the requirements captured from these users, with the objective of evaluating whether the adoption of multimodal interfaces for audio-visual communication and social media services could improve the interaction with such services. Our study revealed that a multimodal prototype system offering natural interaction modalities, especially supporting speech and touch, can in fact improve access to the presented services, contributing to the reduction of social isolation of the mobility impaired as well as the elderly, and improving their digital inclusion.

  7. Design of a 3D Navigation Technique Supporting VR Interaction

    NASA Astrophysics Data System (ADS)

    Boudoin, Pierre; Otmane, Samir; Mallem, Malik

    2008-06-01

    Multimodality is a powerful paradigm for increasing the realism and ease of interaction in Virtual Environments (VEs). In particular, the search for new metaphors and techniques for 3D interaction adapted to the navigation task is an important stage in the realization of future 3D interaction systems that support multimodality, in order to increase efficiency and usability. In this paper we propose a new multimodal 3D interaction model called Fly Over. This model is especially devoted to the navigation task. We present a qualitative comparison between Fly Over and a classical navigation technique called gaze-directed steering. The results from a preliminary evaluation on the IBISC semi-immersive Virtual Reality/Augmented Reality EVR@ platform show that Fly Over is a user-friendly and efficient navigation technique.

  8. Volume curtaining: a focus+context effect for multimodal volume visualization

    NASA Astrophysics Data System (ADS)

    Fairfield, Adam J.; Plasencia, Jonathan; Jang, Yun; Theodore, Nicholas; Crawford, Neil R.; Frakes, David H.; Maciejewski, Ross

    2014-03-01

    In surgical preparation, physicians will often utilize multimodal imaging scans to capture complementary information to improve diagnosis and to drive patient-specific treatment. These imaging scans may consist of data from magnetic resonance imaging (MR), computed tomography (CT), or other various sources. The challenge in using these different modalities is that the physician must mentally map the two modalities together during the diagnosis and planning phase. Furthermore, the different imaging modalities will be generated at various resolutions as well as slightly different orientations due to patient placement during scans. In this work, we present an interactive system for multimodal data fusion, analysis and visualization. Developed with partners from neurological clinics, this work discusses initial system requirements and physician feedback at the various stages of component development. Finally, we present a novel focus+context technique for the interactive exploration of coregistered multi-modal data.

  9. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    The following reports are presented on this project: a first-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; a second-year progress report on the same topic; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.

  10. Multimodal and ubiquitous computing systems: supporting independent-living older users.

    PubMed

    Perry, Mark; Dowdall, Alan; Lines, Lorna; Hone, Kate

    2004-09-01

    We document the rationale and design of a multimodal interface to a pervasive/ubiquitous computing system that supports independent living by older people in their own homes. The Millennium Home system involves fitting a resident's home with sensors--these sensors can be used to trigger sequences of interaction with the resident to warn them about dangerous events, or to check if they need external help. We draw lessons from the design process and conclude the paper with implications for the design of multimodal interfaces to ubiquitous systems developed for the elderly and in healthcare, as well as for more general ubiquitous computing applications.
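    The sensor-triggered interaction sequences described above can be sketched as a simple escalation routine. This is a toy illustration with invented step names, not the Millennium Home system's actual logic:

```python
# Hypothetical escalation sketch (step names invented, not the Millennium
# Home design): after a sensor flags a possibly dangerous event, the system
# works through interaction steps until the resident responds or external
# help is called.

ESCALATION = ["voice_prompt", "phone_ring", "alarm", "call_helpline"]

def respond_to_event(responds_at=None):
    """Return the steps taken; stop early once the resident responds."""
    taken = []
    for step in ESCALATION:
        taken.append(step)
        if step == responds_at:
            break
    return taken

print(respond_to_event("phone_ring"))  # -> ['voice_prompt', 'phone_ring']
print(respond_to_event())              # no response: runs the full sequence
```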

  11. Towards an intelligent framework for multimodal affective data analysis.

    PubMed

    Poria, Soujanya; Cambria, Erik; Hussain, Amir; Huang, Guang-Bin

    2015-03-01

    An increasingly large amount of multimodal content is posted on social media websites such as YouTube and Facebook every day. In order to cope with the growth of so much multimodal data, there is an urgent need to develop an intelligent multimodal analysis framework that can effectively extract information from multiple modalities. In this paper, we propose a novel multimodal information extraction agent, which infers and aggregates the semantic and affective information associated with user-generated multimodal data in contexts such as e-learning, e-health, automatic video content tagging and human-computer interaction. In particular, the developed intelligent agent adopts an ensemble feature extraction approach by exploiting the joint use of tri-modal (text, audio and video) features to enhance the multimodal information extraction process. In preliminary experiments using the eNTERFACE dataset, our proposed multimodal system is shown to achieve an accuracy of 87.95%, outperforming the best state-of-the-art system by more than 10%, or, in relative terms, a 56% reduction in error rate.
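    The tri-modal approach can be caricatured as early fusion: per-modality feature vectors are concatenated into one joint vector before classification. The sketch below uses invented feature values and a nearest-centroid classifier as a stand-in; it is not the authors' pipeline:

```python
# Illustrative early-fusion sketch (feature values invented, classifier a
# stand-in for the paper's ensemble): tri-modal features are concatenated,
# then the fused sample is assigned to the closest class centroid.

def fuse(text_feats, audio_feats, video_feats):
    """Early fusion: concatenate tri-modal feature vectors."""
    return text_feats + audio_feats + video_feats

def nearest_centroid_predict(centroids, sample):
    """Pick the class whose fused centroid is closest to the sample."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(centroids[label], sample))

# Hypothetical fused centroids for two affective classes.
centroids = {
    "anger":   fuse([0.9, 0.1], [0.8], [0.7, 0.6]),
    "sadness": fuse([0.1, 0.2], [0.2], [0.1, 0.3]),
}
sample = fuse([0.8, 0.2], [0.7], [0.6, 0.5])
print(nearest_centroid_predict(centroids, sample))  # -> anger
```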

  12. Vestibular system: the many facets of a multimodal sense.

    PubMed

    Angelaki, Dora E; Cullen, Kathleen E

    2008-01-01

    Elegant sensory structures in the inner ear have evolved to measure head motion. These vestibular receptors consist of highly conserved semicircular canals and otolith organs. Unlike other senses, vestibular information in the central nervous system becomes immediately multisensory and multimodal. There is no overt, readily recognizable conscious sensation from these organs, yet vestibular signals contribute to a surprising range of brain functions, from the most automatic reflexes to spatial perception and motor coordination. Critical to these diverse, multimodal functions are multiple computationally intriguing levels of processing. For example, the need for multisensory integration necessitates vestibular representations in multiple reference frames. Proprioceptive-vestibular interactions, coupled with corollary discharge of a motor plan, allow the brain to distinguish actively generated from passive head movements. Finally, nonlinear interactions between otolith and canal signals allow the vestibular system to function as an inertial sensor and contribute critically to both navigation and spatial orientation.

  13. Interactive Learning System "VisMis" for Scientific Visualization Course

    ERIC Educational Resources Information Center

    Zhu, Xiaoming; Sun, Bo; Luo, Yanlin

    2018-01-01

    Visualization courses are now taught at universities around the world. Keeping students motivated and actively engaged in such a course can be a challenging task. In this paper we introduce our interactive learning system called VisMis (Visualization and Multi-modal Interaction System), developed for a postgraduate scientific visualization course…

  14. Effects of urea on selectivity and protein-ligand interactions in multimodal cation exchange chromatography.

    PubMed

    Holstein, Melissa A; Parimal, Siddharth; McCallum, Scott A; Cramer, Steven M

    2013-01-08

    Nuclear magnetic resonance (NMR) and molecular dynamics (MD) simulations were employed in concert with chromatography to provide insight into the effect of urea on protein-ligand interactions in multimodal (MM) chromatography. Chromatographic experiments with a protein library in ion exchange (IEX) and MM systems indicated that, while urea had a significant effect on protein retention and selectivity for a range of proteins in MM systems, the effects were much less pronounced in IEX. NMR titration experiments carried out with a multimodal ligand and isotopically enriched human ubiquitin indicated that, while the ligand binding face of ubiquitin remained largely intact in the presence of urea, the strength of binding was decreased. MD simulations were carried out to provide further insight into the effect of urea on MM ligand binding. These results indicated that, while the overall ligand binding face of ubiquitin remained the same, there was a reduction in the occupancy of the MM ligand interaction region, along with subtle changes in the residues involved in these interactions. This work demonstrates the effectiveness of urea in enhancing selectivity in MM chromatographic systems and also provides an in-depth analysis of how MM ligand-protein interactions are altered in the presence of this fluid phase modifier.

  15. Human-computer interaction for alert warning and attention allocation systems of the multimodal watchstation

    NASA Astrophysics Data System (ADS)

    Obermayer, Richard W.; Nugent, William A.

    2000-11-01

    The SPAWAR Systems Center San Diego is currently developing an advanced Multi-Modal Watchstation (MMWS); design concepts and software from this effort are intended for transition to future United States Navy surface combatants. The MMWS features multiple flat panel displays and several modes of user interaction, including voice input and output, natural language recognition, 3D audio, and stylus and gestural inputs. In 1999, an extensive literature review was conducted on basic and applied research concerned with alerting and warning systems. After summarizing that literature, a human-computer interaction (HCI) designer's guide was prepared to support the design of an attention allocation subsystem (AAS) for the MMWS. The resultant HCI guidelines are being applied in the design of a fully interactive AAS prototype. An overview of key findings from the literature review, a proposed design methodology with illustrative examples, and an assessment of progress made in implementing the HCI designer's guide are presented.

  16. User Localization During Human-Robot Interaction

    PubMed Central

    Alonso-Martín, F.; Gorostiza, Javi F.; Malfaz, María; Salichs, Miguel A.

    2012-01-01

    This paper presents a user localization system based on the fusion of visual information and sound source localization, implemented on a social robot called Maggie. One of the main prerequisites for natural human-human and human-robot interaction is an adequate spatial arrangement between the interlocutors; that is, being oriented and situated at the right distance during the conversation in order to have a satisfactory communicative process. Our social robot uses a complete multimodal dialog system which manages the user-robot interaction during the communicative process. One of its main components is the user localization system presented here. To determine the most suitable placement of the robot in relation to the user, a proxemic study of human-robot interaction is required, which is described in this paper. The study was made with two groups of users: children, aged between 8 and 17, and adults. Finally, at the end of the paper, experimental results with the proposed multimodal dialog system are presented. PMID:23012577
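    The audio-visual fusion at the core of such a localization system can be sketched as a confidence-weighted combination of two bearing estimates. The weights and values below are hypothetical, not Maggie's implementation:

```python
# Confidence-weighted fusion sketch (values hypothetical): bearings from
# sound-source localization and from face detection are averaged, each
# weighted by its confidence.

def fuse_bearing(audio_deg, audio_conf, visual_deg, visual_conf):
    """Weighted average of two user-bearing estimates, in degrees."""
    total = audio_conf + visual_conf
    return (audio_deg * audio_conf + visual_deg * visual_conf) / total

# Audio says 30 degrees (noisy), vision says 20 degrees (sharper).
print(round(fuse_bearing(30.0, 0.3, 20.0, 0.7), 2))  # -> 23.0
```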

  18. Interactivity in Educational Apps for Young Children: A Multimodal Analysis

    ERIC Educational Resources Information Center

    Blitz-Raith, Alexandra H.; Liu, Jianxin

    2017-01-01

    Interactivity is an important indicator of an educational app's reception. Since most educational apps are multimodal, this justifies a methodological effort to understand the meaningful involvement of multimodality in enacting and even amplifying interactivity in an educational app. Yet research so far has largely concentrated on algorithm…

  19. Exploring the requirements for multimodal interaction for mobile devices in an end-to-end journey context.

    PubMed

    Krehl, Claudia; Sharples, Sarah

    2012-01-01

    The paper investigates the requirements for multimodal interaction on mobile devices in an end-to-end journey context. Traditional interfaces are deemed cumbersome and inefficient for exchanging information with the user. Multimodal interaction provides a different, user-centred approach, allowing for more natural and intuitive interaction between humans and computers. It is especially suitable for mobile interaction as it can overcome additional constraints including small screens, awkward keypads, and continuously changing settings - an inherent property of mobility. This paper is based on end-to-end journeys, where users encounter several contexts during their journeys. Interviews and focus groups explore the requirements for multimodal interaction design for mobile devices by examining journey stages and identifying the users' information needs and sources. Findings suggest that multimodal communication is crucial when users multitask. Choosing suitable modalities depends on user context, characteristics and tasks.

  20. Using Vision and Speech Features for Automated Prediction of Performance Metrics in Multimodal Dialogs. Research Report. ETS RR-17-20

    ERIC Educational Resources Information Center

    Ramanarayanan, Vikram; Lange, Patrick; Evanini, Keelan; Molloy, Hillary; Tsuprun, Eugene; Qian, Yao; Suendermann-Oeft, David

    2017-01-01

    Predicting and analyzing multimodal dialog user experience (UX) metrics, such as overall call experience, caller engagement, and latency, among others, in an ongoing manner is important for evaluating such systems. We investigate automated prediction of multiple such metrics collected from crowdsourced interactions with an open-source,…

  1. Toward Multimodal Human-Robot Interaction to Enhance Active Participation of Users in Gait Rehabilitation.

    PubMed

    Gui, Kai; Liu, Honghai; Zhang, Dingguo

    2017-11-01

    Robotic exoskeletons for physical rehabilitation have been utilized in recent years for retraining patients suffering from paraplegia and enhancing motor recovery. However, users are not voluntarily involved in most systems. This paper aims to develop a locomotion trainer with multiple gait patterns that can be controlled by the active motion intention of users. A multimodal human-robot interaction (HRI) system is established to enhance the subject's active participation during gait rehabilitation, comprising cognitive HRI (cHRI) and physical HRI (pHRI). The cHRI adopts a brain-computer interface based on steady-state visual evoked potentials. The pHRI is realized via admittance control based on electromyography. A central pattern generator is utilized to produce rhythmic and continuous lower-limb joint trajectories, and its state variables are regulated by the cHRI and pHRI. A custom-made leg exoskeleton prototype with the proposed multimodal HRI was tested on healthy subjects and stroke patients. The results show that voluntary and active participation can be effectively elicited to achieve various assistive gait patterns.
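    A central pattern generator of the kind described can be sketched as a phase oscillator whose amplitude is modulated by the detected motion intention. All parameters below are illustrative; this is not the authors' controller:

```python
import math

# Minimal CPG sketch (parameters invented): a phase oscillator yields a
# rhythmic joint angle; a detected-intention signal scales its amplitude,
# standing in for cHRI/pHRI regulation of the CPG state variables.

def cpg_trajectory(t, freq_hz=1.0, amplitude_deg=30.0, offset_deg=10.0):
    """Joint angle (degrees) at time t for one rhythmic gait pattern."""
    phase = 2.0 * math.pi * freq_hz * t
    return offset_deg + amplitude_deg * math.sin(phase)

def modulated_angle(t, intention):
    """Scale the rhythmic amplitude by intention strength in [0, 1]."""
    return cpg_trajectory(t, amplitude_deg=30.0 * intention)

samples = [round(modulated_angle(t / 10.0, intention=0.5), 2)
           for t in range(5)]
print(samples)  # half-strength intention halves the swing around 10 deg
```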

  2. Adaptive multimodal interaction in mobile augmented reality: A conceptual framework

    NASA Astrophysics Data System (ADS)

    Abidin, Rimaniza Zainal; Arshad, Haslina; Shukri, Saidatul A'isyah Ahmad

    2017-10-01

    Augmented Reality (AR) has recently become an emerging technology in many mobile applications. Mobile AR can be defined as a medium for displaying information merged with the real-world environment in a single view. There are four main types of mobile augmented reality interfaces, one of which is the multimodal interface. A multimodal interface processes two or more combined user input modes (such as speech, pen, touch, manual gesture, gaze, and head and body movements) in a coordinated manner with multimedia system output. Many frameworks have been proposed to guide designers in developing multimodal applications, including in augmented reality environments, but there has been little work reviewing frameworks for adaptive multimodal interfaces in mobile augmented reality. The main goal of this study is to propose a conceptual framework illustrating the adaptive multimodal interface in mobile augmented reality. We reviewed several frameworks that have been proposed in the fields of multimodal interfaces, adaptive interfaces and augmented reality. We analyzed the components in the previous frameworks and assessed which can be applied on mobile devices. Our framework can be used as a guide for designers and developers building a mobile AR application with adaptive multimodal interfaces.

  3. Tangible interactive system for document browsing and visualisation of multimedia data

    NASA Astrophysics Data System (ADS)

    Rytsar, Yuriy; Voloshynovskiy, Sviatoslav; Koval, Oleksiy; Deguillaume, Frederic; Topak, Emre; Startchik, Sergei; Pun, Thierry

    2006-01-01

    In this paper we introduce and develop a framework for interactive document navigation in multimodal databases. First, we analyze the main open issues of existing multimodal interfaces and then discuss two applications that include interaction with documents in several human environments, i.e., the so-called smart rooms. Second, we propose a system set-up dedicated to efficient navigation in printed documents. This set-up is based on the fusion of data from several modalities, including images and text. Both modalities can be used as cover data for hidden indexes using data-hiding technologies, as well as source data for robust visual hashing. The particularities of the proposed robust visual hashing are described in the paper. Finally, we address two practical applications of smart rooms for tourism and education and demonstrate the advantages of the proposed solution.

  4. The integration of audio-tactile information is modulated by multimodal social interaction with physical contact in infancy.

    PubMed

    Tanaka, Yukari; Kanakogi, Yasuhiro; Kawasaki, Masahiro; Myowa, Masako

    2018-04-01

    Interaction between caregivers and infants is multimodal in nature. To react interactively and smoothly to such multimodal signals, infants must integrate all of them. However, few empirical infant studies have investigated how multimodal social interaction with physical contact facilitates multimodal integration, especially regarding audio-tactile (A-T) information. Using electroencephalography (EEG) and event-related potentials (ERPs), the present study investigated how the neural processing involved in A-T integration is modulated by tactile interaction. Seven- to 8-month-old infants heard one pseudoword both whilst being tickled (multimodal 'A-T' condition) and whilst not being tickled (unimodal 'A' condition). Thereafter, their EEG was measured during the perception of the same words. Compared to the A condition, the A-T condition resulted in enhanced ERPs and higher beta-band activity within the left temporal regions, indicating neural processing of A-T integration. Additionally, theta-band activity within the middle frontal region was enhanced, which may reflect enhanced attention to social information. Furthermore, differential ERPs correlated with the degree of engagement in the tickling interaction. We provide neural evidence that the integration of A-T information in infants' brains is facilitated through tactile interaction with others. Such plastic changes in neural processing may promote harmonious social interaction and effective learning in infancy.

  5. Sharing a Multimodal Corpus to Study Webcam-Mediated Language Teaching

    ERIC Educational Resources Information Center

    Guichon, Nicolas

    2017-01-01

    This article proposes a methodology to create a multimodal corpus that can be shared with a group of researchers in order to analyze synchronous online pedagogical interactions. Epistemological aspects involved in studying online interactions from a multimodal and semiotic perspective are addressed. Then, issues and challenges raised by corpus…

  6. The Effects of Multimodal Mobile Communications on Cooperative Team Interactions Executing Distributed Tasks

    DTIC Science & Technology

    2013-07-01

    AFRL-RH-WP-TP-2013-0046. Gregory… Interim report, covering 01 August 2011 – 01 August 2013. …multimodal communication capabilities can contribute to the effectiveness and efficiency of real-time task outcome and performance. In this paper, we…

  7. Mothers' multimodal information processing is modulated by multimodal interactions with their infants.

    PubMed

    Tanaka, Yukari; Fukushima, Hirokata; Okanoya, Kazuo; Myowa-Yamakoshi, Masako

    2014-10-17

    Social learning in infancy is known to be facilitated by multimodal (e.g., visual, tactile, and verbal) cues provided by caregivers. In parallel with infants' development, recent research has revealed that maternal neural activity is altered through interaction with infants, for instance, to become sensitive to infant-directed speech (IDS). The present study investigated the effect of mother-infant multimodal interaction on maternal neural activity. Event-related potentials (ERPs) of mothers were compared with those of non-mothers during perception of tactile-related words primed by tactile cues. Only mothers showed ERP modulation when tactile cues were incongruent with the subsequent words, and only when the words were delivered with IDS prosody. Furthermore, the frequency of mothers' use of those words was correlated with the magnitude of ERP differentiation between congruent and incongruent stimulus presentations. These results suggest that daily mother-infant interactions enhance multimodal integration in the maternal brain in parenting contexts. PMID:25322936

  9. Integrated urban systems modeling : designing a seamless, comprehensive approach to transportation planning.

    DOT National Transportation Integrated Search

    2009-01-01

    Metropolitan planning agencies face increasingly complex issues in modeling interactions between the built environment and multimodal transportation systems. Although great strides have been made in simulating land use, travel demand, and traffic flo...

  10. Intelligent Adaptive Systems: Literature Research of Design Guidance for Intelligent Adaptive Automation and Interfaces

    DTIC Science & Technology

    2007-09-01

    behaviour based on past experience of interacting with the operator), and mobile (i.e., can move themselves from one machine to another). Edwards argues that...Sofge, D., Bugajska, M., Adams, W., Perzanowski, D., and Schultz, A. (2003). Agent-based Multimodal Interface for Dynamically Autonomous Mobile Robots...based architecture can provide a natural and scalable approach to implementing a multimodal interface to control mobile robots through dynamic

  11. Bus-based park-and-ride system: a stochastic model on multimodal network with congestion pricing schemes

    NASA Astrophysics Data System (ADS)

    Liu, Zhiyuan; Meng, Qiang

    2014-05-01

    This paper focuses on modelling the network flow equilibrium problem on a multimodal transport network with a bus-based park-and-ride (P&R) system and congestion pricing charges. The multimodal network has three travel modes: auto mode, transit mode and P&R mode. A continuously distributed value-of-time is assumed to convert toll charges and transit fares to time units, and the users' route choice behaviour is assumed to follow the probit-based stochastic user equilibrium principle with elastic demand. These two assumptions introduce randomness into the users' generalised travel times on the multimodal network. A comprehensive network framework is first defined for the flow equilibrium problem, taking into account the interactions between auto flows and transit (bus) flows. Then, a fixed-point model with a unique solution is proposed for the equilibrium flows, which can be solved by a convergent cost averaging method. Finally, the proposed methodology is tested on a network example.
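The cost-averaging idea can be illustrated on a toy two-route version of the problem. Everything numeric below is invented for illustration, a logit split stands in for the paper's probit-based choice, and a BPR-style function for the congestion cost:

```python
import numpy as np

def route_cost(flow, free_flow_time, capacity):
    # BPR-style congestion cost (assumed functional form)
    return free_flow_time * (1.0 + 0.15 * (flow / capacity) ** 4)

def solve_sue(demand=100.0, theta=0.5, n_iter=200):
    """Stochastic user equilibrium on two parallel routes via cost averaging
    (method-of-successive-averages style, step size 1/k)."""
    t0 = np.array([10.0, 15.0])      # free-flow times (invented)
    cap = np.array([60.0, 80.0])     # capacities (invented)
    flows = np.full(2, demand / 2)   # start from an even split
    for k in range(1, n_iter + 1):
        costs = route_cost(flows, t0, cap)
        # logit split approximates the stochastic route choice
        w = np.exp(-theta * costs)
        aux = demand * w / w.sum()
        flows += (aux - flows) / k   # averaging step damps oscillation
    return flows, route_cost(flows, t0, cap)

flows, costs = solve_sue()
```

The 1/k step size is what makes the iteration a convergent averaging scheme: early oscillations between the two routes are progressively damped.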

  12. Systemic multimodal approach to speech therapy treatment in autistic children.

    PubMed

    Tamas, Daniela; Marković, Slavica; Milankov, Vesela

    2013-01-01

    Conditions in which speech therapy treatment is applied to autistic children are often not in accordance with the characteristic ways in which people with autism think and learn. A systemic multimodal approach means motivating autistic people to develop their speech and language skills through procedures that allow them to relive personal experiences connected to the contents presented in their natural social environment. This research was aimed at evaluating the efficiency of speech treatment based on the systemic multimodal approach to working with autistic children. The study sample consisted of 34 children, aged from 8 to 16 years, diagnosed with different autistic disorders, whose results showed a moderate to severe clinical picture of autism on the Childhood Autism Rating Scale. The instruments applied for the evaluation of ability were the Childhood Autism Rating Scale and the Ganzberg II test. The study subjects were divided into two groups according to the type of treatment: children who received continuing treatment with the systemic multimodal approach, and children who received classical speech treatment. It is shown that the systemic multimodal approach to teaching autistic children stimulates communication, socialization, self-care and work, and that the progress achieved in these areas of functioning was retained over the long term. By applying the systemic multimodal approach with autistic children and comparing their achievements on tests applied before, during and after its application, it was concluded that a certain improvement was achieved in functionality within the diagnosed category. The results point to a possible direction for creating new methods, plans and programs for working with autistic children based on empirical and interactive learning.

  13. Using Noninvasive Wearable Computers to Recognize Human Emotions from Physiological Signals

    NASA Astrophysics Data System (ADS)

    Lisetti, Christine Lætitia; Nasoz, Fatma

    2004-12-01

    We discuss the strong relationship between affect and cognition and the importance of emotions in multimodal human computer interaction (HCI) and user modeling. We introduce the overall paradigm for our multimodal system that aims at recognizing its users' emotions and at responding to them accordingly depending upon the current context or application. We then describe the design of the emotion elicitation experiment we conducted by collecting, via wearable computers, physiological signals from the autonomic nervous system (galvanic skin response, heart rate, temperature) and mapping them to certain emotions (sadness, anger, fear, surprise, frustration, and amusement). We show the results of three different supervised learning algorithms that categorize these collected signals in terms of emotions, and generalize their learning to recognize emotions from new collections of signals. We finally discuss possible broader impact and potential applications of emotion recognition for multimodal intelligent systems.
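As a toy illustration of the classification setup described above (the feature prototypes and the 1-nearest-neighbour learner are our assumptions; the study compares several more capable supervised algorithms on real wearable data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mapping of three autonomic features (GSR, heart rate,
# skin temperature) to emotion labels; prototype values are invented.
EMOTIONS = ["sadness", "anger", "fear"]

def make_samples(n_per_class=30):
    centers = np.array([[2.0, 65.0, 33.0],    # "sadness" prototype (assumed)
                        [8.0, 95.0, 35.0],    # "anger"
                        [7.0, 110.0, 31.0]])  # "fear"
    X = np.vstack([c + rng.normal(0, 0.8, (n_per_class, 3)) for c in centers])
    y = np.repeat(np.arange(3), n_per_class)
    return X, y

def knn_predict(X_train, y_train, x):
    # 1-nearest neighbour on raw (unscaled) features, for simplicity
    d = np.linalg.norm(X_train - x, axis=1)
    return int(y_train[np.argmin(d)])

X, y = make_samples()
pred = knn_predict(X, y, np.array([2.1, 66.0, 33.2]))  # query near "sadness"
```

In practice the features would be normalized so that heart rate does not dominate the distance; the raw scales are kept here only to keep the sketch short.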

  14. Fostering Students' Science Inquiry through App Affordances of Multimodality, Collaboration, Interactivity, and Connectivity

    ERIC Educational Resources Information Center

    Beach, Richard; O'Brien, David

    2015-01-01

    This study examined 6th graders' use of the VoiceThread app as part of a science inquiry project on photosynthesis and carbon dioxide emissions in terms of their ability to engage in causal reasoning and their use of the affordances of multimodality, collaboration, interactivity, and connectivity. Students employed multimodal production using…

  15. EEG Classification for Hybrid Brain-Computer Interface Using a Tensor Based Multiclass Multimodal Analysis Scheme

    PubMed Central

    Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang

    2016-01-01

    Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of change in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady-state visual evoked potentials (SSVEP), or P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. In this case, however, EEG data have typically been divided into groups and analyzed by separate processing procedures, so the interactive effects are ignored when different types of BCI tasks are executed simultaneously. In this work, we propose an improved tensor-based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are represented as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and support vector machines (SVM) are extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also properly capture the interactive effects of simultaneous tasks. Therefore, it has great potential for use in hybrid BCI. PMID:26880873

  16. EEG Classification for Hybrid Brain-Computer Interface Using a Tensor Based Multiclass Multimodal Analysis Scheme.

    PubMed

    Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang

    2016-01-01

    Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of change in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady-state visual evoked potentials (SSVEP), or P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. In this case, however, EEG data have typically been divided into groups and analyzed by separate processing procedures, so the interactive effects are ignored when different types of BCI tasks are executed simultaneously. In this work, we propose an improved tensor-based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are represented as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and support vector machines (SVM) are extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also properly capture the interactive effects of simultaneous tasks. Therefore, it has great potential for use in hybrid BCI.
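The tensor viewpoint can be sketched in a few lines. Here a channels × frequencies × trials array is approximated by a single rank-one component via higher-order power iteration; this is a textbook building block, a simplification of (not a substitute for) the paper's nonredundant decomposition model:

```python
import numpy as np

def rank_one(T, n_iter=50):
    """Best rank-one approximation of a 3-way tensor by alternating
    (higher-order power) iteration; returns scale and unit factors."""
    a = np.ones(T.shape[0]); b = np.ones(T.shape[1]); c = np.ones(T.shape[2])
    for _ in range(n_iter):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)
    return lam, a, b, c

# Synthetic "EEG" tensor: 8 channels x 16 frequencies x 40 trials,
# built from a known rank-one structure plus small noise
rng = np.random.default_rng(1)
a0 = rng.normal(size=8); b0 = rng.normal(size=16); c0 = rng.normal(size=40)
T = np.einsum('i,j,k->ijk', a0, b0, c0) + 0.01 * rng.normal(size=(8, 16, 40))
lam, a, b, c = rank_one(T)
```

With near-rank-one data the recovered component reconstructs the tensor to within the noise level, which is the property the decomposition-based feature extraction relies on.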

  17. Adolescents' Relational Schemas and Their Subjective Understanding of Romantic Relationship Interactions

    ERIC Educational Resources Information Center

    Smith, Justin D.; Welsh, Deborah P.; Fite, Paula J.

    2010-01-01

    This study examines the association between adolescents' relational schemas and their subjective understanding of interactions in the context of male-female romantic relationships. We employed an innovative multimodal methodology: the video-recall system [Welsh, D. P., & Dickson, J. W. (2005). Video-recall procedures for examining subjective…

  18. Embodied conversational agents for multimodal automated social skills training in people with autism spectrum disorders.

    PubMed

    Tanaka, Hiroki; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi

    2017-01-01

    Social skills training, performed by human trainers, is a well-established method for obtaining appropriate skills in social interaction. Previous work automated the process of social skills training by developing a dialogue system that teaches social communication skills through interaction with a computer avatar. Even though previous work that simulated social skills training only considered acoustic and linguistic information, human social skills trainers take into account visual and other non-verbal features. In this paper, we create and evaluate a social skills training system that closes this gap by considering the audiovisual features of the smiling ratio and the head pose (yaw and pitch). In addition, the previous system was only tested with graduate students; in this paper, we applied our system to children or young adults with autism spectrum disorders. For our experimental evaluation, we recruited 18 members from the general population and 10 people with autism spectrum disorders and gave them our proposed multimodal system to use. An experienced human social skills trainer rated the social skills of the users. We evaluated the system's effectiveness by comparing pre- and post-training scores and identified significant improvement in their social skills using our proposed multimodal system. Computer-based social skills training is useful for people who experience social difficulties. Such a system can be used by teachers, therapists, and social skills trainers for rehabilitation and the supplemental use of human-based training anywhere and anytime.

  19. A multimodal dataset for authoring and editing multimedia content: The MAMEM project.

    PubMed

    Nikolopoulos, Spiros; Petrantonakis, Panagiotis C; Georgiadis, Kostas; Kalaganis, Fotis; Liaros, Georgios; Lazarou, Ioulietta; Adam, Katerina; Papazoglou-Chalikias, Anastasios; Chatzilari, Elisavet; Oikonomou, Vangelis P; Kumar, Chandan; Menges, Raphael; Staab, Steffen; Müller, Daniel; Sengupta, Korok; Bostantjopoulou, Sevasti; Katsarou, Zoe; Zeilig, Gabi; Plotnik, Meir; Gotlieb, Amihai; Kizoni, Racheli; Fountoukidou, Sofia; Ham, Jaap; Athanasiou, Dimitrios; Mariakaki, Agnes; Comanducci, Dario; Sabatini, Edoardo; Nistico, Walter; Plank, Markus; Kompatsiaris, Ioannis

    2017-12-01

    We present a dataset that combines multimodal biosignals and eye tracking information gathered under a human-computer interaction framework. The dataset was developed within the MAMEM project, which aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and heart rate) signals collected from 34 individuals (18 able-bodied and 16 motor-impaired). Data were collected during interaction with a specifically designed interface for web browsing and multimedia content manipulation, and during imaginary movement tasks. The presented dataset will contribute towards the development and evaluation of modern human-computer interaction systems that would foster the integration of people with severe motor impairments back into society.

  20. Multimodality and interactivity: connecting properties of serious games with educational outcomes.

    PubMed

    Ritterfeld, Ute; Shen, Cuihua; Wang, Hua; Nocera, Luciano; Wong, Wee Ling

    2009-12-01

    Serious games have become an important genre of digital media and are often acclaimed for their potential to enhance deeper learning because of their unique technological properties. Yet the discourse has largely remained at a conceptual level. For an empirical evaluation of educational games, extra effort is needed to separate intertwined and confounding factors in order to manipulate and thus attribute the outcome to one property independent of another. This study represents one of the first attempts to empirically test the educational impact of two important properties of serious games, multimodality and interactivity, through a partial 2 x 3 (interactive, noninteractive by high, moderate, low in multimodality) factorial between-participants follow-up experiment. Results indicate that both multimodality and interactivity contribute to educational outcomes individually. Implications for educational strategies and future research directions are discussed.

  1. Multimodal interaction for human-robot teams

    NASA Astrophysics Data System (ADS)

    Burke, Dustin; Schurr, Nathan; Ayers, Jeanine; Rousseau, Jeff; Fertitta, John; Carlin, Alan; Dumond, Danielle

    2013-05-01

    Unmanned ground vehicles have the potential for supporting small dismounted teams in mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent surveillance capability. In order for such autonomous systems to integrate with the team, we must move beyond current interaction methods using heads-down teleoperation which require intensive human attention and affect the human operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the most suitable interaction method given the situational demands. For instance, the human can silently use arm and hand gestures for commanding a team of robots when it is important to maintain stealth. The tablet interface provides an overhead situational map allowing waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion sensing hardware either worn comfortably beneath the operator's clothing or integrated within their uniform, our non-vision-based approach enables an accurate, continuous gesture recognition capability without line-of-sight constraints. To reduce the training necessary to operate the system, we designed the interactions around familiar arm and hand gestures.

  2. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction.

    PubMed

    Xu, Tian Linger; Zhang, Hui; Yu, Chen

    2016-05-01

    We focus on a fundamental looking behavior in human-robot interactions - gazing at each other's face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user's face as a response to the human's gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot's gaze toward the human partner's face in real time and then analyzed the human's gaze behavior as a response to the robot's gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot's face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained.
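The gaze contingency described above can be caricatured as a tiny policy: reciprocate face looks, otherwise follow the partner's attention. This is our schematic reading of the manipulation, not the study's implementation:

```python
def robot_gaze_policy(human_gaze_stream):
    """Yield the robot's gaze target for each momentary human gaze sample.

    Hypothetical target labels: "robot_face" means the human is looking
    at the robot's face; anything else is an object in the shared scene.
    """
    for target in human_gaze_stream:
        if target == "robot_face":
            yield "human_face"   # reciprocate: creates mutual gaze / eye contact
        else:
            yield target         # follow the human's attention (joint attention)

stream = ["object_A", "robot_face", "robot_face", "object_B"]
responses = list(robot_gaze_policy(stream))
# -> ['object_A', 'human_face', 'human_face', 'object_B']
```

Varying how often the first branch fires is, in effect, the experimental manipulation of "more or fewer face looks from the robot".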

  3. Embodied conversational agents for multimodal automated social skills training in people with autism spectrum disorders

    PubMed Central

    Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi

    2017-01-01

    Social skills training, performed by human trainers, is a well-established method for obtaining appropriate skills in social interaction. Previous work automated the process of social skills training by developing a dialogue system that teaches social communication skills through interaction with a computer avatar. Even though previous work that simulated social skills training only considered acoustic and linguistic information, human social skills trainers take into account visual and other non-verbal features. In this paper, we create and evaluate a social skills training system that closes this gap by considering the audiovisual features of the smiling ratio and the head pose (yaw and pitch). In addition, the previous system was only tested with graduate students; in this paper, we applied our system to children or young adults with autism spectrum disorders. For our experimental evaluation, we recruited 18 members from the general population and 10 people with autism spectrum disorders and gave them our proposed multimodal system to use. An experienced human social skills trainer rated the social skills of the users. We evaluated the system’s effectiveness by comparing pre- and post-training scores and identified significant improvement in their social skills using our proposed multimodal system. Computer-based social skills training is useful for people who experience social difficulties. Such a system can be used by teachers, therapists, and social skills trainers for rehabilitation and the supplemental use of human-based training anywhere and anytime. PMID:28796781

  4. Virtual workstation - A multimodal, stereoscopic display environment

    NASA Astrophysics Data System (ADS)

    Fisher, S. S.; McGreevy, M.; Humphries, J.; Robinett, W.

    1987-01-01

    A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use in a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.

  5. Multimodal Strategies Allowing Corrective Feedback to Be Softened during Webconferencing-Supported Interactions

    ERIC Educational Resources Information Center

    Wigham, Ciara R.; Vidal, Julie

    2016-01-01

    This paper focuses on corrective feedback and examines how trainee-teachers use different semiotic resources to soften feedback sequences during synchronous online interactions. The ISMAEL corpus of webconferencing-supported L2 interactions in French provided data for this qualitative study. Using multimodal transcriptions, the analysis describes…

  6. Tunable-Range, Photon-Mediated Atomic Interactions in Multimode Cavity QED

    NASA Astrophysics Data System (ADS)

    Vaidya, Varun D.; Guo, Yudan; Kroeze, Ronen M.; Ballantine, Kyle E.; Kollár, Alicia J.; Keeling, Jonathan; Lev, Benjamin L.

    2018-01-01

    Optical cavity QED provides a platform with which to explore quantum many-body physics in driven-dissipative systems. Single-mode cavities provide strong, infinite-range photon-mediated interactions among intracavity atoms. However, these global all-to-all couplings are limiting from the perspective of exploring quantum many-body physics beyond the mean-field approximation. The present work demonstrates that local couplings can be created using multimode cavity QED. This is established through measurements of the threshold of a superradiant, self-organization phase transition versus atomic position. Specifically, we experimentally show that the interference of near-degenerate cavity modes leads to both a strong and tunable-range interaction between Bose-Einstein condensates (BECs) trapped within the cavity. We exploit the symmetry of a confocal cavity to measure the interaction between real BECs and their virtual images without unwanted contributions arising from the merger of real BECs. Atom-atom coupling may be tuned from short range to long range. This capability paves the way toward future explorations of exotic, strongly correlated systems such as quantum liquid crystals and driven-dissipative spin glasses.

  7. Good Student/Bad Student: Situated Identities in the Figured Worlds of School and Creative Multimodal Production

    ERIC Educational Resources Information Center

    Jocius, Robin

    2017-01-01

    This study situates young adolescents' multimodal composing practices within two figured worlds--school and creative multimodal production. In a microanalysis of two focal students' multimodal processes and products, I trace how pedagogical, interactional, and semiotic resources both reified and challenged students' developing identities as…

  8. Construction of Multi-Mode Affective Learning System: Taking Affective Design as an Example

    ERIC Educational Resources Information Center

    Lin, Hao-Chiang Koong; Su, Sheng-Hsiung; Chao, Ching-Ju; Hsieh, Cheng-Yen; Tsai, Shang-Chin

    2016-01-01

    This study aims to design a non-simultaneous distance instruction system with affective computing, which integrates interactive agent technology with the curricular instruction of affective design. The research subjects were 78 students, and prototype assessment and final assessment were adopted to assess the interface and usability of the system.…

  9. A hardware and software architecture to deal with multimodal and collaborative interactions in multiuser virtual reality environments

    NASA Astrophysics Data System (ADS)

    Martin, P.; Tseu, A.; Férey, N.; Touraine, D.; Bourdot, P.

    2014-02-01

    Most advanced immersive devices provide a collaborative environment within which several users have their own head-tracked stereoscopic point of view. Combined with commonly used interactive features such as voice and gesture recognition, 3D mice, haptic feedback, and spatialized audio rendering, these environments should faithfully reproduce a real context. However, even though many studies have been carried out on multimodal systems, we are far from definitively solving the issue of multimodal fusion, which consists in merging multimodal events coming from users and devices into interpretable commands performed by the application. Multimodality and collaboration have often been studied separately, despite the fact that these two aspects share interesting similarities. We discuss how we address this problem through the design and implementation of a supervisor able to deal with both multimodal fusion and collaborative aspects. The aim of this supervisor is to merge users' input from virtual reality devices in order to control immersive multi-user applications. We approach this problem from a practical point of view, because the main requirements of the supervisor were defined by an industrial task proposed by our automotive partner that has to be performed with multimodal and collaborative interactions in a co-located multi-user environment. In this task, two co-located workers on a virtual assembly chain have to cooperate to insert a seat into the bodywork of a car, using haptic devices to feel collisions and manipulate objects, and combining speech recognition and two-handed gesture recognition as multimodal instructions. Besides the architectural aspects of this supervisor, we describe how we ensure the modularity of our solution so that it can apply to different virtual reality platforms, interactive contexts, and virtual contents.
    A virtual context observer included in this supervisor was specifically designed to be independent of the content of the virtual scene of the targeted application, and is used to report high-level interactive and collaborative events. This context observer allows the supervisor to merge these interactive and collaborative events, but it is also used to deal with new issues arising from our observation of two co-located users performing this assembly task in an immersive device. We highlight the fact that when speech recognition is provided to both users, it is necessary to detect automatically, according to the interactive context, whether a vocal instruction must be translated into a command to be performed by the machine, or whether it is part of the natural communication necessary for collaboration. Information from the context observer indicating that a user is looking at their collaborator is important for detecting whether the user is talking to their partner. Moreover, as the users are physically co-located and head tracking is used to provide high-fidelity stereoscopic rendering and natural walking navigation in the virtual scene, we have to deal with collisions and screen occlusion between the co-located users in the physical workspace. The working area and focus of each user, computed and reported by the context observer, are necessary to prevent or avoid these situations.
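A minimal sketch of the context-aware fusion logic described above. The event model and API are hypothetical; only the idea of gating speech on the "looking at partner" context comes from the text:

```python
from dataclasses import dataclass

@dataclass
class Event:
    user: str
    modality: str   # "speech" or "gesture" (assumed event types)
    payload: str

def fuse(events, looking_at_partner):
    """Merge multimodal events into commands, dropping speech that the
    context observer attributes to human-human communication."""
    commands = []
    for e in events:
        if e.modality == "speech" and looking_at_partner.get(e.user, False):
            continue  # user is addressing their collaborator, not the machine
        commands.append((e.user, e.payload))
    return commands

events = [Event("u1", "speech", "insert seat"),
          Event("u1", "gesture", "point_left"),
          Event("u2", "speech", "hold it there")]
commands = fuse(events, {"u1": False, "u2": True})
# -> [('u1', 'insert seat'), ('u1', 'point_left')]
```

A real supervisor would also align events in time and across modalities before interpretation; the gating shown here is only the collaborative-context part.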

  10. Extrinsic Embryonic Sensory Stimulation Alters Multimodal Behavior and Cellular Activation

    PubMed Central

    Markham, Rebecca G.; Shimizu, Toru; Lickliter, Robert

    2009-01-01

    Embryonic vision is generated and maintained by spontaneous neuronal activation patterns, yet extrinsic stimulation also sculpts sensory development. Because the sensory and motor systems are interconnected in embryogenesis, how extrinsic sensory activation guides multimodal differentiation is an important topic. Further, it is unknown whether extrinsic stimulation experienced near sensory sensitivity onset contributes to persistent brain changes, ultimately affecting postnatal behavior. To determine the effects of extrinsic stimulation on multimodal development, we delivered auditory stimulation to bobwhite quail groups during early, middle, or late embryogenesis, and then tested postnatal behavioral responsiveness to auditory or visual cues. Auditory preference tendencies were more consistently toward the conspecific stimulus for animals stimulated during late embryogenesis. Groups stimulated during middle or late embryogenesis showed altered postnatal species-typical visual responsiveness, demonstrating a persistent multimodal effect. We also examined whether auditory-related brain regions are receptive to extrinsic input during middle embryogenesis by measuring postnatal cellular activation. Stimulated birds showed a greater number of ZENK-immunopositive cells per unit volume of brain tissue in deep optic tectum, a midbrain region strongly implicated in multimodal function. We observed similar results in the medial and caudomedial nidopallia in the telencephalon. There were no ZENK differences between groups in inferior colliculus or in caudolateral nidopallium, avian analog to prefrontal cortex. To our knowledge, these are the first results linking extrinsic stimulation delivered so early in embryogenesis to changes in postnatal multimodal behavior and cellular activation. The potential role of competitive interactions between the sensory and motor systems is discussed. PMID:18777564

  11. Transition from Propagating Polariton Solitons to a Standing Wave Condensate Induced by Interactions

    NASA Astrophysics Data System (ADS)

    Sich, M.; Chana, J. K.; Egorov, O. A.; Sigurdsson, H.; Shelykh, I. A.; Skryabin, D. V.; Walker, P. M.; Clarke, E.; Royall, B.; Skolnick, M. S.; Krizhanovskii, D. N.

    2018-04-01

    We explore phase transitions of polariton wave packets, first, to a soliton and then to a standing wave polariton condensate in a multimode microwire system, mediated by nonlinear polariton interactions. At low excitation density, we observe ballistic propagation of the multimode polariton wave packets arising from the interference between different transverse modes. With increasing excitation density, the wave packets transform into single-mode bright solitons due to effects of both intermodal and intramodal polariton-polariton scattering. Further increase of the excitation density increases thermalization speed, leading to relaxation of the polariton density from a solitonic spectrum distribution in momentum space down to low momenta, with the resultant formation of a nonequilibrium condensate manifested by a standing wave pattern across the whole sample.

  12. Transition from Propagating Polariton Solitons to a Standing Wave Condensate Induced by Interactions.

    PubMed

    Sich, M; Chana, J K; Egorov, O A; Sigurdsson, H; Shelykh, I A; Skryabin, D V; Walker, P M; Clarke, E; Royall, B; Skolnick, M S; Krizhanovskii, D N

    2018-04-20

    We explore phase transitions of polariton wave packets, first, to a soliton and then to a standing wave polariton condensate in a multimode microwire system, mediated by nonlinear polariton interactions. At low excitation density, we observe ballistic propagation of the multimode polariton wave packets arising from the interference between different transverse modes. With increasing excitation density, the wave packets transform into single-mode bright solitons due to effects of both intermodal and intramodal polariton-polariton scattering. Further increase of the excitation density increases thermalization speed, leading to relaxation of the polariton density from a solitonic spectrum distribution in momentum space down to low momenta, with the resultant formation of a nonequilibrium condensate manifested by a standing wave pattern across the whole sample.

  13. Scaffolding Interaction in Parent-Child Dyads: Multimodal Analysis of Parental Scaffolding with Task and Non-Task Oriented Children

    ERIC Educational Resources Information Center

    Salonen, Pekka; Lepola, Janne; Vauras, Marja

    2007-01-01

    In this exploratory study we conceptualized and explored socio-cognitive, emotional and motivational regulatory processes displayed in scaffolding interaction between parents and their non-task and task-oriented children. Based on the dynamic systems view and findings from developmental research, we assumed that parents with non-task oriented and…

  14. Multi-modal gesture recognition using integrated model of motion, audio and video

    NASA Astrophysics Data System (ADS)

    Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko

    2015-07-01

    Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With the increasing development of motion sensors, multiple data sources have become available, leading to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. In order to solve this problem, a novel approach which integrates motion, audio and video models is proposed, using a dataset captured by Kinect. The proposed system recognizes observed gestures using the three models: their individual recognition results are integrated by the proposed framework, and the combined output becomes the final result. The motion and audio models are learned using Hidden Markov Models, and a Random Forest classifier is used to learn the video model. In experiments testing the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All experiments are conducted on the dataset provided by the organizer of MMGRC, a workshop for the Multi-Modal Gesture Recognition Challenge. The comparison results show that the multi-modal model composed of the three models scores the highest recognition rate. This improvement means that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides application technology for understanding human actions of daily life more precisely.
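
The integration step described above is a form of late (score-level) fusion. The following is a minimal sketch of that idea, not the authors' code: the function names, class counts, and weights are invented for illustration, and each unimodal model is reduced to a per-class score vector.

```python
import numpy as np

# Hypothetical late-fusion sketch: each unimodal classifier emits a per-class
# score vector, and the final label comes from their weighted combination.

def fuse_scores(motion, audio, video, weights=(0.4, 0.3, 0.3)):
    """Combine per-class score vectors from three unimodal models."""
    stacked = np.vstack([motion, audio, video])   # shape (3, n_classes)
    w = np.asarray(weights).reshape(-1, 1)
    return (w * stacked).sum(axis=0)              # fused score per class

# Toy example: three gesture classes, one score vector per modality.
motion = np.array([0.7, 0.2, 0.1])   # e.g. HMM likelihoods over skeleton data
audio  = np.array([0.3, 0.4, 0.3])   # e.g. HMM likelihoods over audio
video  = np.array([0.6, 0.3, 0.1])   # e.g. Random Forest class probabilities

fused = fuse_scores(motion, audio, video)
predicted_class = int(np.argmax(fused))
```

A class that is ambiguous in one modality (here, class 0 versus class 1 in audio) can still be resolved when the other modalities agree, which is the complementary relationship the abstract refers to.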

  15. Creative Multimodal Learning Environments and Blended Interaction for Problem-Based Activity in HCI Education

    ERIC Educational Resources Information Center

    Ioannou, Andri; Vasiliou, Christina; Zaphiris, Panayiotis; Arh, Tanja; Klobucar, Tomaž; Pipan, Matija

    2015-01-01

    This exploratory case study aims to examine how students benefit from a multimodal learning environment while they engage in collaborative problem-based activity in a Human Computer Interaction (HCI) university course. For 12 weeks, 30 students, in groups of 5-7 each, participated in weekly face-to-face meetings and online interactions.…

  16. Reconceptualising Understandings of Texts, Readers and Contexts: One English Teacher's Response to Using Multimodal Texts and Interactive Whiteboards

    ERIC Educational Resources Information Center

    Kitson, Lisbeth

    2011-01-01

    The comprehension of multimodal texts is now a key concern with the release of the Australian National Curriculum for English (ACARA, 2010). However, the nature of multimodal texts, the diversity of readers in classrooms, and the complex technological environments through which multimodal texts are mediated, requires English teachers to reconsider…

  17. Human likeness: cognitive and affective factors affecting adoption of robot-assisted learning systems

    NASA Astrophysics Data System (ADS)

    Yoo, Hosun; Kwon, Ohbyung; Lee, Namyeon

    2016-07-01

    With advances in robot technology, interest in robotic e-learning systems has increased. In some laboratories, experiments are being conducted with humanoid robots as artificial tutors because of their likeness to humans, the rich possibilities of using this type of media, and the multimodal interaction capabilities of these robots. The robot-assisted learning system, a special type of e-learning system, aims to increase the learner's concentration, pleasure, and learning performance dramatically. However, very few empirical studies have examined the effect on learning performance of incorporating humanoid robot technology into e-learning systems or people's willingness to accept or adopt robot-assisted learning systems. In particular, human likeness, the essential characteristic of humanoid robots as compared with conventional e-learning systems, has not been discussed in a theoretical context. Hence, the purpose of this study is to propose a theoretical model to explain the process of adoption of robot-assisted learning systems. In the proposed model, human likeness is conceptualized as a combination of media richness, multimodal interaction capabilities, and para-social relationships; these factors are considered as possible determinants of the degree to which human cognition and affection are related to the adoption of robot-assisted learning systems.

  18. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction

    PubMed Central

    XU, TIAN (LINGER); ZHANG, HUI; YU, CHEN

    2016-01-01

    We focus on a fundamental looking behavior in human-robot interactions – gazing at each other’s face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user’s face as a response to the human’s gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot’s gaze toward the human partner’s face in real time and then analyzed the human’s gaze behavior as a response to the robot’s gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot’s face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained. PMID:28966875

  19. Veterans Health Administration's Disaster Emergency Medical Personnel System (DEMPS) Training Evaluation: Potential Implications for Disaster Health Care Volunteers.

    PubMed

    Schmitz, Susan; Radcliff, Tiffany A; Chu, Karen; Smith, Robert E; Dobalian, Aram

    2018-02-20

    The US Veterans Health Administration's Disaster Emergency Medical Personnel System (DEMPS) is a team of employee disaster response volunteers who provide clinical and non-clinical staffing assistance when local systems are overwhelmed. This study evaluated attitudes toward and recommendations for the DEMPS program to understand the impact of multi-modal training on volunteer perceptions. DEMPS volunteers completed an electronic survey in 2012 (n=2120). Three training modes were evaluated: online, field exercise, and face-to-face. Measures included "Training Satisfaction," "Attitudes about Training," and "Continued Engagement in DEMPS." Data were analyzed using χ2 and logistic regression. Open-ended questions were evaluated in a manner consistent with grounded theory methodology. Most respondents participated in DEMPS training (80%). Volunteers with multi-modal training who completed all 3 modes (14%) were significantly more likely to have positive attitudes about training, to plan to continue as volunteers, and to recommend DEMPS to others (P-value<0.001). Some respondents requested additional interactive activities and suggested that increased availability of training may improve volunteer engagement. A blended learning environment using multi-modal training methods could enhance satisfaction and attitudes and possibly encourage continued engagement in DEMPS or similar programs. DEMPS training program modifications in 2015 expanded this blended learning approach through new interactive online learning opportunities. (Disaster Med Public Health Preparedness. 2018; page 1 of 8).

  20. Interactions of Multimodal Ligands with Proteins: Insights into Selectivity Using Molecular Dynamics Simulations.

    PubMed

    Parimal, Siddharth; Garde, Shekhar; Cramer, Steven M

    2015-07-14

    Fundamental understanding of protein-ligand interactions is important to the development of efficient bioseparations in multimodal chromatography. Here we employ molecular dynamics (MD) simulations to investigate the interactions of three different proteins--ubiquitin, cytochrome C, and α-chymotrypsinogen A, sampling a range of charge from +1e to +9e--with two multimodal chromatographic ligands containing similar chemical moieties--aromatic, carboxyl, and amide--in different structural arrangements. We use a spherical harmonic expansion to analyze ligand and individual moiety density profiles around the proteins. We find that the Capto MMC ligand, which contains an additional aliphatic group, displays stronger interactions than the Nuvia cPrime ligand with all three proteins. Studying the ligand densities at the moiety level suggests that hydrophobic interactions play a major role in determining the locations of high ligand densities. Finally, the greater structural flexibility of the Capto MMC ligand compared to that of the Nuvia cPrime ligand allows for stronger structural complementarity and enables stronger hydrophobic interactions. These subtle and not-so-subtle differences in binding affinities and modalities for multimodal ligands can result in significantly different binding behavior towards proteins, with important implications for bioprocessing.

  1. Promoting Multilingual Communicative Competence through Multimodal Academic Learning Situations

    ERIC Educational Resources Information Center

    Kyppö, Anna; Natri, Teija

    2016-01-01

    This paper presents information on the factors affecting the development of multilingual and multicultural communicative competence in interactive multimodal learning environments in an academic context. The interdisciplinary course in multilingual interaction offered at the University of Jyväskylä aims to enhance students' competence in…

  2. "Economists Who Think like Ecologists": Reframing Systems Thinking in Games for Learning

    ERIC Educational Resources Information Center

    DeVane, Ben; Durga, Shree; Squire, Kurt

    2010-01-01

    Over the past several years, educators have been exploring the potential of immersive interactive simulations, or video games for education, finding that games can support the development of disciplinary knowledge, systemic thinking, the production of complex multimodal digital artifacts, and participation in affinity spaces or sites of collective…

  3. Analyzing Multimodal Interaction within a Classroom Setting

    ERIC Educational Resources Information Center

    Moura, Heloisa

    2006-01-01

    Human interactions are multimodal in nature. From simple to complex forms of transferal of information, human beings draw on a multiplicity of communicative modes, such as intonation and gaze, to make sense of everyday experiences. Likewise, the learning process, either within traditional classrooms or Virtual Learning Environments, is shaped by…

  4. Adaptive wavefront shaping for controlling nonlinear multimode interactions in optical fibres

    NASA Astrophysics Data System (ADS)

    Tzang, Omer; Caravaca-Aguirre, Antonio M.; Wagner, Kelvin; Piestun, Rafael

    2018-06-01

    Recent progress in wavefront shaping has enabled control of light propagation inside linear media to focus and image through scattering objects. In particular, light propagation in multimode fibres comprises complex intermodal interactions and rich spatiotemporal dynamics. Control of physical phenomena in multimode fibres and its applications are in their infancy, opening opportunities to take advantage of complex nonlinear modal dynamics. Here, we demonstrate a wavefront shaping approach for controlling nonlinear phenomena in multimode fibres. Using a spatial light modulator at the fibre input, real-time spectral feedback and a genetic algorithm optimization, we control a highly nonlinear multimode stimulated Raman scattering cascade and its interplay with four-wave mixing via a flexible implicit control on the superposition of modes coupled into the fibre. We show versatile spectrum manipulations including shifts, suppression, and enhancement of Stokes and anti-Stokes peaks. These demonstrations illustrate the power of wavefront shaping to control and optimize nonlinear wave propagation.
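
The optimization loop described above (display a phase mask, score the measured spectrum, breed better masks) can be sketched as a plain genetic algorithm. This is an illustrative sketch only: the pixel count, population size, mutation rate, and the `measure_fitness` function are all stand-ins, since the real experiment scores a spectrometer reading (e.g. power in a chosen Stokes peak) for each mask displayed on the spatial light modulator.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PIXELS, POP, GENS = 64, 20, 40
TARGET = rng.uniform(0, 2 * np.pi, N_PIXELS)   # stand-in "optimal" phase mask

def measure_fitness(mask):
    # Proxy for a spectral-feedback measurement: closeness to the target mask.
    return -np.mean(1 - np.cos(mask - TARGET))

pop = rng.uniform(0, 2 * np.pi, (POP, N_PIXELS))
initial_best = max(measure_fitness(m) for m in pop)

for _ in range(GENS):
    scores = np.array([measure_fitness(m) for m in pop])
    elite = pop[np.argsort(scores)[-POP // 2:]]            # keep the best half
    parents = elite[rng.integers(0, len(elite), (POP // 2, 2))]
    cut = rng.integers(1, N_PIXELS, POP // 2)              # one-point crossover
    children = np.where(np.arange(N_PIXELS) < cut[:, None],
                        parents[:, 0], parents[:, 1])
    mutate = rng.random(children.shape) < 0.02             # sparse phase mutation
    children[mutate] = rng.uniform(0, 2 * np.pi, int(mutate.sum()))
    pop = np.vstack([elite, children])                     # elitist replacement

final_best = max(measure_fitness(m) for m in pop)
```

Because the elite half survives each generation, the best measured fitness is non-decreasing, which matters in a noisy optical feedback loop where a good mask should never be discarded.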

  5. Computer-assisted surgical planning and automation of laser delivery systems

    NASA Astrophysics Data System (ADS)

    Zamorano, Lucia J.; Dujovny, Manuel; Dong, Ada; Kadi, A. Majeed

    1991-05-01

    This paper describes a 'real time' surgical treatment planning interactive workstation, utilizing multimodality imaging (computed tomography, magnetic resonance imaging, digital angiography), that has been developed to provide the neurosurgeon with two-dimensional multiplanar and three-dimensional 'display' of a patient's lesion.

  6. Brain-computer interaction research at the Computer Vision and Multimedia Laboratory, University of Geneva.

    PubMed

    Pun, Thierry; Alecu, Teodor Iulian; Chanel, Guillaume; Kronegg, Julien; Voloshynovskiy, Sviatoslav

    2006-06-01

    This paper describes the work being conducted in the domain of brain-computer interaction (BCI) at the Multimodal Interaction Group, Computer Vision and Multimedia Laboratory, University of Geneva, Geneva, Switzerland. The application focus of this work is on multimodal interaction rather than on rehabilitation, that is, how to augment classical interaction by means of physiological measurements. Three main research topics are addressed. The first one concerns the more general problem of brain source activity recognition from EEGs. In contrast with classical deterministic approaches, we studied iterative robust stochastic reconstruction procedures that model source and noise statistics, to overcome known limitations of current techniques. We also developed procedures for optimal electroencephalogram (EEG) sensor system design in terms of placement and number of electrodes. The second topic is the study of BCI protocols and performance from an information-theoretic point of view. Various information rate measurements have been compared for assessing BCI abilities. The third research topic concerns the use of EEG and other physiological signals for assessing a user's emotional status.

  7. Adhesion of multimode adhesives to enamel and dentin after one year of water storage.

    PubMed

    Vermelho, Paulo Moreira; Reis, André Figueiredo; Ambrosano, Glaucia Maria Bovi; Giannini, Marcelo

    2017-06-01

    This study aimed to evaluate the ultramorphological characteristics of tooth-resin interfaces and the bond strength (BS) of multimode adhesive systems to enamel and dentin. Multimode adhesives (Scotchbond Universal (SBU) and All-Bond Universal) were tested in both self-etch and etch-and-rinse modes and compared to control groups (Optibond FL and Clearfil SE Bond (CSB)). Adhesives were applied to human molars and composite blocks were incrementally built up. Teeth were sectioned to obtain specimens for microtensile BS and TEM analysis. Specimens were tested after storage for either 24 h or 1 year. SEM analyses were performed to classify the failure pattern of beam specimens after BS testing. Etching increased the enamel BS of multimode adhesives; however, BS decreased after storage for 1 year. No significant differences in dentin BS were noted between multimode and control in either evaluation period. Storage for 1 year only reduced the dentin BS for SBU in self-etch mode. TEM analysis identified hybridization and interaction zones in dentin and enamel for all adhesives. Silver impregnation was detected on dentin-resin interfaces after storage of specimens for 1 year only with the SBU and CSB. Storage for 1 year reduced enamel BS when adhesives are applied on etched surface; however, BS of multimode adhesives did not differ from those of the control group. In dentin, no significant difference was noted between the multimode and control group adhesives, regardless of etching mode. In general, multimode adhesives showed similar behavior when compared to traditional adhesive techniques. Multimode adhesives are one-step self-etching adhesives that can also be used after enamel/dentin phosphoric acid etching, but each product may work better in specific conditions.

  8. Interactive multi-spectral analysis of more than one Sonrai village in Niger, West Africa

    NASA Technical Reports Server (NTRS)

    Reining, P.; Egbert, D. D.

    1975-01-01

    Use of LANDSAT data and an interactive system is considered for identifying and measuring small scale compact human settlements (villages) for demographic and anthropological studies. Because village components are not uniformly distributed within any one village, they apparently are multimodal, spectrally. Therefore, the functions of location and enumeration are kept separate. Measurement of a known village is compared with CCT response.

  9. A mass assembly of associative mechanisms: a dynamical systems account of natural social interaction.

    PubMed

    Duran, Nicholas D; Dale, Rick; Richardson, Daniel C

    2014-04-01

    The target article offers a negative, eliminativist thesis, dissolving the specialness of mirroring processes into a solution of associative mechanisms. We support the authors' project enthusiastically. What they are currently missing, we argue, is a positive, generative thesis about associative learning mechanisms and how they might give way to the complex, multimodal coordination that naturally arises in social interaction.

  10. Multimodal Interaction on English Testing Academic Assessment

    ERIC Educational Resources Information Center

    Magal-Royo, T.; Gimenez-Lopez, J. L.; Garcia Laborda, Jesus

    2012-01-01

    Multimodal interaction methods applied to learning environments of the English language will be a line for future research from the use of adapted mobile phones or PDAs. Today's mobile devices allow access and data entry in a synchronized manner through different channels. At the academic level we made the first analysis of English language…

  11. Meaning-Making in Online Language Learner Interactions via Desktop Videoconferencing

    ERIC Educational Resources Information Center

    Satar, H. Müge

    2016-01-01

    Online language learning and teaching in multimodal contexts has been identified as one of the key research areas in computer-assisted language learning (CALL) (Lamy, 2013; White, 2014). This paper aims to explore meaning-making in online language learner interactions via desktop videoconferencing (DVC) and in doing so illustrate multimodal transcription and…

  12. Contradictory Explorative Assessment. Multimodal Teacher/Student Interaction in Scandinavian Digital Learning Environments

    ERIC Educational Resources Information Center

    Kjällander, Susanne

    2018-01-01

    Assessment in the much-discussed digital divide in Scandinavian technologically advanced schools, is the study object of this article. Interaction is studied to understand assessment; and to see how assessment can be didactically designed to recognise students' learning. With a multimodal, design theoretical perspective on learning teachers' and…

  13. Multimodal Research: Addressing the Complexity of Multimodal Environments and the Challenges for CALL

    ERIC Educational Resources Information Center

    Tan, Sabine; O'Halloran, Kay L.; Wignell, Peter

    2016-01-01

    Multimodality, the study of the interaction of language with other semiotic resources such as images and sound resources, has significant implications for computer assisted language learning (CALL) with regards to understanding the impact of digital environments on language teaching and learning. In this paper, we explore recent manifestations of…

  14. Modeling and analyzing the impact of advanced technologies on livability and multimodal transportation performance measures in arterial corridors : phase 2.

    DOT National Transportation Integrated Search

    2017-03-01

    Transportation corridors are complex systems. Tradeoffs, particularly in terms of traffic mobility, transit performance, accessibility and pedestrian interactions, are not well understood. When the focus is on motorized vehicle mobility and through...

  15. Offspring Generation Method for interactive Genetic Algorithm considering Multimodal Preference

    NASA Astrophysics Data System (ADS)

    Ito, Fuyuko; Hiroyasu, Tomoyuki; Miki, Mitsunori; Yokouchi, Hisatake

    In interactive genetic algorithms (iGAs), computer simulations prepare design candidates that are then evaluated by the user, so an iGA can learn to predict a user's preferences. Conventional iGA problems involve a search for a single optimum solution, and iGAs were developed to find this single optimum. Our target problems, in contrast, have several peaks in a function with only small differences among the peaks, and for such problems it is better to show all the peaks to the user. Product recommendation on web shopping sites is one example: several types of preference trend should be prepared for the users of a shopping site. Exploitation and exploration are important mechanisms in a GA search, and to perform effective exploitation, the offspring generation method (crossover) is very important. Here, we introduce a new offspring generation method for iGAs on multimodal problems. In the proposed method, individuals are clustered into subgroups and offspring are generated within each group. The proposed method was applied to an experimental iGA system, in which users decide on preferable t-shirts to buy, to examine its effectiveness. The results of the subjective experiment confirmed that the proposed method enables offspring generation that takes multimodal preferences into account, and the proposed mechanism was also shown not to adversely affect the performance of preference prediction.
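
The cluster-then-crossover idea can be sketched as follows. This is a hedged illustration, not the paper's implementation: the cluster count, the use of a few k-means iterations as the clustering step, and the binary genotype encoding are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def cluster(pop, k=2, iters=10):
    """A few k-means iterations over genotypes (squared-distance assignment)."""
    centers = pop[rng.choice(len(pop), k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((pop[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pop[labels == j].mean(axis=0)
    return labels

def offspring(pop, labels):
    """One-point crossover restricted to parents from the same subgroup."""
    children = []
    for j in np.unique(labels):
        group = pop[labels == j]
        for _ in range(len(group)):
            a, b = group[rng.integers(0, len(group), 2)]
            cut = rng.integers(1, pop.shape[1])
            children.append(np.concatenate([a[:cut], b[cut:]]))
    return np.array(children)

# Toy binary genotypes with two "preference peaks".
peak_a = rng.integers(0, 2, (8, 10)); peak_a[:, :5] = 1
peak_b = rng.integers(0, 2, (8, 10)); peak_b[:, 5:] = 1
pop = np.vstack([peak_a, peak_b])

kids = offspring(pop, cluster(pop))
```

Restricting crossover to within-cluster pairs keeps each preference peak breeding its own offspring, instead of producing blends that sit between peaks and satisfy no one.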

  16. Using the Interactive Whiteboard to Resource Continuity and Support Multimodal Teaching in a Primary Science Classroom

    ERIC Educational Resources Information Center

    Gillen, J.; Littleton, K.; Twiner, A.; Staarman, J. K.; Mercer, N.

    2008-01-01

    All communication is inherently multimodal, and understandings of science need to be multidimensional. The interactive whiteboard offers a range of potential benefits to the primary science classroom in terms of relative ease of integration of a number of presentational and ICT functions, which, taken together, offers new opportunities for…

  17. Multimodality: a basis for augmentative and alternative communication--psycholinguistic, cognitive, and clinical/educational aspects.

    PubMed

    Loncke, Filip T; Campbell, Jamie; England, Amanda M; Haley, Tanya

    2006-02-15

    Message generation is a complex process involving a number of subprocesses, including the selection of which modes to use. When expressing a message, human communicators typically use a combination of modes, a phenomenon often termed multimodality. This article explores models of multimodality as an explanatory framework for augmentative and alternative communication (AAC). Multimodality is analysed from communication, psycholinguistic, and cognitive perspectives. Theoretical and applied topics within AAC can be explained or described within the multimodality framework, considering iconicity, simultaneous communication, lexical organization, and compatibility of communication modes. Consideration of multimodality is critical to understanding underlying processes both in individuals who use AAC and in individuals who interact with them.

  18. Multimodal interactions in typically and atypically developing children: natural versus artificial environments.

    PubMed

    Giannopulu, Irini

    2013-11-01

    This review addresses the central role played by multimodal interactions in neurocognitive development. We first analyzed our studies of multimodal verbal and nonverbal cognition and emotional interactions within neural, that is, natural environments in typically developing children. We then related them to the topic of creating artificial environments, using mobile toy robots to neurorehabilitate severely autistic children. In doing so, we consider both neural/natural and artificial environments as the basis of neuronal organization and reorganization. The common thread underlying this approach is the brain's intrinsic properties: neuroplasticity and the fact that the brain is neurodynamic. In our approach, neural organization and reorganization using natural or artificial environments aspires to bring computational perspectives into cognitive developmental neuroscience.

  19. Improving manual skills in persons with disabilities (PWD) through a multimodal assistance system.

    PubMed

    Covarrubias, Mario; Gatti, Elia; Bordegoni, Monica; Cugini, Umberto; Mansutti, Alessandro

    2014-07-01

    In this research work, we present a Multimodal Guidance System (MGS) whose aim is to provide dynamic assistance to persons with disabilities (PWD) while performing manual activities such as drawing, coloring-in and foam-cutting tasks. The MGS provides robotic assistance in the execution of 2D tasks through haptic and sound interactions. Haptic technology provides the virtual path of 2D shapes through a point-based approach, while sound technology provides audio feedback related to the hand's velocity during sketching, filling, or cutting operations. By combining this multimodal system with haptic assistance, we have created a new approach with possible applications in fields as diverse as physical rehabilitation, scientific investigation of sensorimotor learning, and assessment of hand movements in PWD. The MGS has been tested by people with specific disorders affecting coordination, such as Down syndrome and developmental disabilities, under the supervision of their teachers and care assistants inside their learning environment. A graphical user interface has been designed for teachers and care assistants in order to provide training during the test sessions. Our results provide conclusive evidence that using the MGS increases accuracy in the task operations. Several studies have demonstrated that haptic guidance systems can help people recover cognitive function at different levels of complexity and impairment. The applications supported by our device could also play an important role in supporting physical therapists and cognitive psychologists in helping patients to recover motor and visuo-spatial abilities.

  20. Multimodal Pressure-Flow Analysis: Application of Hilbert Huang Transform in Cerebral Blood Flow Regulation

    NASA Astrophysics Data System (ADS)

    Lo, Men-Tzung; Hu, Kun; Liu, Yanhui; Peng, C.-K.; Novak, Vera

    2008-12-01

    Quantification of nonlinear interactions between two nonstationary signals presents a computational challenge in different research fields, especially for assessments of physiological systems. Traditional approaches that are based on theories of stationary signals cannot resolve nonstationarity-related issues and, thus, cannot reliably assess nonlinear interactions in physiological systems. In this review we discuss a technique called the multimodal pressure-flow (MMPF) method, which utilizes the Hilbert-Huang transform to quantify the interaction between nonstationary cerebral blood flow velocity (BFV) and blood pressure (BP) for the assessment of dynamic cerebral autoregulation (CA). CA is an important mechanism responsible for controlling cerebral blood flow in response to fluctuations in systemic BP within a few heart-beats. The MMPF analysis adaptively decomposes BP and BFV signals into multiple empirical modes so that the fluctuations caused by a specific physiologic process can be represented in a corresponding empirical mode. Using this technique, we showed that dynamic CA can be characterized by specific phase delays between the decomposed BP and BFV oscillations, and that the phase shifts are significantly reduced in hypertensive, diabetic, and stroke subjects with impaired CA. Additionally, the technique can reliably assess CA using both induced BP/BFV oscillations during clinical tests and spontaneous BP/BFV fluctuations during resting conditions.
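
The phase-delay step of the MMPF method can be illustrated with synthetic signals. This sketch skips the empirical-mode-decomposition stage (in the full method, BP and BFV are first decomposed and one matched oscillatory mode is selected from each); here two toy sinusoids stand in for the decomposed modes, and the Hilbert transform supplies their instantaneous phases.

```python
import numpy as np
from scipy.signal import hilbert

fs = 100.0                          # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
f = 0.1                             # ~0.1 Hz oscillation, typical of CA tests

bp  = np.sin(2 * np.pi * f * t)                 # blood pressure mode (toy)
bfv = np.sin(2 * np.pi * f * t + np.pi / 4)     # flow velocity leading BP by 45 deg

# Instantaneous phase of each mode via the analytic signal.
phase_bp  = np.unwrap(np.angle(hilbert(bp)))
phase_bfv = np.unwrap(np.angle(hilbert(bfv)))

# Average BFV-BP phase shift, in degrees; impaired CA shows smaller shifts.
phase_shift_deg = float(np.degrees(np.mean(phase_bfv - phase_bp)))
```

For these synthetic modes the recovered shift is close to the imposed 45 degrees; on real recordings the shift is computed per decomposed mode, and reduced values flag impaired autoregulation.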

  1. Integrated multimodal human-computer interface and augmented reality for interactive display applications

    NASA Astrophysics Data System (ADS)

    Vassiliou, Marius S.; Sundareswaran, Venkataraman; Chen, S.; Behringer, Reinhold; Tam, Clement K.; Chan, M.; Bangayan, Phil T.; McGee, Joshua H.

    2000-08-01

    We describe new systems for improved integrated multimodal human-computer interaction and augmented reality for a diverse array of applications, including future advanced cockpits, tactical operations centers, and others. We have developed an integrated display system featuring: speech recognition of multiple concurrent users equipped with both standard air-coupled microphones and novel throat-coupled sensors (developed at Army Research Labs for increased noise immunity); lip reading for improving speech recognition accuracy in noisy environments; three-dimensional spatialized audio for improved display of warnings, alerts, and other information; wireless, coordinated handheld-PC control of a large display; real-time display of data and inferences from wireless integrated networked sensors with on-board signal processing and discrimination; gesture control with disambiguated point-and-speak capability; head- and eye-tracking coupled with speech recognition for 'look-and-speak' interaction; and integrated tetherless augmented reality on a wearable computer. The various interaction modalities (speech recognition, 3D audio, eye tracking, etc.) are implemented as 'modality servers' in an Internet-based client-server architecture. Each modality server encapsulates and exposes commercial and research software packages, presenting a socket network interface that is abstracted to a high-level interface, minimizing both vendor dependencies and required changes on the client side as the server's technology improves.
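
The 'modality server' pattern described above can be sketched as a tiny socket service. Everything concrete here is invented for the example (the JSON wire format, the `speech_engine` stand-in, the single-request lifecycle); the point is only the shape of the abstraction: the engine is hidden behind a socket, so clients see a uniform interface rather than a vendor API.

```python
import json
import socket
import threading

def speech_engine(payload):
    # Stand-in for a wrapped commercial recognizer.
    return {"modality": "speech", "transcript": payload.get("audio", "").upper()}

def serve_modality(handler, host="127.0.0.1"):
    """Expose a recognition engine behind a one-shot socket interface."""
    srv = socket.create_server((host, 0))          # ephemeral port
    port = srv.getsockname()[1]
    def loop():
        conn, _ = srv.accept()
        with conn:
            request = json.loads(conn.recv(4096).decode())
            conn.sendall(json.dumps(handler(request)).encode())
        srv.close()
    threading.Thread(target=loop, daemon=True).start()
    return port

def query(port, request):
    """Client side: the high-level interface hides the socket details."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(json.dumps(request).encode())
        return json.loads(c.recv(4096).decode())

port = serve_modality(speech_engine)
reply = query(port, {"audio": "cleared for takeoff"})
```

Swapping in a better recognizer only changes the handler behind the socket; clients keep calling the same `query` interface, which is the vendor-independence property the abstract emphasizes.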

  2. Design of a Multi-mode Flight Deck Decision Support System for Airborne Conflict Management

    NASA Technical Reports Server (NTRS)

    Barhydt, Richard; Krishnamurthy, Karthik

    2004-01-01

    NASA Langley has developed a multi-mode decision support system for pilots operating in a Distributed Air-Ground Traffic Management (DAG-TM) environment. An Autonomous Operations Planner (AOP) assists pilots in performing separation assurance functions, including conflict detection, prevention, and resolution. Ongoing AOP design has been based on a comprehensive human factors analysis and evaluation results from previous human-in-the-loop experiments with airline pilot test subjects. AOP considers complex flight mode interactions and provides flight guidance to pilots consistent with the current aircraft control state. Pilots communicate goals to AOP by setting system preferences and actively probing potential trajectories for conflicts. To minimize training requirements and improve operational use, AOP design leverages existing alerting philosophies, displays, and crew interfaces common on commercial aircraft. Future work will consider trajectory prediction uncertainties, integration with the TCAS collision avoidance system, and will incorporate enhancements based on an upcoming air-ground coordination experiment.

  3. Multiple-mode reconfigurable electro-optic switching network for optical fiber sensor array

    NASA Technical Reports Server (NTRS)

    Chen, Ray T.; Wang, Michael R.; Jannson, Tomasz; Baumbick, Robert

    1991-01-01

    This paper reports the first switching network compatible with multimode fibers. A one-to-many cascaded reconfigurable interconnection was built. A thin glass substrate was used as the guiding medium, which provides not only higher coupling efficiency from multimode fiber to waveguide but also better tolerance of phase-matching conditions. The use of a total-internal-reflection hologram and a multimode waveguide eliminates interface problems between fibers and waveguides. The DCG polymer graft has proven to be reliable from -180 C to +200 C, further ensuring survivability of such an electro-optic system in harsh environments. LiNbO3 was chosen as the E-O material because of its stability at high temperatures (phase-transition temperature of more than 1000 C) and the maturity of E-O device technology. Further theoretical calculations were conducted to provide the optimal interaction length and device capacitance.

  4. Multimodal Language Learner Interactions via Desktop Videoconferencing within a Framework of Social Presence: Gaze

    ERIC Educational Resources Information Center

    Satar, H. Muge

    2013-01-01

    Desktop videoconferencing (DVC) offers many opportunities for language learning through its multimodal features. However, it also brings some challenges such as gaze and mutual gaze, that is, eye-contact. This paper reports some of the findings of a PhD study investigating social presence in DVC interactions of English as a Foreign Language (EFL)…

  5. Multi-Modal Interaction for Robotic Mules

    DTIC Science & Technology

    2014-02-26

    Taylor, Glenn; Quist, Mike; Lanting, Matt; Dunham, Cory; Theisen, Patrick (Soar Technology, Inc.); Muench, Paul (US Army TARDEC)

  6. A multimodal interface for real-time soldier-robot teaming

    NASA Astrophysics Data System (ADS)

    Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.

    2016-05-01

    Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools toward robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with those of human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and the robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smart-phones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g., response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.
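
    A minimal sketch of the kind of speech-plus-gesture fusion such an interface performs: a deictic utterance ("move there") is resolved against a pointing gesture that arrived close enough in time. The time-window rule and all names are hypothetical simplifications, not the study's actual MMC pipeline:

```python
def fuse(speech, gesture, window=1.5):
    """Resolve a spoken command against a pointing gesture.
    speech:  {"text": ..., "t": timestamp}
    gesture: {"t": timestamp, "target": coordinates} or None"""
    if "there" in speech["text"]:
        # Deictic reference: needs a temporally aligned gesture to resolve.
        if gesture and abs(speech["t"] - gesture["t"]) <= window:
            return {"action": "move", "target": gesture["target"]}
        return None  # no usable gesture: ask the Soldier to repeat
    # Non-deictic: the target is named in the utterance itself.
    return {"action": "move", "target": speech["text"].split()[-1]}
```

    For example, `fuse({"text": "move there", "t": 10.0}, {"t": 10.4, "target": (3, 4)})` yields a grounded move command, while the same utterance with a stale gesture yields `None`. The redundancy is what makes MMC robust: either channel alone can carry the command when the other fails.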

  7. The Effects of Real-Time Interactive Multimedia Teleradiology System

    PubMed Central

    Al-Safadi, Lilac

    2016-01-01

    This study describes the design of a real-time interactive multimedia teleradiology system and assesses how the system is used by referring physicians in point-of-care situations and supports or hinders aspects of physician-radiologist interaction. We developed a real-time multimedia teleradiology management system that automates the transfer of images and radiologists' reports, and surveyed physicians to triangulate the findings and to verify the realism and results of the experiment. The web-based survey was delivered to 150 physicians from a range of specialties and was completed by 72% of them. The data showed a correlation between rich interactivity, satisfaction, and effectiveness. The results of our experiments suggest that real-time multimedia teleradiology systems are valued by referring physicians, may have the potential to enhance their practice and improve patient care, and highlight the critical role of multimedia technologies in providing real-time multimode interactivity in current medical care. PMID:27294118

  8. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data

    PubMed Central

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-01-01

    Background Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. Results We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. Conclusion MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine. PMID:17937818

  9. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data.

    PubMed

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-10-15

    Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine.

  10. A new multimodal interactive way of subjective scoring of 3D video quality of experience

    NASA Astrophysics Data System (ADS)

    Kim, Taewan; Lee, Kwanghyun; Lee, Sanghoon; Bovik, Alan C.

    2014-03-01

    People who watch today's 3D visual programs, such as 3D cinema, 3D TV, and 3D games, experience wide and dynamically varying ranges of 3D visual immersion and 3D quality of experience (QoE). It is necessary to be able to deploy reliable methodologies that measure each viewer's subjective experience. We propose a new methodology that we call Multimodal Interactive Continuous Scoring of Quality (MICSQ). MICSQ is composed of a device interaction process between the 3D display and a separate device (PC, tablet, etc.) used as an assessment tool, and a human interaction process between the subject(s) and the device. The scoring process is multimodal, using aural and tactile cues to help engage and focus the subject(s) on their tasks. Moreover, the wireless device interaction process makes it possible for multiple subjects to assess 3D QoE simultaneously in a large space such as a movie theater, and at different visual angles and distances.

  11. Dynamics in multiple-well Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Nigro, M.; Capuzzi, P.; Cataldo, H. M.; Jezek, D. M.

    2018-01-01

    We study the dynamics of three-dimensional weakly linked Bose-Einstein condensates using a multimode model with an effective interaction parameter. The system is confined by a ring-shaped four-well trapping potential. By constructing a two-mode Hamiltonian in a reduced highly symmetric phase space, we examine the periodic orbits and calculate their time periods both in the self-trapping and Josephson regimes. The dynamics in the vicinity of the reduced phase space is investigated by means of a Floquet multiplier analysis, finding regions of different linear stability and analyzing their implications on the exact dynamics. The numerical exploration in an extended region of the phase space demonstrates that two-mode tools can also be useful for performing a partition of the space in different regimes. Comparisons with Gross-Pitaevskii simulations confirm these findings and emphasize the importance of properly determining the effective on-site interaction parameter governing the multimode dynamics.
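
    For orientation, the two-mode reduction referred to above is conventionally written in the standard bosonic-Josephson-junction form below; the effective on-site interaction parameter of the paper enters through the dimensionless ratio Λ, and the authors' specific values are not reproduced here:

```latex
\dot{z} = -\sqrt{1 - z^{2}}\,\sin\phi ,\qquad
\dot{\phi} = \Lambda\, z + \frac{z}{\sqrt{1 - z^{2}}}\,\cos\phi
```

    Here z is the population imbalance between wells and φ the relative phase; small Λ gives Josephson oscillations, while large Λ produces the macroscopic self-trapping regime mentioned in the abstract.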

  12. Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification.

    PubMed

    Rajagopal, Gayathri; Palaniswamy, Ramamoorthy

    2015-01-01

    This research proposes a multimodal multifeature biometric system for human recognition using two traits: palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase recognition accuracy using feature-level fusion. The features at the feature level are raw biometric data, which contain rich information compared to decision- and matching-score-level fusion. Hence, information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to reduce the dimensionality of the high-dimensional feature sets. The proposed multimodal results were compared with other multimodal and monomodal approaches. Among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
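
    The feature-level-fusion-plus-KNN pipeline can be sketched without libraries (PCA omitted, k = 1 for brevity, all data toy values; none of this reproduces the paper's actual feature extraction):

```python
def fuse_features(palm_vec, iris_vec):
    """Feature-level fusion: concatenate the raw feature vectors of the
    two traits into a single joint representation."""
    return list(palm_vec) + list(iris_vec)

def knn_predict(train, query):
    """1-nearest-neighbour on Euclidean distance over fused vectors."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(train, key=lambda item: dist(item[0], query))[1]

# Toy enrolment gallery: (fused vector, identity)
gallery = [
    (fuse_features([0.1, 0.2], [0.9, 0.8]), "alice"),
    (fuse_features([0.7, 0.6], [0.2, 0.1]), "bob"),
]
probe = fuse_features([0.12, 0.21], [0.88, 0.79])
print(knn_predict(gallery, probe))  # → alice
```

    Fusing before classification lets the classifier exploit cross-trait correlations, which is why the abstract expects richer information here than at score or decision level.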

  13. Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification

    PubMed Central

    Rajagopal, Gayathri; Palaniswamy, Ramamoorthy

    2015-01-01

    This research proposes a multimodal multifeature biometric system for human recognition using two traits: palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase recognition accuracy using feature-level fusion. The features at the feature level are raw biometric data, which contain rich information compared to decision- and matching-score-level fusion. Hence, information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to reduce the dimensionality of the high-dimensional feature sets. The proposed multimodal results were compared with other multimodal and monomodal approaches. Among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database. PMID:26640813

  14. Interactions, Intersections and Improvisations: Studying the Multimodal Texts and Classroom Talk of Six- to Seven-Year-Olds

    ERIC Educational Resources Information Center

    Pahl, Kate

    2009-01-01

    This article examines the relationship between children's talk in the classroom and their multimodal texts. The article uses an analytic framework derived from Bourdieu's concept of habitus to examine how 6-7-year-old children's regular ways of being and doing can be found in their multimodal texts together with their talk (Bourdieu, 1977, 1990).…

  15. Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems.

    PubMed

    Snelick, Robert; Uludag, Umut; Mink, Alan; Indovina, Michael; Jain, Anil

    2005-03-01

    We examine the performance of multimodal biometric authentication systems using state-of-the-art Commercial Off-the-Shelf (COTS) fingerprint and face biometric systems on a population approaching 1,000 individuals. The majority of prior studies of multimodal biometrics have been limited to relatively low accuracy non-COTS systems and populations of a few hundred users. Our work is the first to demonstrate that multimodal fingerprint and face biometric systems can achieve significant accuracy gains over either biometric alone, even when using highly accurate COTS systems on a relatively large-scale population. In addition to examining well-known multimodal methods, we introduce new methods of normalization and fusion that further improve the accuracy.
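
    The normalization-and-fusion machinery such studies build on can be sketched as min-max normalization followed by a weighted sum rule. These are textbook baselines, not the new methods the abstract introduces; the weight is an illustrative assumption:

```python
def min_max_normalize(scores):
    """Map a modality's raw match scores onto [0, 1] so that fingerprint
    and face scores become comparable before fusion."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def sum_fusion(finger_scores, face_scores, w=0.5):
    """Weighted sum rule applied after per-modality normalization."""
    f = min_max_normalize(finger_scores)
    g = min_max_normalize(face_scores)
    return [w * a + (1 - w) * b for a, b in zip(f, g)]
```

    For example, `sum_fusion([0, 5, 10], [10, 20, 30])` returns `[0.0, 0.5, 1.0]`: each modality is first scaled to a common range, so neither score distribution dominates the fused decision.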

  16. Tunable Mode Coupling in Nanocontact Spin-Torque Oscillators

    DOE PAGES

    Zhang, Steven S. -L.; Iacocca, Ezio; Heinonen, Olle

    2017-07-27

    Recent experiments on spin-torque oscillators have revealed interactions between multiple magneto-dynamic modes, including mode coexistence, mode hopping, and temperature-driven crossover between modes. The initial multimode theory indicates that a linear coupling between several dominant modes, arising from the interaction of the subdynamic system with a magnon bath, plays an essential role in the generation of various multimode behaviors, such as mode hopping and mode coexistence. In this work, we derive a set of rate equations to describe the dynamics of coupled magneto-dynamic modes in a nanocontact spin-torque oscillator. Here, expressions for both linear and nonlinear coupling terms are obtained, which allow us to analyze the dependence of the coupled dynamic behaviors of modes on external experimental conditions as well as intrinsic magnetic properties. For a minimal two-mode system, we further map the energy and phase difference of the two modes onto a two-dimensional phase space and demonstrate in the phase portraits how the manifolds of periodic orbits and fixed points vary with an external magnetic field as well as with the temperature.

  17. Tunable Mode Coupling in Nanocontact Spin-Torque Oscillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Steven S. -L.; Iacocca, Ezio; Heinonen, Olle

    Recent experiments on spin-torque oscillators have revealed interactions between multiple magneto-dynamic modes, including mode coexistence, mode hopping, and temperature-driven crossover between modes. The initial multimode theory indicates that a linear coupling between several dominant modes, arising from the interaction of the subdynamic system with a magnon bath, plays an essential role in the generation of various multimode behaviors, such as mode hopping and mode coexistence. In this work, we derive a set of rate equations to describe the dynamics of coupled magneto-dynamic modes in a nanocontact spin-torque oscillator. Here, expressions for both linear and nonlinear coupling terms are obtained, which allow us to analyze the dependence of the coupled dynamic behaviors of modes on external experimental conditions as well as intrinsic magnetic properties. For a minimal two-mode system, we further map the energy and phase difference of the two modes onto a two-dimensional phase space and demonstrate in the phase portraits how the manifolds of periodic orbits and fixed points vary with an external magnetic field as well as with the temperature.

  18. Semantic Entity-Component State Management Techniques to Enhance Software Quality for Multimodal VR-Systems.

    PubMed

    Fischbach, Martin; Wiebusch, Dennis; Latoschik, Marc Erich

    2017-04-01

    Modularity, modifiability, reusability, and API usability are important software qualities that determine the maintainability of software architectures. Virtual, Augmented, and Mixed Reality (VR, AR, MR) systems, modern computer games, as well as interactive human-robot systems often include various dedicated input-, output-, and processing subsystems. These subsystems collectively maintain a real-time simulation of a coherent application state. The resulting interdependencies between individual state representations, mutual state access, overall synchronization, and flow of control imply a conceptually close coupling, whereas software quality asks for decoupling to develop maintainable solutions. This article presents five semantics-based software techniques that address this contradiction: semantic grounding, code from semantics, grounded actions, semantic queries, and decoupling by semantics. These techniques are applied to extend the well-established entity-component-system (ECS) pattern to overcome some of this pattern's deficits with respect to the implied state access. A walk-through of central implementation aspects of a multimodal (speech and gesture) VR interface is used to highlight the techniques' benefits. This use case is chosen as a prototypical example of the complex architectures with multiple interacting subsystems found in many VR, AR, and MR systems. Finally, implementation hints are given, lessons learned regarding maintainability are pointed out, and performance implications are discussed.
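
    A stripped-down illustration of the ECS pattern with a type-based query, the kind of decoupled state access the semantic techniques generalize. All names are hypothetical, and the paper's techniques operate on semantic ground truth rather than plain string keys:

```python
class World:
    """Minimal entity-component store. Subsystems ask for entities by the
    set of components they carry instead of reaching into each other's
    state, which is the decoupling the ECS pattern provides."""

    def __init__(self):
        self._components = {}  # entity id -> {component name: value}
        self._next_id = 0

    def create(self, **components):
        eid = self._next_id
        self._next_id += 1
        self._components[eid] = dict(components)
        return eid

    def query(self, *required):
        # Yield every entity that carries all required components.
        for eid, comps in self._components.items():
            if all(r in comps for r in required):
                yield eid, comps

world = World()
world.create(position=(0, 0), speech_focus=True)
world.create(position=(5, 5))

# A speech-interaction subsystem only sees entities it can talk about:
targets = [eid for eid, _ in world.query("position", "speech_focus")]
print(targets)  # → [0]
```

    The subsystem never names another subsystem or holds a reference to it; it declares what state it needs and the store resolves the rest.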

  19. Securing recipiency in workplace meetings: Multimodal practices

    PubMed Central

    Ford, Cecilia E.; Stickle, Trini

    2013-01-01

    As multiparty interactions with single courses of coordinated action, workplace meetings place particular interactional demands on participants who are not primary speakers (e.g. not chairs) as they work to initiate turns and to interactively coordinate with displays of recipiency from co-participants. Drawing from a corpus of 26 hours of videotaped workplace meetings in a midsized US city, this article reports on multimodal practices – phonetic, prosodic, and bodily-visual – used for coordinating turn transition and for consolidating recipiency in these specialized speech exchange systems. Practices used by self-selecting non-primary speakers as they secure turns in meetings include displays of close monitoring of current speakers’ emerging turn structure, displays of heightened interest as current turns approach possible completion, and turn initiation practices designed to pursue and, in a fine-tuned manner, coordinate with displays of recipiency on the parts of other participants as well as from reflexively constructed ‘target’ recipients. By attending to bodily-visual action, as well as phonetics and prosody, this study contributes to expanding accounts for turn taking beyond traditional word-based grammar (i.e. lexicon and syntax). PMID:24976789

  20. Role of interbranch pumping on the quantum-statistical behavior of multi-mode magnons in ferromagnetic nanowires

    NASA Astrophysics Data System (ADS)

    Haghshenasfard, Zahra; Cottam, M. G.

    2018-01-01

    Theoretical studies are reported for the quantum-statistical properties of microwave-driven multi-mode magnon systems as represented by ferromagnetic nanowires with a stripe geometry. Effects of both the exchange and the dipole-dipole interactions, as well as a Zeeman term for an external applied field, are included in the magnetic Hamiltonian. The model also contains the time-dependent nonlinear effects due to parallel pumping with an electromagnetic field. Using a coherent magnon state representation in terms of creation and annihilation operators, we investigate the effects of parallel pumping on the temporal evolution of various nonclassical properties of the system. A focus is on the interbranch mixing produced by the pumping field when there are three or more modes. In particular, the occupation magnon number and the multi-mode cross correlations between magnon modes are studied. Manipulation of the collapse and revival phenomena of the average magnon occupation number and the control of the cross correlation between the magnon modes are demonstrated through tuning of the parallel pumping field amplitude and appropriate choices for the coherent magnon states. The cross correlations are a direct consequence of the interbranch pumping effects and do not appear in the corresponding one- or two-mode magnon systems.

  1. A Multimodal Emotion Detection System during Human-Robot Interaction

    PubMed Central

    Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F.; Salichs, Miguel A.

    2013-01-01

    In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: voice and face expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written in the ChucK language. For emotion detection in facial expressions, the system Gender and Emotion Facial Analysis (GEFA) has also been developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to achieve a greater satisfaction degree during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving the results given by the two information channels (audio and visual) separately. PMID:24240598
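
    The decision rule combining GEVA and GEFA outputs is described only qualitatively above; a plausible confidence-based sketch (emphatically not the authors' empirically tuned rule) looks like this:

```python
def combine_emotion(voice, face):
    """Toy decision rule over two channel outputs, each shaped like
    {"emotion": label, "confidence": 0..1}. If the channels agree, keep
    the label; if they disagree, trust the more confident channel."""
    if voice["emotion"] == face["emotion"]:
        return voice["emotion"]
    return max(voice, face, key=lambda c: c["confidence"])["emotion"]
```

    For example, a low-confidence "sad" from the voice channel is overridden by a high-confidence "neutral" from the face channel. Combining channels this way is what lets the fused result beat either channel alone, as the experiments report.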

  2. A collaborative interaction and visualization multi-modal environment for surgical planning.

    PubMed

    Foo, Jung Leng; Martinez-Escobar, Marisol; Peloquin, Catherine; Lobe, Thom; Winer, Eliot

    2009-01-01

    The proliferation of virtual reality visualization and interaction technologies has changed the way medical image data are analyzed and processed. This paper presents a multi-modal environment that combines a virtual reality application with a desktop application for collaborative surgical planning. Both visualization applications can function independently but can also be synced over a network connection for collaborative work. Any change to either application is immediately synced and updated in the other. This is an efficient collaboration tool that allows multiple teams of doctors with only an internet connection to visualize and interact with the same patient data simultaneously. With this multi-modal environment framework, one team working in the VR environment and another team at a remote location working on a desktop machine can collaborate in the examination and discussion of procedures such as diagnosis, surgical planning, teaching, and tele-mentoring.
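
    The synced-views idea can be sketched as an observer broadcast over an in-memory "network"; real networking, VR rendering, and conflict handling are omitted, and all names are hypothetical:

```python
class SharedScene:
    """Sketch of the sync mechanism: every mutation is broadcast to all
    attached views (a VR client, a desktop client), so each stays current."""

    def __init__(self):
        self.state = {}
        self._views = []

    def attach(self, view):
        self._views.append(view)
        view.update(dict(self.state))  # late joiners get a full snapshot

    def set(self, key, value):
        self.state[key] = value
        for view in self._views:       # push the delta to every client
            view.update({key: value})

class View:
    def __init__(self):
        self.local = {}

    def update(self, delta):
        self.local.update(delta)

scene = SharedScene()
vr, desktop = View(), View()
scene.attach(vr)
scene.attach(desktop)
scene.set("slice_plane", 42)
print(vr.local == desktop.local)  # → True
```

    Because each view only receives deltas, the same pattern works over a socket: serialize the delta, send it, and apply it on the remote side.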

  3. Towards the future : the promise of intermodal and multimodal transportation systems

    DOT National Transportation Integrated Search

    1995-02-01

    Issues relating to intermodal and multimodal transportation systems are introduced and defined. Intermodal and multimodal transportation solutions are assessed within the framework of legislative efforts such as Intermodal Surface Transportation Effic...

  4. The effect of geometrical presentation of multimodal cation-exchange ligands on selective recognition of hydrophobic regions on protein surfaces.

    PubMed

    Woo, James; Parimal, Siddharth; Brown, Matthew R; Heden, Ryan; Cramer, Steven M

    2015-09-18

    The effects of spatial organization of hydrophobic and charged moieties on multimodal (MM) cation-exchange ligands were examined by studying protein retention behavior on two commercial chromatographic media, Capto™ MMC and Nuvia™ cPrime™. Proteins with extended regions of surface-exposed aliphatic residues were found to have enhanced retention on the Capto MMC system as compared to the Nuvia cPrime resin. The results further indicated that while the Nuvia cPrime ligand had a strong preference for interactions with aromatic groups, the Capto MMC ligand appeared to interact with both aliphatic and aromatic clusters on the protein surfaces. These observations were formalized into a new set of protein surface property descriptors, which quantified the local distribution of electrostatic and hydrophobic potentials as well as distinguishing between aromatic and aliphatic properties. Using these descriptors, high-performing quantitative structure-activity relationship (QSAR) models (R² > 0.88) were generated for both the Capto MMC and Nuvia cPrime datasets at pH 5 and pH 6. Descriptors of electrostatic properties were generally common across the four models; however, both Capto MMC models included descriptors that quantified regions of aliphatic-based hydrophobicity in addition to aromatic descriptors. Retention was generally reduced by lowering the ligand densities on both MM resins. Notably, elution order was largely unaffected by the change in surface density, but smaller and more aliphatic proteins tended to be more affected by this drop in ligand density. This suggests that modulating the exposure, shape and density of the hydrophobic moieties in multimodal chromatographic systems can alter the preference for surface exposed aliphatic or aromatic residues, thus providing an additional dimension for modulating the selectivity of MM protein separation systems. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Analyzing a multimodal biometric system using real and virtual users

    NASA Astrophysics Data System (ADS)

    Scheidat, Tobias; Vielhauer, Claus

    2007-02-01

    Three main topics of recent research on multimodal biometric systems are addressed in this article: the lack of sufficiently large multimodal test data sets, the influence of cultural aspects, and data protection issues of multimodal biometric data. In this contribution, different possibilities are presented to extend multimodal databases by generating so-called virtual users, which are created by combining single-modality biometric data of different users. Comparative tests on databases containing real and virtual users, based on a multimodal system using handwriting and speech, are presented to study to which degree the use of virtual multimodal databases allows conclusions with respect to recognition accuracy in comparison to real multimodal data. All tests have been carried out on databases created from donations from three different nationality groups. This allows us to review the experimental results both in general and in the context of cultural origin. The results show that in most cases the use of virtual persons leads to lower accuracy than the use of real users in terms of the measurement applied: the Equal Error Rate. Finally, this article addresses the general question of how the concept of virtual users may influence the data protection requirements for multimodal evaluation databases in the future.
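
    Two ingredients of the study, building virtual users from single-modality data of different donors and scoring with the Equal Error Rate, can be sketched as follows (illustrative only; the EER sweep below is a simple threshold search, not the authors' evaluation protocol):

```python
def virtual_users(handwriting, speech):
    """Pair the handwriting sample of one real user with the speech sample
    of a *different* real user to enlarge the multimodal test set."""
    return [{"handwriting": h, "speech": s}
            for i, h in enumerate(handwriting)
            for j, s in enumerate(speech) if i != j]

def equal_error_rate(genuine, impostor):
    """Sweep a decision threshold over the observed match scores; the EER
    is (approximately) where false-accept rate equals false-reject rate."""
    best_gap, eer = 1.0, None
    for t in sorted(set(genuine + impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

    With perfectly separated toy scores, e.g. genuine `[0.9, 0.8, 0.7]` versus impostor `[0.1, 0.2, 0.3]`, the EER is 0.0; overlapping distributions push it up, which is the metric on which virtual users underperformed real ones.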

  6. Multimodal Neuroelectric Interface Development

    NASA Technical Reports Server (NTRS)

    Trejo, Leonard J.; Wheeler, Kevin R.; Jorgensen, Charles C.; Totah, Joseph (Technical Monitor)

    2001-01-01

    This project aims to improve performance of NASA missions by developing multimodal neuroelectric technologies for augmented human-system interaction. Neuroelectric technologies will add completely new modes of interaction that operate in parallel with keyboards, speech, or other manual controls, thereby increasing the bandwidth of human-system interaction. We recently demonstrated the feasibility of real-time electromyographic (EMG) pattern recognition for a direct neuroelectric human-computer interface. We recorded EMG signals from an elastic sleeve with dry electrodes, while a human subject performed a range of discrete gestures. A machine-learning algorithm was trained to recognize the EMG patterns associated with the gestures and map them to control signals. Successful applications now include piloting two Class 4 aircraft simulations (F-15 and 757) and entering data with a "virtual" numeric keyboard. Current research focuses on on-line adaptation of EMG sensing and processing and recognition of continuous gestures. We are also extending this on-line pattern recognition methodology to electroencephalographic (EEG) signals. This will allow us to bypass muscle activity and draw control signals directly from the human brain. Our system can reliably detect the µ-rhythm (a periodic EEG signal from motor cortex in the 10 Hz range) with a lightweight headset containing saline-soaked sponge electrodes. The data show that the EEG µ-rhythm can be modulated by real and imagined motions. Current research focuses on using biofeedback to train human subjects to modulate EEG rhythms on demand, and to examine interactions of EEG-based control with EMG-based and manual control. Viewgraphs on these neuroelectric technologies are also included.
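
    The EMG gesture-recognition loop can be caricatured as feature extraction plus template matching. RMS features and nearest-centroid matching are common baseline choices, not necessarily what this project used, and all values below are toy data:

```python
def rms(channel):
    """Root-mean-square amplitude of one electrode channel's window."""
    return (sum(x * x for x in channel) / len(channel)) ** 0.5

def features(emg_window):
    # One RMS value per electrode channel: a crude activation fingerprint.
    return [rms(ch) for ch in emg_window]

def nearest_gesture(templates, emg_window):
    """Match a window against trained per-gesture feature centroids."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    feat = features(emg_window)
    return min(templates, key=lambda g: dist(templates[g], feat))

templates = {"fist": [1.0, 0.1], "point": [0.1, 1.0]}
sample = [[0.9, -0.9, 1.1], [0.1, -0.1, 0.1]]  # 2 channels, 3 samples each
print(nearest_gesture(templates, sample))  # → fist
```

    The recognized label is then mapped to a control signal (a stick deflection, a keypress), which is the "map them to control signals" step in the abstract.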

  7. Multimodal Trip Planner System final evaluation report.

    DOT National Transportation Integrated Search

    2011-05-01

    This evaluation of the Multimodal Trip Planning System (MMTPS) is the culmination of a multi-year project evaluating the development and deployment of a multimodal trip planner in the Chicagoland area between 2004 and 2010. The report includes an ove...

  8. Near-field hyperspectral quantum probing of multimodal plasmonic resonators

    NASA Astrophysics Data System (ADS)

    Cuche, A.; Berthel, M.; Kumar, U.; Colas des Francs, G.; Huant, S.; Dujardin, E.; Girard, C.; Drezet, A.

    2017-03-01

    Quantum systems, excited by an external source of photons, display a photodynamics that is ruled by a subtle balance between radiative or nonradiative energy channels when interacting with metallic nanostructures. We apply and generalize this concept to achieve a quantum probing of multimodal plasmonic resonators by collecting and filtering the broad emission spectra generated by a nanodiamond (ND) hosting a small set of nitrogen-vacancy (NV) color centers attached at the apex of an optical tip. Spatially and spectrally resolved information on the photonic local density of states (ph-LDOS) can be recorded with this technique in the immediate vicinity of plasmonic resonators, paving the way for a complete near-field optical characterization of any kind of nanoresonators in the single photon regime.

  9. Cooperative action of coherent groups in broadly heterogeneous populations of interacting chemical oscillators

    PubMed Central

    Mikhailov, A. S.; Zanette, D. H.; Zhai, Y. M.; Kiss, I. Z.; Hudson, J. L.

    2004-01-01

    We present laboratory experiments on the effects of global coupling in a population of electrochemical oscillators with a multimodal frequency distribution. The experiments show that complex collective signals are generated by this system through spontaneous emergence and joint operation of coherently acting groups representing hierarchically organized resonant clusters. Numerical simulations support these experimental findings. Our results suggest that some forms of internal self-organization, characteristic for complex multiagent systems, are already possible in simple chemical systems. PMID:15263084
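
    The emergence of coherence in a globally coupled population with a multimodal frequency distribution can be sketched with a Kuramoto-type phase model. This is an illustrative stand-in for the electrochemical oscillators, with all parameters (group frequencies, coupling strengths) chosen only to show weak versus strong global coupling.

```python
import numpy as np

def kuramoto_coherence(omega, K, dt=0.01, steps=5000, seed=0):
    """Globally coupled Kuramoto phase oscillators:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i).
    Returns the final coherence r = |<exp(i*theta)>| in [0, 1]."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, omega.size)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()           # mean field r * exp(i*psi)
        # (K/N) * sum_j sin(theta_j - theta_i) == K * r * sin(psi - theta_i)
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return np.abs(np.exp(1j * theta).mean())

# Bimodal ("multimodal") natural frequencies: two groups centered at -1 and +1.
rng = np.random.default_rng(2)
omega = np.concatenate([rng.normal(-1.0, 0.1, 50), rng.normal(1.0, 0.1, 50)])
r_weak = kuramoto_coherence(omega, K=0.02)   # weak coupling: little coherence
r_strong = kuramoto_coherence(omega, K=4.0)  # strong coupling: groups lock together
```

In the intermediate-coupling regime (not shown), such models form the hierarchically organized resonant clusters that the experiments report: each frequency group acts coherently before global locking sets in.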

  10. Multimodality as a Sociolinguistic Resource

    ERIC Educational Resources Information Center

    Collister, Lauren Brittany

    2013-01-01

    This work explores the use of multimodal communication in a community of expert "World of Warcraft"® players and its impact on politeness, identity, and relationships. Players in the community regularly communicated using three linguistic modes quasi-simultaneously: text chat, voice chat, and face-to-face interaction. Using the…

  11. Sequences of Normative Evaluation in Two Telecollaboration Projects: A Comparative Study of Multimodal Feedback through Desktop Videoconference

    ERIC Educational Resources Information Center

    Cappellini, Marco; Azaoui, Brahim

    2017-01-01

    In our study we analyse how the same interactional dynamic is produced in two different pedagogical settings exploiting a desktop videoconference system. We propose to focus our attention on a specific type of conversational side sequence, known in the Francophone literature as sequences of normative evaluation. More particularly, we analyse data…

  12. A molecular modeling based method to predict elution behavior and binding patches of proteins in multimodal chromatography.

    PubMed

    Banerjee, Suvrajit; Parimal, Siddharth; Cramer, Steven M

    2017-08-18

    Multimodal (MM) chromatography provides a powerful means to enhance the selectivity of protein separations by taking advantage of multiple weak interactions that include electrostatic, hydrophobic and van der Waals interactions. In order to increase our understanding of such phenomena, a computationally efficient approach was developed that combines short molecular dynamics simulations and continuum-solvent-based coarse-grained free energy calculations to study the binding of proteins to self-assembled monolayers (SAMs) presenting MM ligands. Using this method, the free energies of protein-MM SAM binding over a range of incident orientations of the protein can be determined. The resulting free energies were then examined to identify the more "strongly bound" orientations of different proteins with two multimodal surfaces. The overall free energy of protein-MM surface binding was then determined and correlated to retention factors from isocratic chromatography. This correlation, combined with analytical expressions from the literature, was then employed to predict protein gradient elution salt concentrations as well as selectivity reversals with different MM resin systems. Patches on protein surfaces that interacted strongly with MM surfaces were also identified by determining the frequency of heavy-atom contacts with the atoms of the MM SAMs. A comparison of these patches to electrostatic potential and hydrophobicity maps indicated that while all of these patches contained significant positive charge, only the highest-frequency sites also possessed hydrophobicity. The ability to identify key binding patches on proteins may have significant impact on process development for the separation of bioproduct-related impurities. Copyright © 2017 Elsevier B.V. All rights reserved.
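
    The correlation between a binding free energy and an isocratic retention factor can be illustrated with the standard thermodynamic linkage k' = φ·exp(−ΔG/RT), where φ is the phase ratio. The paper's actual correlation procedure is more involved; the phase ratio and free-energy values below are illustrative assumptions only, chosen to show the exponential dependence.

```python
import math

R_GAS = 8.314  # J/(mol*K)

def retention_factor(delta_g_binding_j_mol, phase_ratio=0.5, temp_k=298.15):
    """Hedged sketch of the standard linkage between a binding free energy
    and an isocratic retention factor: k' = phi * exp(-dG / (R*T))."""
    return phase_ratio * math.exp(-delta_g_binding_j_mol / (R_GAS * temp_k))

# A more favorable (more negative) binding free energy gives stronger retention.
k_weak = retention_factor(-5_000.0)     # -5 kJ/mol binding
k_strong = retention_factor(-15_000.0)  # -15 kJ/mol binding
```

Inverting this relation is what lets a computed protein-SAM free energy be checked against measured retention, and hence used to rank candidate MM resins.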

  13. Challenges in Transcribing Multimodal Data: A Case Study

    ERIC Educational Resources Information Center

    Helm, Francesca; Dooly, Melinda

    2017-01-01

    Computer-mediated communication (CMC) once meant principally text-based communication mediated by computers, but rapid technological advances in recent years have heralded an era of multimodal communication with a growing emphasis on audio and video synchronous interaction. As CMC, in all its variants (text chats, video chats, forums, blogs, SMS,…

  14. Computer-aided psychotherapy based on multimodal elicitation, estimation and regulation of emotion.

    PubMed

    Cosić, Krešimir; Popović, Siniša; Horvat, Marko; Kukolja, Davor; Dropuljić, Branimir; Kovač, Bernard; Jakovljević, Miro

    2013-09-01

    Contemporary psychiatry is looking to the affective sciences to understand human behavior, cognition and the mind in health and disease. Since it has been recognized that emotions play a pivotal role for the human mind, an ever increasing number of laboratories and research centers are interested in the affective sciences, affective neuroscience, affective psychology and affective psychopathology. This paper therefore presents multidisciplinary research results on stress resilience from the Laboratory for Interactive Simulation Systems at the Faculty of Electrical Engineering and Computing, University of Zagreb. A patient's distortion in the emotional processing of multimodal input stimuli is predominantly a consequence of a cognitive deficit resulting from his or her individual mental health disorder. These emotional distortions in the patient's multimodal physiological, facial, acoustic, and linguistic features related to the presented stimulation can be used as indicators of mental illness. Real-time processing and analysis of the patient's multimodal responses to annotated input stimuli are based on appropriate machine learning methods from computer science. Comprehensive longitudinal multimodal analysis of the patient's emotion, mood, feelings, attention, motivation, decision-making, and working memory, synchronized with the multimodal stimuli, provides an extremely valuable database for data mining, machine learning and machine reasoning. The presented multimedia stimulus sequence includes personalized images, movies and sounds, as well as semantically congruent narratives. Simultaneously with stimulus presentation, the patient provides subjective emotional ratings of the presented stimuli in terms of subjective units of discomfort/distress, discrete emotions, or valence and arousal. These subjective emotional ratings of the input stimuli and the corresponding physiological, speech, and facial output features provide enough information to evaluate the patient's cognitive appraisal deficit. Aggregated real-time visualization of this information provides valuable assistance in diagnosing the patient's mental state, enabling the therapist to gain deeper and broader insight into the dynamics and progress of the psychotherapy.

  15. A Fully Immersive Set-Up for Remote Interaction and Neurorehabilitation Based on Virtual Body Ownership

    PubMed Central

    Perez-Marcos, Daniel; Solazzi, Massimiliano; Steptoe, William; Oyekoya, Oyewole; Frisoli, Antonio; Weyrich, Tim; Steed, Anthony; Tecchia, Franco; Slater, Mel; Sanchez-Vives, Maria V.

    2012-01-01

    Although telerehabilitation systems represent one of the most technologically appealing clinical solutions for the immediate future, they still present limitations that prevent their standardization. Here we propose an integrated approach that includes three key and novel factors: (a) fully immersive virtual environments, including virtual body representation and ownership; (b) multimodal interaction with remote people and virtual objects including haptic interaction; and (c) a physical representation of the patient at the hospital through embodiment agents (e.g., as a physical robot). The importance of secure and rapid communication between the nodes is also stressed and an example implemented solution is described. Finally, we discuss the proposed approach with reference to the existing literature and systems. PMID:22787454

  16. Molecular simulations of multimodal ligand-protein binding: elucidation of binding sites and correlation with experiments.

    PubMed

    Freed, Alexander S; Garde, Shekhar; Cramer, Steven M

    2011-11-17

    Multimodal chromatography, which employs more than one mode of interaction between ligands and proteins, has been shown to have unique selectivity and high efficacy for protein purification. To test the ability of free solution molecular dynamics (MD) simulations in explicit water to identify binding regions on the protein surface and to shed light on the "pseudo affinity" nature of multimodal interactions, we performed MD simulations of a model protein ubiquitin in aqueous solution of free ligands. Comparisons of MD with NMR spectroscopy of ubiquitin mutants in solutions of free ligands show a good agreement between the two with regard to the preferred binding region on the surface of the protein and several binding sites. MD simulations also identify additional binding sites that were not observed in the NMR experiments. "Bound" ligands were found to be sufficiently flexible and to access a number of favorable conformations, suggesting only a moderate loss of ligand entropy in the "pseudo affinity" binding of these multimodal ligands. Analysis of locations of chemical subunits of the ligand on the protein surface indicated that electrostatic interaction units were located on the periphery of the preferred binding region on the protein. The analysis of the electrostatic potential, the hydrophobicity maps, and the binding of both acetate and benzene probes were used to further study the localization of individual ligand moieties. These results suggest that water-mediated electrostatic interactions help the localization and orientation of the MM ligand to the binding region with additional stability provided by nonspecific hydrophobic interactions.

  17. New methods of multimode fiber interferometer signal processing

    NASA Astrophysics Data System (ADS)

    Vitrik, Oleg B.; Kulchin, Yuri N.; Maxaev, Oleg G.; Kirichenko, Oleg V.; Kamenev, Oleg T.; Petrov, Yuri S.

    1995-06-01

    New methods of processing multimode fiber interferometer signals are suggested. For the scheme of a single-fiber multimode interferometer with two excited modes, a method based on a special fiber unit is developed. This unit provides mode interaction and subsequent filtering of the summed optical field. As a result, the amplitude of the output signal is modulated by external influence on the interferometer. Stabilization of the interferometer sensitivity is achieved by applying an additional special modulation to the output signal. For the scheme of a single-fiber multimode interferometer with excitation of a wide mode spectrum, the intermode interference signal is registered by a photodiode matrix, and a special electronic unit then performs correlation processing. To eliminate temperature destabilization, the registered signal is adapted to temperature-induced changes of the multimode interferometer's optical signal. The achieved parameters for the double-mode scheme are: temporal stability, 0.6% per hour; sensitivity to interferometer length deviations, 3.2 nm. For the multimode scheme: temperature stability, 0.5%/K; temporal instability, 0.2% per hour; sensitivity to interferometer length deviations, 20 nm; dynamic range, 35 dB.

  18. A multimodal interface to resolve the Midas-Touch problem in gaze controlled wheelchair.

    PubMed

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, KongFatt; Prasad, Girijesh

    2017-07-01

    Human-computer interaction (HCI) research has been playing an essential role in the field of rehabilitation. The usability of gaze-controlled powered wheelchairs is limited by the Midas-Touch problem. In this work, we propose a multimodal graphical user interface (GUI) to control a powered wheelchair that aims to help upper-limb mobility-impaired people in daily living activities. The GUI was designed to include a portable and low-cost eye-tracker and a soft-switch, wherein the wheelchair can be controlled in three different ways: 1) with a touchpad, 2) with an eye-tracker only, and 3) with an eye-tracker and soft-switch. The interface includes nine different commands (eight directions and stop) and is integrated within a powered wheelchair system. We evaluated the performance of the multimodal interface in terms of lap-completion time, the number of commands, and the information transfer rate (ITR) with eight healthy participants. The analysis of the results showed that the eye-tracker with soft-switch provides superior performance among the three conditions, with an ITR of 37.77 bits/min (p < 0.05). Thus, the proposed system provides an effective and economical solution to the Midas-Touch problem and extended usability for the large population of disabled users.
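
    The reported ITR is consistent in magnitude with the widely used Wolpaw formula for selection interfaces. The accuracy and selection-time values below are hypothetical, chosen only to show the calculation for a nine-command interface (eight directions plus stop); the paper does not report these intermediate numbers.

```python
import math

def wolpaw_itr(n_targets, accuracy, time_per_selection_s):
    """Information transfer rate in bits/min via the Wolpaw formula:
    B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)), scaled to selections/min."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1.0 - p) * math.log2((1.0 - p) / (n - 1)))
    return bits * (60.0 / time_per_selection_s)

# Hypothetical: perfect selections among 9 commands at one command every 5 s
# give log2(9) * 12 ≈ 38 bits/min, the same ballpark as the reported 37.77.
itr = wolpaw_itr(n_targets=9, accuracy=1.0, time_per_selection_s=5.0)
```

Note that imperfect accuracy reduces the rate sharply; e.g. the same interface at 90% accuracy yields roughly 29 bits/min under this formula.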

  19. Effects of Webcams on Multimodal Interactive Learning

    ERIC Educational Resources Information Center

    Codreanu, Tatiana; Celik, Christelle Combe

    2013-01-01

    This paper describes the multimodal pedagogical communication of two groups of online teachers; trainee tutors (second year students of the Master of Arts in Teaching French as a Foreign Language at the University Lumiere-Lyon 2) and experienced teachers based in different locations (France, Spain and Finland). They all taught French as a Foreign…

  20. The integration of emotional and symbolic components in multimodal communication

    PubMed Central

    Mehu, Marc

    2015-01-01

    Human multimodal communication can be said to serve two main purposes: information transfer and social influence. In this paper, I argue that different components of multimodal signals play different roles in the processes of information transfer and social influence. Although the symbolic components of communication (e.g., verbal and denotative signals) are well suited to transfer conceptual information, emotional components (e.g., non-verbal signals that are difficult to manipulate voluntarily) likely take a function that is closer to social influence. I suggest that emotion should be considered a property of communicative signals, rather than an entity that is transferred as content by non-verbal signals. In this view, the effect of emotional processes on communication serve to change the quality of social signals to make them more efficient at producing responses in perceivers, whereas symbolic components increase the signals’ efficiency at interacting with the cognitive processes dedicated to the assessment of relevance. The interaction between symbolic and emotional components will be discussed in relation to the need for perceivers to evaluate the reliability of multimodal signals. PMID:26217280

  1. Analytical solution and applications of three qubits in three coupled modes without rotating wave approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Jian-Song; Zhang, Liu-Juan; Chen, Ai-Xi; Abdel-Aty, Mahmoud

    2018-06-01

    We study the dynamics of a three-qubit system interacting with multiple modes without the rotating wave approximation (RWA). A physical realization of the system, without direct qubit-qubit interactions, in contact with a dephasing bath is proposed. It is shown that when the non-Markovian character of the purity of the three qubits and the coupling strength of the modes are strong enough, the RWA is no longer valid. The influences of qubit dephasing and of mode-mode interactions on the dynamics of genuine multipartite entanglement and bipartite correlations of the qubits are investigated. Multipartite and bipartite quantum correlations can be generated faster if the coupling strength of the modes is increased, and the RWA is not valid when the coupling strength is strong enough. The unitary transformation approach adopted here can be extended to other systems, such as circuit or cavity quantum electrodynamics systems in the strong coupling regime.

  2. Multimode cavity-assisted quantum storage via continuous phase-matching control

    NASA Astrophysics Data System (ADS)

    Kalachev, Alexey; Kocharovskaya, Olga

    2013-09-01

    A scheme for spatial multimode quantum memory is developed such that spatial-temporal structure of a weak signal pulse can be stored and recalled via cavity-assisted off-resonant Raman interaction with a strong angular-modulated control field in an extended Λ-type atomic ensemble. It is shown that effective multimode storage is possible when the Raman coherence spatial grating involves wave vectors with different longitudinal components relative to the paraxial signal field. The possibilities of implementing the scheme in the solid-state materials are discussed.

  3. New Sociotechnical Insights in Interaction Design

    NASA Astrophysics Data System (ADS)

    Abdelnour-Nocera, José; Mørch, Anders I.

    New challenges are facing interaction design. On one hand, advances in technology - pervasive, ubiquitous, multimodal and adaptive computing - are changing the nature of interaction. On the other, web 2.0, massive multiplayer games and collaboration software extend the boundaries of HCI to deal with interaction in settings of remote communication and collaboration. The aim of this workshop is to provide a forum for HCI practitioners and researchers interested in knowledge from the social sciences to discuss how sociotechnical insights can be used to inform interaction design and, more generally, how social science methods and theories can help to enrich the conceptual framework of systems development and participatory design. Position paper submissions are invited to address key aspects of current research and practical case studies.

  4. Safety in Acute Pain Medicine-Pharmacologic Considerations and the Impact of Systems-Based Gaps.

    PubMed

    Weingarten, Toby N; Taenzer, Andreas H; Elkassabany, Nabil M; Le Wendling, Linda; Nin, Olga; Kent, Michael L

    2018-05-02

    In the setting of an expanding prevalence of acute pain medicine services and the aggressive use of multimodal analgesia, an overview of systems-based safety gaps and related safety concerns is provided. Expert commentary: recent evidence focused on systems-based gaps in acute pain medicine is discussed. A focused literature review was conducted to assess safety concerns related to commonly used multimodal pharmacologic agents (opioids, nonsteroidal anti-inflammatory drugs, gabapentinoids, ketamine, acetaminophen) in the setting of inpatient acute pain management. Optimization of systems-based gaps will increase the probability of accurate pain assessment, improve the application of uniform evidence-based multimodal analgesia, and ensure a continuum of pain care. While acute pain medicine strategies should be aggressively applied, multimodal regimens must be strategically utilized, in a comorbidity-specific fashion, to minimize risk to patients.

  5. Rats in Virtual Space: The development and implementation of a multimodal virtual reality system for small animals

    NASA Astrophysics Data System (ADS)

    Aharoni, Daniel Benjamin

    The integration of multimodal sensory information into a common neural code is a critical function of all complex nervous systems. This process is required for adaptive responding to incoming stimuli as well as the formation of a cognitive map of the external sensory environment. The underlying neural mechanisms of multimodal integration are poorly understood due, in part, to the technical difficulties of manipulating multimodal sensory information in combination with simultaneous in-vivo electrophysiological recording in awake behaving animals. We therefore developed a non-invasive multimodal virtual reality system that is conducive to wired electrophysiological recording techniques. This system allows for the dynamic presentation of highly immersive audiovisual virtual environments to rats maintained in a body fixed position on top of a quiet spherical treadmill. Notably, this allows the rats to remain at the same spatial location in the real world without the need for head fixation. This method opens the door for a wide array of future studies aimed at elucidating the underlying neural mechanisms of multimodal integration.

  6. Interference of Multi-Mode Gaussian States and "non Appearance" of Quantum Correlations

    NASA Astrophysics Data System (ADS)

    Olivares, Stefano

    2012-01-01

    We theoretically investigate bilinear, mode-mixing interactions involving two modes of uncorrelated multi-mode Gaussian states. In particular, we introduce the notion of "locally the same states" (LSS) and prove that two uncorrelated LSS modes are invariant under the mode mixing, i.e. the interaction does not lead to the birth of correlations between the outgoing modes. We also study the interference of orthogonally polarized Gaussian states by means of an interferometric scheme based on a beam splitter, rotators of polarization and polarization filters.

  7. A multimodal image sensor system for identifying water stress in grapevines

    NASA Astrophysics Data System (ADS)

    Zhao, Yong; Zhang, Qin; Li, Minzan; Shao, Yongni; Zhou, Jianfeng; Sun, Hong

    2012-11-01

    Water stress is one of the most common limitations on fruit growth, and water is the most limiting resource for crop growth. In grapevines, as in other fruit crops, fruit quality benefits from a certain level of water deficit, which helps balance vegetative and reproductive growth and the flow of carbohydrates to reproductive structures. In this paper, a multimodal sensor system was designed to measure the reflectance signature of grape plant surfaces and identify different water stress levels. The multimodal sensor system was equipped with a 3CCD camera (three channels: R, G, and IR). The sensor can capture and analyze the grape canopy from its reflectance features and identify different water stress levels, which is the problem this research aims to solve. The core technology of this multimodal sensor system could further be used in a decision support system that combines multimodal sensory data to improve plant stress detection and identify the causes of stress. The images were taken by the multimodal sensor, which outputs images in the near-infrared, green, and red spectral bands. Based on analysis of the acquired images, color features based on color space and reflectance features computed by image processing methods were calculated. The results showed that these parameters have potential as water stress indicators. More experiments and analysis are needed to validate this conclusion.
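
    One plausible reflectance feature computable from such an R/G/NIR sensor is a normalized difference index. The abstract does not name its specific features, so this NDVI sketch and its toy pixel values are assumptions for illustration; lower canopy NDVI often accompanies water stress, which is why indices of this form are common stress indicators.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index per pixel from NIR and red
    reflectance: (NIR - R) / (NIR + R). Values near 1 indicate dense healthy
    canopy; low values can indicate sparse or stressed vegetation."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)   # eps guards against division by zero

# Toy 2x2 "images" from a 3-channel (R, G, NIR) camera; values are reflectances.
red = np.array([[0.10, 0.12], [0.30, 0.35]])   # stressed pixels reflect more red
nir = np.array([[0.60, 0.58], [0.40, 0.38]])   # and less near-infrared
index = ndvi(nir, red)
```

Thresholding or averaging such an index over the canopy region would then give a per-plant feature that can be related to measured water stress levels.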

  8. Emotional pictures and sounds: a review of multimodal interactions of emotion cues in multiple domains

    PubMed Central

    Gerdes, Antje B. M.; Wieser, Matthias J.; Alpers, Georg W.

    2014-01-01

    In everyday life, multiple sensory channels jointly trigger emotional experiences and one channel may alter processing in another channel. For example, seeing an emotional facial expression and hearing the voice’s emotional tone will jointly create the emotional experience. This example, where auditory and visual input is related to social communication, has gained considerable attention from researchers. However, interactions of visual and auditory emotional information are not limited to social communication but can extend to much broader contexts including human, animal, and environmental cues. In this article, we review current research on audiovisual emotion processing beyond face-voice stimuli to develop a broader perspective on multimodal interactions in emotion processing. We argue that current concepts of multimodality should be extended to consider an ecologically valid variety of stimuli in audiovisual emotion processing. Therefore, we provide an overview of studies in which emotional sounds and interactions with complex pictures of scenes were investigated. In addition to behavioral studies, we focus on neuroimaging, electrophysiological, and peripheral-physiological findings. Furthermore, we integrate these findings and identify similarities or differences. We conclude with suggestions for future research. PMID:25520679

  9. Advanced Multimodal Solutions for Information Presentation

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Godfroy-Cooper, Martine

    2018-01-01

    High-workload, fast-paced, and degraded sensory environments are the likeliest candidates to benefit from multimodal information presentation. For example, during EVA (Extra-Vehicular Activity) and telerobotic operations, the sensory restrictions associated with a space environment provide a major challenge to maintaining the situation awareness (SA) required for safe operations. Multimodal displays hold promise to enhance situation awareness and task performance by utilizing different sensory modalities and maximizing their effectiveness based on appropriate interaction between modalities. During EVA, the visual and auditory channels are likely to be the most utilized with tasks such as monitoring the visual environment, attending visual and auditory displays, and maintaining multichannel auditory communications. Previous studies have shown that compared to unimodal displays (spatial auditory or 2D visual), bimodal presentation of information can improve operator performance during simulated extravehicular activity on planetary surfaces for tasks as diverse as orientation, localization or docking, particularly when the visual environment is degraded or workload is increased. Tactile displays offer a third sensory channel that may both offload information processing effort and provide a means to capture attention when urgently required. For example, recent studies suggest that including tactile cues may result in increased orientation and alerting accuracy, improved task response time and decreased workload, as well as provide self-orientation cues in microgravity on the ISS (International Space Station). An important overall issue is that context-dependent factors like task complexity, sensory degradation, peripersonal vs. extrapersonal space operations, workload, experience level, and operator fatigue tend to vary greatly in complex real-world environments and it will be difficult to design a multimodal interface that performs well under all conditions. 
As a possible solution, adaptive systems have been proposed in which the information presented to the user changes as a function of task/context-dependent factors. However, this presupposes that adequate methods for detecting and/or predicting such factors are developed. Further, research in adaptive systems for aviation suggests that they can sometimes serve to increase workload and reduce situational awareness. It will be critical to develop multimodal display guidelines that include consideration of smart systems that can select the best display method for a particular context/situation. The scope of the current work is an analysis of potential multimodal display technologies for long-duration missions and, in particular, will focus on their potential role in EVA activities. The review will address multimodal (combined visual, auditory, and/or tactile) displays investigated by NASA, industry, and DoD (Dept. of Defense). It also considers the need for adaptive information systems to accommodate a variety of operational contexts such as crew status (e.g., fatigue, workload level) and task environment (e.g., EVA, habitat, rover, spacecraft). Current approaches to guidelines and best practices for combining modalities for the most effective information displays are also reviewed. Potential issues in developing interface guidelines for the Exploration Information System (EIS) are briefly considered.

  10. Peruvian Food Chain Jenga: Learning Ecosystems with an Interactive Model

    ERIC Educational Resources Information Center

    Hartweg, Beau; Biffi, Daniella; de la Fuente, Yohanis; Malkoc, Ummuhan; Patterson, Melissa E.; Pearce, Erin; Stewart, Morgan A.; Weinburgh, Molly

    2017-01-01

    A pilot study was conducted on a multimodal educational tool, Peruvian Food Chain Jenga (PFCJ), with 5th-grade students (N = 54) at a public charter school. The goal was to compare the effectiveness of the multimodal tool to a more traditional presentation of the same materials (food chain) using an experimental/control design. Data collection…

  11. The Use of the Webcam for Teaching a Foreign Language in a Desktop Videoconferencing Environment

    ERIC Educational Resources Information Center

    Develotte, Christine; Guichon, Nicolas; Vincent, Caroline

    2010-01-01

    This paper explores how language teachers learn to teach with a synchronous multimodal setup ("Skype"), and it focuses on their use of the webcam during the pedagogical interaction. First, we analyze the ways that French graduate students learning to teach online use the multimodal resources available in a desktop videoconferencing (DVC)…

  12. A Plurisemiotic Study of Multimodal Interactive Teaching through Videoconferencing

    ERIC Educational Resources Information Center

    Codreanu, Tatiana; Celik, Christelle Combe

    2012-01-01

    The aim of the study is to describe and analyze webcam pedagogical communication between a French Foreign Language tutor and two students during seven online classes. It tries to answer the following question: how does the tutor in a multimodal learning environment change her semio-discursive behavior from the first to the last session? We analyze…

  13. Using a Multimodal Approach to Facilitate Articulation, Phonemic Awareness, and Literacy in Young Children

    ERIC Educational Resources Information Center

    Pieretti, Robert A.; Kaul, Sandra D.; Zarchy, Razi M.; O'Hanlon, Laureen M.

    2015-01-01

    The primary focus of this research study was to examine the benefit of a using a multimodal approach to speech sound correction with preschool children. The approach uses the auditory, tactile, and kinesthetic modalities and includes a unique, interactive visual focus that attempts to provide a visual representation of a phonemic category. The…

  14. An In-Depth Exploration of the Effects of the Webcam on Multimodal Interactive Learning

    ERIC Educational Resources Information Center

    Codreanu, Tatiana; Celik, Christelle Combe

    2012-01-01

Current research describes multimodal pedagogical communication of two populations of online teachers: trainee tutors (second year students of the Master of Arts in Teaching French as a Foreign Language at the university Lumiere-Lyon 2, France) and experienced teachers based in different locations (France, Spain and Finland). They all taught…

  15. Linguistic Layering: Social Language Development in the Context of Multimodal Design and Digital Technologies

    ERIC Educational Resources Information Center

    Domingo, Myrrh

    2012-01-01

    In our contemporary society, digital texts circulate more readily and extend beyond page-bound formats to include interactive representations such as online newsprint with hyperlinks to audio and video files. This is to say that multimodality combined with digital technologies extends grammar to include voice, visual, and music, among other modes…

  16. Collaboration of Miniature Multi-Modal Mobile Smart Robots over a Network

    DTIC Science & Technology

    2015-08-14

theoretical research on mathematics of failures in sensor-network-based miniature multimodal mobile robots and electromechanical systems. The views...independently evolving research directions based on physics-based models of mechanical, electromechanical and electronic devices, operational constraints

  17. Modal interactions between a large-wavelength inclined interface and small-wavelength multimode perturbations in a Richtmyer-Meshkov instability

    NASA Astrophysics Data System (ADS)

    McFarland, Jacob A.; Reilly, David; Black, Wolfgang; Greenough, Jeffrey A.; Ranjan, Devesh

    2015-07-01

The interaction of a small-wavelength multimodal perturbation with a large-wavelength inclined interface perturbation is investigated for the reshocked Richtmyer-Meshkov instability using three-dimensional simulations. The ares code, developed at Lawrence Livermore National Laboratory, was used for these simulations, and a detailed comparison of simulation results and experiments performed at the Georgia Tech Shock Tube facility is presented first for code validation. Simulation results are presented for four cases that vary in large-wavelength perturbation amplitude and the presence of secondary small-wavelength multimode perturbations. Previously developed measures of mixing and turbulence quantities are presented that highlight the large variation in perturbation length scales created by the inclined interface and the complex multimode perturbation. Measures of entrainment and turbulence anisotropy are developed that help to identify the effects of, and competition between, each perturbation type. It is shown through multiple measures that before reshock the flow possesses a distinct memory of the initial conditions that is present in both large-scale-driven entrainment measures and small-scale-driven mixing measures. After reshock the flow develops to a turbulent-like state that retains a memory of high-amplitude but not low-amplitude large-wavelength perturbations. It is also shown that the high-amplitude large-wavelength perturbation is capable of producing small-scale mixing and turbulent features similar to the small-wavelength multimode perturbations.

  18. Sensitivity-Bandwidth Limit in a Multimode Optoelectromechanical Transducer

    NASA Astrophysics Data System (ADS)

    Moaddel Haghighi, I.; Malossi, N.; Natali, R.; Di Giuseppe, G.; Vitali, D.

    2018-03-01

An optoelectromechanical system formed by a nanomembrane capacitively coupled to an LC resonator and to an optical interferometer has recently been employed for the highly sensitive optical readout of rf signals [T. Bagci et al., Nature (London) 507, 81 (2013), 10.1038/nature13029]. We propose and experimentally demonstrate how the bandwidth of such a transducer can be increased by controlling the interference between two electromechanical interaction pathways of a two-mode mechanical system. With a proof-of-principle device operating at room temperature, we achieve a sensitivity of 300 nV/√Hz over a bandwidth of 15 kHz in the presence of radio-frequency noise, and an optimal shot-noise-limited sensitivity of 10 nV/√Hz over a bandwidth of 5 kHz. We discuss strategies for improving the performance of the device, showing that, for the same given sensitivity, a mechanical multimode transducer can achieve a bandwidth significantly larger than that of a single-mode one.

  19. Intersegmental Eye-Head-Body Interactions during Complex Whole Body Movements

    PubMed Central

    von Laßberg, Christoph; Beykirch, Karl A.; Mohler, Betty J.; Bülthoff, Heinrich H.

    2014-01-01

    Using state-of-the-art technology, interactions of eye, head and intersegmental body movements were analyzed for the first time during multiple twisting somersaults of high-level gymnasts. With this aim, we used a unique combination of a 16-channel infrared kinemetric system; a three-dimensional video kinemetric system; wireless electromyography; and a specialized wireless sport-video-oculography system, which was able to capture and calculate precise oculomotor data under conditions of rapid multiaxial acceleration. All data were synchronized and integrated in a multimodal software tool for three-dimensional analysis. During specific phases of the recorded movements, a previously unknown eye-head-body interaction was observed. The phenomenon was marked by a prolonged and complete suppression of gaze-stabilizing eye movements, in favor of a tight coupling with the head, spine and joint movements of the gymnasts. Potential reasons for these observations are discussed with regard to earlier findings and integrated within a functional model. PMID:24763143

  20. State deadbeat response and observability in multi-modal systems

    NASA Technical Reports Server (NTRS)

    Conner, L. T., Jr.; Stanford, D. P.

    1984-01-01

    Two aspects of multimodal systems are examined. It is shown that any completely controllable system with state dimension n not exceeding three allows a choice of feedback matrices resulting in a state deadbeat response. Some of the results presented here are valid for arbitrary n, and it is suggested that for all n the state deadbeat response can be obtained under the hypothesis of complete controllability. The controllability canonical form for a multimodal system is refined by introducing a notion of observability which is dual to controllability for these systems.
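For a single mode (one fixed pair A, B), the deadbeat idea can be made concrete: choose feedback K so that the closed-loop matrix A − BK is nilpotent, which drives any initial state to zero in at most n steps. A minimal sketch with a hypothetical 2-state system (not from the paper), using Ackermann's formula with all desired poles placed at zero:

```python
import numpy as np

# Hypothetical controllable system x[k+1] = A x[k] + B u[k]
A = np.array([[0.0, 1.0],
              [-1.0, 1.5]])
B = np.array([[0.0],
              [1.0]])

# Controllability matrix C = [B, AB]; full rank means deadbeat feedback exists
C = np.hstack([B, A @ B])
assert np.linalg.matrix_rank(C) == 2

# Ackermann's formula: K = [0 1] C^{-1} p(A), with desired polynomial p(s) = s^2
K = np.array([[0.0, 1.0]]) @ np.linalg.inv(C) @ np.linalg.matrix_power(A, 2)

# Nilpotent closed loop: the state reaches zero in at most n = 2 steps
Acl = A - B @ K
print(np.allclose(np.linalg.matrix_power(Acl, 2), 0))  # True
```

In the multi-modal setting studied by the paper, the system switches among several (A, B) pairs, so a single Ackermann computation no longer suffices; the paper's contribution is showing when a sequence of per-mode feedback matrices can still produce a deadbeat response.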

  1. Haptic-Multimodal Flight Control System Update

    NASA Technical Reports Server (NTRS)

    Goodrich, Kenneth H.; Schutte, Paul C.; Williams, Ralph A.

    2011-01-01

The rapidly advancing capabilities of autonomous aircraft suggest a future where many of the responsibilities of today's pilot transition to the vehicle, transforming the pilot's job into something akin to driving a car or simply being a passenger. Notionally, this transition will reduce the specialized skills, training, and attention required of the human user while improving safety and performance. However, our experience with highly automated aircraft highlights many challenges to this transition, including: lack of automation resilience; adverse human-automation interaction under stress; and the difficulty of developing certification standards and methods of compliance for complex systems performing critical functions traditionally performed by the pilot (e.g., sense and avoid vs. see and avoid). Recognizing these opportunities and realities, researchers at NASA Langley are developing a haptic-multimodal flight control (HFC) system concept that can serve as a bridge from today's state-of-the-art aircraft, which are highly automated but have little autonomy and can only be operated safely by highly trained experts (i.e., pilots), to a future in which non-experts (e.g., drivers) can safely and reliably use autonomous aircraft to perform a variety of missions. This paper reviews the motivation and theoretical basis of the HFC system, describes its current state of development, and presents results from two pilot-in-the-loop simulation studies. These preliminary studies suggest the HFC reshapes human-automation interaction in a way well-suited to revolutionary ease-of-use.

  2. Multimodality cardiac imaging at IRCCS Policlinico San Donato: a new interdisciplinary vision.

    PubMed

    Lombardi, Massimo; Secchi, Francesco; Pluchinotta, Francesca R; Castelvecchio, Serenella; Montericcio, Vincenzo; Camporeale, Antonia; Bandera, Francesco

    2016-04-28

    Multimodality imaging is the efficient integration of various methods of cardiovascular imaging to improve the ability to diagnose, guide therapy, or predict outcome. This approach implies both the availability of different technologies in a single unit and the presence of dedicated staff with cardiologic and radiologic background and certified competence in more than one imaging technique. Interaction with clinical practice and existence of research programmes and educational activities are pivotal for the success of this model. The aim of this paper is to describe the multimodality cardiac imaging programme recently started at San Donato Hospital.

  3. Mode-selective mapping and control of vectorial nonlinear-optical processes in multimode photonic-crystal fibers.

    PubMed

    Hu, Ming-Lie; Wang, Ching-Yue; Song, You-Jian; Li, Yan-Feng; Chai, Lu; Serebryannikov, Evgenii; Zheltikov, Aleksei

    2006-02-06

    We demonstrate an experimental technique that allows a mapping of vectorial nonlinear-optical processes in multimode photonic-crystal fibers (PCFs). Spatial and polarization modes of PCFs are selectively excited in this technique by varying the tilt angle of the input beam and rotating the polarization of the input field. Intensity spectra of the PCF output plotted as a function of the input field power and polarization then yield mode-resolved maps of nonlinear-optical interactions in multimode PCFs, facilitating the analysis and control of nonlinear-optical transformations of ultrashort laser pulses in such fibers.

  4. Numerical investigation of nonlinear interactions between multimodal guided waves and delamination in composite structures

    NASA Astrophysics Data System (ADS)

    Shen, Yanfeng

    2017-04-01

    This paper presents a numerical investigation of the nonlinear interactions between multimodal guided waves and delamination in composite structures. The elastodynamic wave equations for anisotropic composite laminate were formulated using an explicit Local Interaction Simulation Approach (LISA). The contact dynamics was modeled using the penalty method. In order to capture the stick-slip contact motion, a Coulomb friction law was integrated into the computation procedure. A random gap function was defined for the contact pairs to model distributed initial closures or openings to approximate the nature of rough delamination interfaces. The LISA procedure was coded using the Compute Unified Device Architecture (CUDA), which enables the highly parallelized computation on powerful graphic cards. Several guided wave modes centered at various frequencies were investigated as the incident wave. Numerical case studies of different delamination locations across the thickness were carried out. The capability of different wave modes at various frequencies to trigger the Contact Acoustic Nonlinearity (CAN) was studied. The correlation between the delamination size and the signal nonlinearity was also investigated. Furthermore, the influence from the roughness of the delamination interfaces was discussed as well. The numerical investigation shows that the nonlinear features of wave delamination interactions can enhance the evaluation capability of guided wave Structural Health Monitoring (SHM) system. This paper finishes with discussion, concluding remarks, and suggestions for future work.

  5. Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-Related Applications.

    PubMed

    Corneanu, Ciprian Adrian; Simon, Marc Oliu; Cohn, Jeffrey F; Guerrero, Sergio Escalera

    2016-08-01

Facial expressions are an important way through which humans interact socially. Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. Interpreting such expressions remains challenging, and much research is needed about the way they relate to human affect. This paper presents a general overview of automatic RGB, 3D, thermal and multimodal facial expression analysis. We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state-of-the-art methods accordingly. We also present the important datasets and the benchmarking of the most influential methods. We conclude with a general discussion about trends, important questions and future lines of research.

  6. Signatures of Indistinguishability in Bosonic Many-Body Dynamics

    NASA Astrophysics Data System (ADS)

    Brünner, Tobias; Dufour, Gabriel; Rodríguez, Alberto; Buchleitner, Andreas

    2018-05-01

    The dynamics of bosons in generic multimode systems, such as Bose-Hubbard models, are not only determined by interactions among the particles, but also by their mutual indistinguishability manifested in many-particle interference. We introduce a measure of indistinguishability for Fock states of bosons whose mutual distinguishability is controlled by an internal degree of freedom. We demonstrate how this measure emerges both in the noninteracting and interacting evolution of observables. In particular, we find an unambiguous relationship between our measure and the variance of single-particle observables in the noninteracting limit. A nonvanishing interaction leads to a hierarchy of interaction-induced interference processes, such that even the expectation value of single-particle observables is influenced by the degree of indistinguishability.

  7. Localizing HIV/AIDS discourse in a rural Kenyan community.

    PubMed

    Banda, Felix; Oketch, Omondi

    2011-01-01

    This paper examines the effectiveness of multimodal texts used in HIV/AIDS campaigns in rural western Kenya using multimodal discourse analysis (Kress and Van Leeuwen, 2006; Martin and Rose, 2004). Twenty HIV/AIDS documents (posters, billboards and brochures) are analysed together with interview data (20 unstructured one-on-one interviews and six focus groups) from the target group to explore the effectiveness of the multimodal texts in engaging the target rural audience in meaningful interaction towards behavioural change. It is concluded that in some cases the HIV/AIDS messages are misinterpreted or lost as the multimodal texts used are unfamiliar and contradictory to the everyday life experiences of the rural folk. The paper suggests localization of HIV/AIDS discourse through use of local modes of communication and resources.

  8. A Physics-driven Neural Networks-based Simulation System (PhyNNeSS) for multimodal interactive virtual environments involving nonlinear deformable objects

    PubMed Central

    De, Suvranu; Deo, Dhannanjay; Sankaranarayanan, Ganesh; Arikatla, Venkata S.

    2012-01-01

    Background While an update rate of 30 Hz is considered adequate for real time graphics, a much higher update rate of about 1 kHz is necessary for haptics. Physics-based modeling of deformable objects, especially when large nonlinear deformations and complex nonlinear material properties are involved, at these very high rates is one of the most challenging tasks in the development of real time simulation systems. While some specialized solutions exist, there is no general solution for arbitrary nonlinearities. Methods In this work we present PhyNNeSS - a Physics-driven Neural Networks-based Simulation System - to address this long-standing technical challenge. The first step is an off-line pre-computation step in which a database is generated by applying carefully prescribed displacements to each node of the finite element models of the deformable objects. In the next step, the data is condensed into a set of coefficients describing neurons of a Radial Basis Function network (RBFN). During real-time computation, these neural networks are used to reconstruct the deformation fields as well as the interaction forces. Results We present realistic simulation examples from interactive surgical simulation with real time force feedback. As an example, we have developed a deformable human stomach model and a Penrose-drain model used in the Fundamentals of Laparoscopic Surgery (FLS) training tool box. Conclusions A unique computational modeling system has been developed that is capable of simulating the response of nonlinear deformable objects in real time. The method distinguishes itself from previous efforts in that a systematic physics-based pre-computational step allows training of neural networks which may be used in real time simulations. We show, through careful error analysis, that the scheme is scalable, with the accuracy being controlled by the number of neurons used in the simulation. 
PhyNNeSS has been integrated into SoFMIS (Software Framework for Multimodal Interactive Simulation) for general use. PMID:22629108
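The offline-train/online-evaluate pipeline described above can be sketched generically: fit a radial basis function network to precomputed input/output pairs, then reconstruct fields at query time with a single small matrix product. Everything below (the surrogate data, kernel width, and neuron count) is an illustrative assumption, not a detail of PhyNNeSS:

```python
import numpy as np

rng = np.random.default_rng(0)

# Offline step (stand-in for FEM precomputation): sample prescribed nodal
# displacements x and record the resulting deformation field y = f(x).
X = rng.uniform(-1.0, 1.0, size=(200, 3))    # prescribed displacements
Y = np.sin(X) @ rng.normal(size=(3, 50))     # surrogate deformation fields

# Condense the database into an RBF network: Gaussian kernels on fixed centers.
centers = X[:40]   # 40 neurons (illustrative; accuracy scales with this count)
sigma = 0.5        # kernel width (assumed)

def design_matrix(X):
    """Gaussian RBF activations of every input against every center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

# Train the output coefficients by linear least squares.
W, *_ = np.linalg.lstsq(design_matrix(X), Y, rcond=None)

# Online step: reconstructing a deformation field is one matrix product,
# cheap enough for kHz-rate haptic loops.
x_query = rng.uniform(-1.0, 1.0, size=(1, 3))
y_pred = design_matrix(x_query) @ W
print(y_pred.shape)  # (1, 50)
```

The paper's observation that accuracy is controlled by the number of neurons corresponds here to the number of rows in `centers`.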

  9. Field evaluation of a wearable multimodal soldier navigation system.

    PubMed

    Aaltonen, Iina; Laarni, Jari

    2017-09-01

    Challenging environments pose difficulties for terrain navigation, and therefore wearable and multimodal navigation systems have been proposed to overcome these difficulties. Few such navigation systems, however, have been evaluated in field conditions. We evaluated how a multimodal system can aid in navigating in a forest in the context of a military exercise. The system included a head-mounted display, headphones, and a tactile vibrating vest. Visual, auditory, and tactile modalities were tested and evaluated using unimodal, bimodal, and trimodal conditions. Questionnaires, interviews and observations were used to evaluate the advantages and disadvantages of each modality and their multimodal use. The guidance was considered easy to interpret and helpful in navigation. Simplicity of the displayed information was required, which was partially conflicting with the request for having both distance and directional information available. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Multi-Modal Traveler Information System - Gateway Functional Requirements

    DOT National Transportation Integrated Search

    1997-11-17

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  11. Multi-Modal Traveler Information System - Gateway Interface Control Requirements

    DOT National Transportation Integrated Search

    1997-10-30

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  12. Construction of a multimodal CT-video chest model

    NASA Astrophysics Data System (ADS)

    Byrnes, Patrick D.; Higgins, William E.

    2014-03-01

    Bronchoscopy enables a number of minimally invasive chest procedures for diseases such as lung cancer and asthma. For example, using the bronchoscope's continuous video stream as a guide, a physician can navigate through the lung airways to examine general airway health, collect tissue samples, or administer a disease treatment. In addition, physicians can now use new image-guided intervention (IGI) systems, which draw upon both three-dimensional (3D) multi-detector computed tomography (MDCT) chest scans and bronchoscopic video, to assist with bronchoscope navigation. Unfortunately, little use is made of the acquired video stream, a potentially invaluable source of information. In addition, little effort has been made to link the bronchoscopic video stream to the detailed anatomical information given by a patient's 3D MDCT chest scan. We propose a method for constructing a multimodal CT-video model of the chest. After automatically computing a patient's 3D MDCT-based airway-tree model, the method next parses the available video data to generate a positional linkage between a sparse set of key video frames and airway path locations. Next, a fusion/mapping of the video's color mucosal information and MDCT-based endoluminal surfaces is performed. This results in the final multimodal CT-video chest model. The data structure constituting the model provides a history of those airway locations visited during bronchoscopy. It also provides for quick visual access to relevant sections of the airway wall by condensing large portions of endoscopic video into representative frames containing important structural and textural information. When examined with a set of interactive visualization tools, the resulting fused data structure provides a rich multimodal data source. We demonstrate the potential of the multimodal model with both phantom and human data.
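The positional linkage described above, a sparse set of key video frames tied to airway path locations, might be organized along these lines (class and field names are hypothetical, not from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class KeyFrameLink:
    """One parsed key video frame tied to an airway-tree location."""
    frame_index: int      # index into the bronchoscopic video stream
    airway_branch: str    # branch label in the MDCT-based airway tree
    depth_mm: float       # position along the branch centerline

@dataclass
class CTVideoModel:
    """Multimodal chest model: MDCT airway tree plus sparse video linkage."""
    links: list[KeyFrameLink] = field(default_factory=list)

    def frames_for_branch(self, branch: str) -> list[int]:
        # Quick visual access: key frames recorded along a given airway branch.
        return [l.frame_index for l in self.links if l.airway_branch == branch]

model = CTVideoModel()
model.links += [KeyFrameLink(120, "RB1", 4.5), KeyFrameLink(388, "RB1", 11.0),
                KeyFrameLink(955, "LB6", 2.0)]
print(model.frames_for_branch("RB1"))  # [120, 388]
```

A structure like this captures the two roles the paper assigns to the model: a history of visited airway locations, and condensed access to representative frames per airway section.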

  13. Multimodal medical information retrieval with unsupervised rank fusion.

    PubMed

    Mourão, André; Martins, Flávio; Magalhães, João

    2015-01-01

    Modern medical information retrieval systems are paramount to manage the insurmountable quantities of clinical data. These systems empower health care experts in the diagnosis of patients and play an important role in the clinical decision process. However, the ever-growing heterogeneous information generated in medical environments poses several challenges for retrieval systems. We propose a medical information retrieval system with support for multimodal medical case-based retrieval. The system supports medical information discovery by providing multimodal search, through a novel data fusion algorithm, and term suggestions from a medical thesaurus. Our search system compared favorably to other systems in 2013 ImageCLEFMedical. Copyright © 2014 Elsevier Ltd. All rights reserved.
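The paper's fusion algorithm is its own contribution; as a generic stand-in, reciprocal rank fusion (RRF) is a widely used unsupervised scheme that combines ranked lists from different modalities using only rank positions (the constant k = 60 below is the conventional value from the RRF literature, not a parameter from this paper):

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse ranked result lists (each ordered best-first) by summing 1/(k+rank)."""
    scores = defaultdict(float)
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical case: a text run and an image run over the same case collection.
text_run = ["case12", "case7", "case3", "case9"]
image_run = ["case12", "case4", "case7"]
print(reciprocal_rank_fusion([text_run, image_run]))
# ['case12', 'case7', 'case4', 'case3', 'case9']
```

Because only ranks are used, no score normalization across heterogeneous modalities is needed, which is what makes this family of methods attractive for multimodal case-based retrieval.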

  14. Multi-Modal Traveler Information System - Alternative GCM Corridor Technologies and Strategies

    DOT National Transportation Integrated Search

    1997-10-24

    The purpose of this working paper is to summarize current and evolving Intelligent Transportation System (ITS) technologies and strategies related to the design, development, and deployment of regional multi-modal traveler information systems. This r...

  15. Multi-Modal Traveler Information System - GCM Corridor Architecture Interface Control Requirements

    DOT National Transportation Integrated Search

    1997-10-31

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  16. Multi-Modal Traveler Information System - GCM Corridor Architecture Functional Requirements

    DOT National Transportation Integrated Search

    1997-11-17

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  17. Awareware: Narrowcasting Attributes for Selective Attention, Privacy, and Multipresence

    NASA Astrophysics Data System (ADS)

    Cohen, Michael; Newton Fernando, Owen Noel

The domains of CSCW (computer-supported collaborative work) and DSC (distributed synchronous collaboration) span real-time interactive multiuser systems, shared information spaces, and applications for teleexistence and artificial reality, including collaborative virtual environments (CVEs) (Benford et al., 2001). As presence awareness systems emerge, it is important to develop appropriate interfaces and architectures for managing multimodal multiuser systems. Especially in consideration of the persistent connectivity enabled by affordable networked communication, shared distributed environments require generalized control of media streams: techniques to control source → sink transmissions in synchronous groupware, including teleconferences and chatspaces, online role-playing games, and virtual concerts.
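Control of source → sink transmissions can be modeled as inclusion and exclusion predicates on sources and sinks. The sketch below is a hedged reading of narrowcasting-style filtering; the attribute names ('mute', 'deafen', solo/attend sets) are assumptions for illustration, not the authors' exact formalism:

```python
def transmits(source, sink, soloed_sources=frozenset(), attended_sinks=frozenset()):
    """Decide whether a media stream flows from source to sink.

    Exclusion attributes ('mute' on a source, 'deafen' on a sink) block the
    stream; non-empty solo/attend sets act as inclusion filters, letting only
    the selected sources or sinks participate.
    """
    if source.get("mute") or sink.get("deafen"):
        return False
    if soloed_sources and source["name"] not in soloed_sources:
        return False
    if attended_sinks and sink["name"] not in attended_sinks:
        return False
    return True

# A chatspace with one soloed speaker: only her stream reaches listeners.
alice, carol, bob = {"name": "alice"}, {"name": "carol"}, {"name": "bob"}
print(transmits(alice, bob, soloed_sources={"alice"}))  # True
print(transmits(carol, bob, soloed_sources={"alice"}))  # False
```

Predicates of this shape give each participant selective attention (attend/deafen on their own sinks) and privacy (mute/solo on their own sources) without any central mixing authority.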

  18. Trust Measurement using Multimodal Behavioral Analysis and Uncertainty Aware Trust Calibration

    DTIC Science & Technology

    2018-01-05

to estimate their performance based on all prior trials. Meanwhile, by comparing the decisions of participants with the...it is easier compared with situations when more trials have been done. It should be noted that if a participant is good at memorizing the previous...them. The proposed study, being quantitative and explorative, is expected to reveal a number of findings that benefit interaction system design and

  19. Optics & Opto-Electronic Systems

    DTIC Science & Technology

    1988-06-01

its reflection by the cavity boundaries, and its reabsorption by the atom. Multimode corrections to the single-mode Jaynes-Cummings model are...walls. Transients in the Micromaser (C. R. Stroud, Jr.). The Jaynes-Cummings model of a single two-level atom interacting with a single field mode of a...increasing laser intensity and to be as large as 22 bits/sec. A standard model of self-pumped phase conjugation due to four-wave mixing has been

  20. A Closer Look at Bilingual Students' Use of Multimodality in the Context of an Area Comparison Problem from a Large-Scale Assessment

    ERIC Educational Resources Information Center

    Fernandes, Anthony; Kahn, Leslie H.; Civil, Marta

    2017-01-01

    In this article, we use multimodality to examine how bilingual students interact with an area task from the National Assessment of Educational Progress in task-based interviews. Using vignettes, we demonstrate how some of these students manipulate the concrete materials, and use gestures, as a primary form of structuring their explanations and…

  1. Toward a Neuroscientific Understanding of Play: A Dimensional Coding Framework for Analyzing Infant–Adult Play Patterns

    PubMed Central

    Neale, Dave; Clackson, Kaili; Georgieva, Stanimira; Dedetas, Hatice; Scarpate, Melissa; Wass, Sam; Leong, Victoria

    2018-01-01

    Play during early life is a ubiquitous activity, and an individual’s propensity for play is positively related to cognitive development and emotional well-being. Play behavior (which may be solitary or shared with a social partner) is diverse and multi-faceted. A challenge for current research is to converge on a common definition and measurement system for play – whether examined at a behavioral, cognitive or neurological level. Combining these different approaches in a multimodal analysis could yield significant advances in understanding the neurocognitive mechanisms of play, and provide the basis for developing biologically grounded play models. However, there is currently no integrated framework for conducting a multimodal analysis of play that spans brain, cognition and behavior. The proposed coding framework uses grounded and observable behaviors along three dimensions (sensorimotor, cognitive and socio-emotional), to compute inferences about playful behavior in a social context, and related social interactional states. Here, we illustrate the sensitivity and utility of the proposed coding framework using two contrasting dyadic corpora (N = 5) of mother-infant object-oriented interactions during experimental conditions that were either non-conducive (Condition 1) or conducive (Condition 2) to the emergence of playful behavior. We find that the framework accurately identifies the modal form of social interaction as being either non-playful (Condition 1) or playful (Condition 2), and further provides useful insights about differences in the quality of social interaction and temporal synchronicity within the dyad. It is intended that this fine-grained coding of play behavior will be easily assimilated with, and inform, future analysis of neural data that is also collected during adult–infant play. 
In conclusion, here, we present a novel framework for analyzing the continuous time-evolution of adult–infant play patterns, underpinned by biologically informed state coding along sensorimotor, cognitive and socio-emotional dimensions. We expect that the proposed framework will have wide utility amongst researchers wishing to employ an integrated, multimodal approach to the study of play, and lead toward a greater understanding of the neuroscientific basis of play. It may also yield insights into a new biologically grounded taxonomy of play interactions. PMID:29618994

  2. Toward a Neuroscientific Understanding of Play: A Dimensional Coding Framework for Analyzing Infant-Adult Play Patterns.

    PubMed

    Neale, Dave; Clackson, Kaili; Georgieva, Stanimira; Dedetas, Hatice; Scarpate, Melissa; Wass, Sam; Leong, Victoria

    2018-01-01

    Play during early life is a ubiquitous activity, and an individual's propensity for play is positively related to cognitive development and emotional well-being. Play behavior (which may be solitary or shared with a social partner) is diverse and multi-faceted. A challenge for current research is to converge on a common definition and measurement system for play - whether examined at a behavioral, cognitive or neurological level. Combining these different approaches in a multimodal analysis could yield significant advances in understanding the neurocognitive mechanisms of play, and provide the basis for developing biologically grounded play models. However, there is currently no integrated framework for conducting a multimodal analysis of play that spans brain, cognition and behavior. The proposed coding framework uses grounded and observable behaviors along three dimensions (sensorimotor, cognitive and socio-emotional), to compute inferences about playful behavior in a social context, and related social interactional states. Here, we illustrate the sensitivity and utility of the proposed coding framework using two contrasting dyadic corpora ( N = 5) of mother-infant object-oriented interactions during experimental conditions that were either non-conducive (Condition 1) or conducive (Condition 2) to the emergence of playful behavior. We find that the framework accurately identifies the modal form of social interaction as being either non-playful (Condition 1) or playful (Condition 2), and further provides useful insights about differences in the quality of social interaction and temporal synchronicity within the dyad. It is intended that this fine-grained coding of play behavior will be easily assimilated with, and inform, future analysis of neural data that is also collected during adult-infant play. 
In conclusion, here, we present a novel framework for analyzing the continuous time-evolution of adult-infant play patterns, underpinned by biologically informed state coding along sensorimotor, cognitive and socio-emotional dimensions. We expect that the proposed framework will have wide utility amongst researchers wishing to employ an integrated, multimodal approach to the study of play, and lead toward a greater understanding of the neuroscientific basis of play. It may also yield insights into a new biologically grounded taxonomy of play interactions.

  3. Multimodal system planning technique : an analytical approach to peak period operation

    DOT National Transportation Integrated Search

    1995-11-01

    The multimodal system planning technique described in this report is an improvement of the methodology used in the Dallas System Planning Study. The technique includes a spreadsheet-based process to identify the costs of congestion, construction, and...

  4. Remote sensing of multimodal transportation systems : research brief.

    DOT National Transportation Integrated Search

    2016-09-01

    Remote Sensing of Multimodal Transportation Systems : Rapid condition monitoring and performance evaluations of the vast and vulnerable transportation infrastructure have been elusive. The framework and models developed in this research will enable th...

  5. Empowering Prospective Teachers to Become Active Sense-Makers: Multimodal Modeling of the Seasons

    NASA Astrophysics Data System (ADS)

    Kim, Mi Song

    2015-10-01

    Situating science concepts in concrete and authentic contexts, using information and communications technologies, including multimodal modeling tools, is important for promoting the development of higher-order thinking skills in learners. However, teachers often struggle to integrate emergent multimodal models into a technology-rich informal learning environment. Our design-based research co-designs and develops engaging, immersive, and interactive informal learning activities called "Embodied Modeling-Mediated Activities" (EMMA) to support not only Singaporean learners' deep learning of astronomy but also the capacity of teachers. As part of the research on EMMA, this case study describes two prospective teachers' co-design processes involving multimodal models for teaching and learning the concept of the seasons in a technology-rich informal learning setting. Our study uncovers four prominent themes emerging from our data concerning the contextualized nature of learning and teaching involving multimodal models in informal learning contexts: (1) promoting communication and emerging questions, (2) offering affordances through limitations, (3) explaining one concept involving multiple concepts, and (4) integrating teaching and learning experiences. This study has implications for the development of a pedagogical framework for teaching and learning in technology-enhanced learning environments—that is, empowering teachers to become active sense-makers using multimodal models.

  6. A novel approach of dynamic cross correlation analysis on molecular dynamics simulations and its application to Ets1 dimer-DNA complex.

    PubMed

    Kasahara, Kota; Fukuda, Ikuo; Nakamura, Haruki

    2014-01-01

    The dynamic cross correlation (DCC) analysis is a popular method for analyzing the trajectories of molecular dynamics (MD) simulations. However, it is difficult to detect correlative motions that appear transiently in only a part of the trajectory, such as atomic contacts between the side-chains of amino acids, which may rapidly flip. In order to capture these multi-modal behaviors of atoms, which often play essential roles, particularly at the interfaces of macromolecules, we have developed the "multi-modal DCC (mDCC)" analysis. The mDCC is an extension of the DCC and it takes advantage of a Bayesian-based pattern recognition technique. We performed MD simulations for molecular systems modeled from the (Ets1)2-DNA complex and analyzed their results with the mDCC method. Ets1 is an essential transcription factor for a variety of physiological processes, such as immunity and cancer development. Although many structural and biochemical studies have so far been performed, its DNA binding properties are still not well characterized. In particular, it is not straightforward to understand the molecular mechanisms by which the cooperative binding of two Ets1 molecules facilitates their recognition of Stromelysin-1 gene regulatory elements. A correlation network was constructed among the essential atomic contacts, and the two major pathways by which the two Ets1 molecules communicate were identified. One is a pathway via direct protein-protein interactions and the other is that via the bound DNA intervening between the two recognition helices. These two pathways intersected at the particular cytosine bases (C110/C11), interacting with the H1, H2, and H3 helices. Furthermore, the mDCC analysis showed that both pathways included the transient interactions at their intermolecular interfaces of Tyr396-C11 and Ala327-Asn380 in multi-modal motions of the amino acid side chains and the nucleotide backbone.
Thus, the current mDCC approach is a powerful tool to reveal these complicated behaviors and scrutinize intermolecular communications in a molecular system.
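For orientation, the conventional DCC matrix that mDCC generalizes is a normalized covariance of atomic displacements about the mean structure. A minimal sketch follows; the Bayesian per-mode decomposition that distinguishes mDCC is omitted, and the trajectory layout is an assumption for illustration.

```python
import numpy as np

def dcc_matrix(traj):
    """Standard dynamic cross-correlation matrix.

    traj: array of shape (n_frames, n_atoms, 3) of atomic positions.
    Returns an (n_atoms, n_atoms) matrix of normalized displacement
    correlations C_ij = <dr_i . dr_j> / sqrt(<|dr_i|^2> <|dr_j|^2>).
    """
    disp = traj - traj.mean(axis=0)                       # displacements from mean structure
    cov = np.einsum('tik,tjk->ij', disp, disp) / traj.shape[0]
    norm = np.sqrt(np.outer(np.diag(cov), np.diag(cov)))  # per-atom fluctuation magnitudes
    return cov / norm
```

In the mDCC scheme described in the abstract, frames would first be assigned to motional modes (e.g., by a mixture model) and a correlation of this form computed per mode, so that transient contacts are not averaged away.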

  7. Fluorescence Imaging Topography Scanning System for intraoperative multimodal imaging

    PubMed Central

    Quang, Tri T.; Kim, Hye-Yeong; Bao, Forrest Sheng; Papay, Francis A.; Edwards, W. Barry; Liu, Yang

    2017-01-01

    Fluorescence imaging is a powerful technique with diverse applications in intraoperative settings. Visualization of three dimensional (3D) structures and depth assessment of lesions, however, are oftentimes limited in planar fluorescence imaging systems. In this study, a novel Fluorescence Imaging Topography Scanning (FITS) system has been developed, which offers color reflectance imaging, fluorescence imaging and surface topography scanning capabilities. The system is compact and portable, and thus suitable for deployment in the operating room without disturbing the surgical flow. For system performance, parameters including near infrared fluorescence detection limit, contrast transfer functions and topography depth resolution were characterized. The developed system was tested in chicken tissues ex vivo with simulated tumors for intraoperative imaging. We subsequently conducted in vivo multimodal imaging of sentinel lymph nodes in mice using FITS and PET/CT. The PET/CT/optical multimodal images were co-registered and conveniently presented to users to guide surgeries. Our results show that the developed system can facilitate multimodal intraoperative imaging. PMID:28437441

  8. Multimodal representation of limb endpoint position in the posterior parietal cortex.

    PubMed

    Shi, Ying; Apker, Gregory; Buneo, Christopher A

    2013-04-01

    Understanding the neural representation of limb position is important for comprehending the control of limb movements and the maintenance of body schema, as well as for the development of neuroprosthetic systems designed to replace lost limb function. Multiple subcortical and cortical areas contribute to this representation, but its multimodal basis has largely been ignored. Regarding the parietal cortex, previous results suggest that visual information about arm position is not strongly represented in area 5, although these results were obtained under conditions in which animals were not using their arms to interact with objects in their environment, which could have affected the relative weighting of relevant sensory signals. Here we examined the multimodal basis of limb position in the superior parietal lobule (SPL) as monkeys reached to and actively maintained their arm position at multiple locations in a frontal plane. On half of the trials both visual and nonvisual feedback of the endpoint of the arm were available, while on the other trials visual feedback was withheld. Many neurons were tuned to arm position, while a smaller number were modulated by the presence/absence of visual feedback. Visual modulation generally took the form of a decrease in both firing rate and variability with limb vision and was associated with more accurate decoding of position at the population level under these conditions. These findings support a multimodal representation of limb endpoint position in the SPL but suggest that visual signals are relatively weakly represented in this area, and only at the population level.
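Population-level decoding of limb position, as referred to above, is commonly done with a linear decoder from firing rates to endpoint coordinates. The sketch below is a generic least-squares version, not the authors' actual decoding analysis.

```python
import numpy as np

def fit_linear_decoder(rates, positions):
    """Least-squares linear decoder: firing rates -> 2D endpoint position.

    rates: (n_trials, n_neurons) trial-averaged firing rates.
    positions: (n_trials, 2) endpoint positions in the frontal plane.
    Returns a weight matrix that includes an intercept row.
    """
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])  # append bias term
    W, *_ = np.linalg.lstsq(X, positions, rcond=None)
    return W

def decode(rates, W):
    """Apply the fitted decoder to new firing-rate observations."""
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])
    return X @ W
```

Comparing decoding error between vision and no-vision trial sets with such a decoder is one way to quantify the population-level accuracy difference the abstract reports.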

  9. Developing single-laser sources for multimodal coherent anti-Stokes Raman scattering microscopy

    NASA Astrophysics Data System (ADS)

    Pegoraro, Adrian Frank

    Coherent anti-Stokes Raman scattering (CARS) microscopy has developed rapidly and is opening the door to new types of experiments. This work describes the development of new laser sources for CARS microscopy and their use for different applications. It is specifically focused on multimodal nonlinear optical microscopy—the simultaneous combination of different imaging techniques. This allows us to address a diverse range of applications, such as the study of biomaterials, fluid inclusions, atherosclerosis, hepatitis C infection in cells, and ice formation in cells. For these applications new laser sources are developed that allow for practical multimodal imaging. For example, it is shown that using a single Ti:sapphire oscillator with a photonic crystal fiber, it is possible to develop a versatile multimodal imaging system using optimally chirped laser pulses. This system can perform simultaneous two photon excited fluorescence, second harmonic generation, and CARS microscopy. The versatility of the system is further demonstrated by showing that it is possible to probe different Raman modes using CARS microscopy simply by changing a time delay between the excitation beams. Using optimally chirped pulses also enables further simplification of the laser system required by using a single fiber laser combined with nonlinear optical fibers to perform effective multimodal imaging. While these sources are useful for practical multimodal imaging, it is believed that for further improvements in CARS microscopy sensitivity, new excitation schemes are necessary. This has led to the design of a new, high power, extended cavity oscillator that should be capable of implementing new excitation schemes for CARS microscopy as well as other techniques. Our interest in multimodal imaging has led us to other areas of research as well. 
For example, a fiber-coupling scheme for signal collection in the forward direction is demonstrated that allows for fluorescence lifetime imaging without significant temporal distortion. Also highlighted is an imaging artifact that is unique to CARS microscopy that can alter image interpretation, especially when using multimodal imaging. By combining expertise in nonlinear optics, laser development, fiber optics, and microscopy, we have developed systems and techniques that will be of benefit for multimodal CARS microscopy.

  10. Optical circulation in a multimode optomechanical resonator.

    PubMed

    Ruesink, Freek; Mathew, John P; Miri, Mohammad-Ali; Alù, Andrea; Verhagen, Ewold

    2018-05-04

    Breaking the symmetry of electromagnetic wave propagation enables important technological functionality. In particular, circulators are nonreciprocal components that can route photons directionally in classical or quantum photonic circuits and offer prospects for fundamental research on electromagnetic transport. Developing highly efficient circulators thus presents an important challenge, especially to realise compact reconfigurable implementations that do not rely on magnetic fields to break reciprocity. We demonstrate optical circulation utilising radiation pressure interactions in an on-chip multimode optomechanical system. Mechanically mediated optical mode conversion in a silica microtoroid provides a synthetic gauge bias for light, enabling four-port circulation that exploits tailored interference between appropriate light paths. We identify two sideband conditions under which ideal circulation is approached, allowing us to demonstrate experimentally ~10 dB isolation and <3 dB insertion loss in all relevant channels. We show the possibility of actively controlling the circulator properties, enabling ideal opportunities for reconfigurable integrated nanophotonic circuits.

  11. Fabrication of flexible, multimodal light-emitting devices for wireless optogenetics

    PubMed Central

    Huang, Xian; Jung, Yei Hwan; Al-Hasani, Ream; Omenetto, Fiorenzo G.

    2014-01-01

    The rise of optogenetics provides unique opportunities to advance materials and biomedical engineering as well as fundamental understanding in neuroscience. This protocol describes the fabrication of optoelectronic devices for studying intact neural systems. Unlike optogenetic approaches that rely on rigid fiber optics tethered to external light sources, these novel devices utilize flexible substrates to carry wirelessly powered microscale, inorganic light-emitting diodes (μ-ILEDs) and multimodal sensors inside the brain. We describe the technical procedures for construction of these devices, their corresponding radiofrequency power scavengers, and their implementation in vivo for experimental application. In total, the timeline of the procedure, including device fabrication, implantation, and preparation to begin in vivo experimentation, can be completed in approximately 3–8 weeks. Implementation of these devices allows for chronic (tested up to six months), wireless optogenetic manipulation of neural circuitry in animals during social interaction, home-cage behavior, and other complex natural environments. PMID:24202555

  12. Defining the property space for chromatographic ligands from a homologous series of mixed-mode ligands.

    PubMed

    Woo, James A; Chen, Hong; Snyder, Mark A; Chai, Yiming; Frost, Russell G; Cramer, Steven M

    2015-08-14

    A homologous ligand library based on the commercially-available Nuvia cPrime ligand was generated to systematically explore various features of a multimodal cation-exchange ligand and to identify structural variants that had significantly altered chromatographic selectivity. Substitution of the polar amide bond with more hydrophobic chemistries was found to enhance retention while remaining hydrophobically-selective for aromatic residues. In contrast, increasing the solvent exposure of the aromatic ring was observed to strengthen the ligand affinity for both types of hydrophobic residues. An optimal linker length between the charged and hydrophobic moieties was also observed to enhance retention, balancing the steric accessibility of the hydrophobic moiety with its ability to interact independently of the charged group. The weak pKa of the carboxylate charge group was found to have a notable impact on protein retention on Nuvia cPrime at lower pH, increasing hydrophobic interactions with the protein. Substituting the charged group with a sulfonic acid allowed this strong MM ligand to retain its electrostatic-dominant character in this lower pH range. pH gradient experiments were also carried out to further elucidate this pH dependent behavior. A single QSAR model was generated using this accumulated experimental data to predict protein retention across a range of multimodal and ion exchange systems. This model could correctly predict the retention of proteins on resins that were not included in the original model and could prove quite powerful as an in silico approach toward designing more effective and differentiated multimodal ligands. Copyright © 2015. Published by Elsevier B.V.
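A QSAR model of the kind described maps structural descriptors to chromatographic retention. The sketch below uses closed-form ridge regression as a stand-in; the descriptor set and data are purely illustrative, and the authors' actual QSAR methodology is not specified in the abstract.

```python
import numpy as np

def fit_ridge(X, y, alpha=1.0):
    """Closed-form ridge regression: descriptor matrix -> retention values.

    X: (n_proteins, n_descriptors), y: (n_proteins,) retention data.
    alpha is the L2 regularization strength.
    """
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ y)

def predict(X, w):
    """Predict retention for new protein/resin descriptor rows."""
    return X @ w
```

Predicting retention on resins held out of the training set, as the abstract describes, would correspond to calling `predict` on descriptor rows for the unseen resins.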

  13. The Impact Of Multimode Fiber Chromatic Dispersion On Data Communications

    NASA Astrophysics Data System (ADS)

    Hackert, Michael J.

    1990-01-01

    Maximum capability at the lowest cost is the goal of contemporary communications managers. Under the competitive pressures that modern businesses are experiencing, communications needs must be met with the most information-carrying capacity for the lowest cost. Optical fiber communication systems meet these requirements while providing reliability, system integrity, and potential future upgradability. Consequently, optical fiber is finding numerous applications in addition to its traditional telephony plant. Fiber based systems are meeting these requirements in building networks and computer interconnects at a lower cost than copper based systems. A fiber type being chosen by industry to meet these needs in standard systems, such as FDDI, is multimode fiber. Multimode fiber systems offer cost advantages over single-mode fiber through lower fiber connection costs. Also, system designers can gain savings by using low cost, high reliability, wide spectral width sources such as LEDs instead of lasers and by operating at higher bit rates than used for multimode systems in the past. However, in order to maximize the cost savings while ensuring the system will operate as intended, the chromatic dispersion of the fiber must be taken into account. This paper explains how to do that and shows how to calculate multimode chromatic dispersion for each of the standard fiber sizes (50 μm, 62.5 μm, 85 μm, and 100 μm core diameter).
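In outline, the calculation the paper refers to is: chromatic pulse spreading equals |D(λ)| times the source spectral width times the fiber length, with the dispersion coefficient D(λ) taken from a Sellmeier-type fit. The sketch below uses the common three-term approximation D(λ) = (S0/4)(λ − λ0⁴/λ³); the zero-dispersion wavelength and slope values are illustrative assumptions, not figures from the paper.

```python
def dispersion_coefficient(wl_nm, s0=0.097, wl0_nm=1343.0):
    """Chromatic dispersion D(lambda) in ps/(nm*km).

    Uses the three-term Sellmeier approximation D = (S0/4)*(wl - wl0**4/wl**3).
    s0 (ps/(nm^2*km)) and wl0_nm (zero-dispersion wavelength, nm) are
    illustrative values for a graded-index multimode fiber, not vendor data.
    """
    return (s0 / 4.0) * (wl_nm - wl0_nm**4 / wl_nm**3)

def pulse_broadening_ps(wl_nm, source_width_nm, length_km, **kw):
    """Chromatic pulse spreading: |D| * source spectral width * length."""
    return abs(dispersion_coefficient(wl_nm, **kw)) * source_width_nm * length_km
```

For example, a 1300 nm LED with a 150 nm spectral width over a 2 km link gives a spreading on the order of a nanosecond under these assumed fiber parameters, which is why wide-spectral-width sources at high bit rates make this calculation necessary.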

  14. Multi-modal trip planning system : Northeastern Illinois Regional Transportation Authority.

    DOT National Transportation Integrated Search

    2013-01-01

    This report evaluates the Multi-Modal Trip Planner System (MMTPS) implemented by the Northeastern Illinois Regional Transportation Authority (RTA) against the specific functional objectives enumerated by the Federal Transit Administration (FTA) in it...

  15. A Multimodal Mindfulness Training to Address Mental Health Symptoms in Providers Who Care for and Interact With Children in Relation to End-of-Life Care.

    PubMed

    O'Mahony, Sean; Gerhart, James; Abrams, Ira; Greene, Michelle; McFadden, Rory; Tamizuddin, Sara; Levy, Mitchell M

    2017-11-01

    Medical providers may face unique emotional challenges when confronted with the suffering of chronically ill, dying, and bereaved children. This study assessed the preliminary outcomes of participation in a group-based multimodal mindfulness training pilot designed to reduce symptoms of burnout and mental health symptoms in providers who interact with children in the context of end-of-life care. A total of 13 medical providers who care for children facing life-threatening illness or bereaved children participated in a 9-session multimodal mindfulness training. Mental health symptoms and burnout were assessed prior to the program, at the program midpoint, and at the conclusion of the program. Participation in the pilot was associated with significant reductions in depressive and posttraumatic stress disorder (PTSD) symptoms among providers (P < .05). Mindfulness-based programs may help providers recognize and address symptoms of depression and PTSD. Additional research is needed to enhance access and uptake of programming among larger groups of participants.

  16. Using a Multimodal Learning System to Support Music Instruction

    ERIC Educational Resources Information Center

    Yu, Pao-Ta; Lai, Yen-Shou; Tsai, Hung-Hsu; Chang, Yuan-Hou

    2010-01-01

    This paper describes a multimodality approach that helps primary-school students improve their learning performance during music instruction. Multimedia instruction is an effective way to help learners create meaningful knowledge and to make referential connections between mental representations. This paper proposes a multimodal, dual-channel,…

  17. Multimodal Learning Clubs

    ERIC Educational Resources Information Center

    Casey, Heather

    2012-01-01

    Multimodal learning clubs link principles of motivation and engagement with 21st century technological tools and texts to support content area learning. The author describes how a sixth grade health teacher and his class incorporated multimodal learning clubs into a unit of study on human body systems. The students worked collaboratively online…

  18. Simon Plays Simon Says: The Timing of Turn-Taking in an imitation Game

    DTIC Science & Technology

    2012-01-01

    found in the linguistics literature as well. Some work focuses on the structure of syntax and semantics in language usage [3], and other work...components come from many different approaches. Turn-taking is a highly multimodal process, and prior work gives much in-depth analysis of specific...attractive as an initial domain of investigation for its multimodality, interactive symmetry, and relative simplicity, being isolated from such

  19. Considering the Activity in Interactivity: A Multimodal Perspective

    ERIC Educational Resources Information Center

    Schwartz, Ruth N.

    2010-01-01

    What factors contribute to effective multimedia learning? Increasingly, interactivity is considered a critical component that can foster learning in multimedia environments, including simulations and games. Although a number of recent studies investigate interactivity as a factor in the effective design of multimedia instruction, most examine only…

  20. Older users, multimodal reminders and assisted living technology.

    PubMed

    Warnock, David; McGee-Lennon, Marilyn; Brewster, Stephen

    2012-09-01

    The primary users of assisted living technology are older people who are likely to have one or more sensory impairments. Multimodal technology allows users to interact via non-impaired senses and provides alternative ways to interact if primary interaction methods fail. An empirical user study was carried out with older participants which evaluated the performance, disruptiveness and subjective workload of visual, audio, tactile and olfactory notifications, and then compared the results with earlier findings in younger participants. It was found that disruption and subjective workload were not affected by modality, although some modalities were more effective at delivering information accurately. It is concluded that although further studies need to be carried out in real-world settings, the findings support the argument for multiple modalities in assisted living technology.

  1. Social Network Extraction and Analysis Based on Multimodal Dyadic Interaction

    PubMed Central

    Escalera, Sergio; Baró, Xavier; Vitrià, Jordi; Radeva, Petia; Raducanu, Bogdan

    2012-01-01

    Social interactions are a very important component in people’s lives. Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. For our study, we used a set of videos belonging to New York Times’ Blogging Heads opinion blog. The Social Network is represented as an oriented graph, whose directed links are determined by the Influence Model. The links’ weights are a measure of the “influence” a person has over the other. The states of the Influence Model encode automatically extracted audio/visual features from our videos using state-of-the-art algorithms. Our results are reported in terms of accuracy of audio/visual data fusion for speaker segmentation and centrality measures used to characterize the extracted social network. PMID:22438733
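Centrality on a weighted, directed influence graph of this kind can be sketched with eigenvector centrality computed by power iteration. This particular measure is an assumption for illustration; the abstract does not specify which centrality measures were used.

```python
import numpy as np

def eigenvector_centrality(W, n_iter=200, tol=1e-10):
    """Power iteration on a weighted adjacency matrix W, where W[i, j]
    is the 'influence' of node i over node j.

    Returns a centrality vector normalized to sum to 1: a node is central
    when influential nodes direct strong links at it.
    """
    n = W.shape[0]
    c = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        new = W.T @ c              # accumulate incoming influence
        s = new.sum()
        if s == 0:
            return c               # no incoming links anywhere; keep uniform
        new /= s
        if np.abs(new - c).max() < tol:
            return new
        c = new
    return c
```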

  2. The Interactive Origin and the Aesthetic Modelling of Image-Schemas and Primary Metaphors.

    PubMed

    Martínez, Isabel C; Español, Silvia A; Pérez, Diana I

    2018-06-02

    According to the theory of conceptual metaphor, image-schemas and primary metaphors are preconceptual structures configured in human cognition, based on sensory-motor environmental activity. Focusing on the way both non-conceptual structures are embedded in early social interaction, we provide empirical evidence for the interactive and intersubjective ontogenesis of image-schemas and primary metaphors. We present the results of a multimodal image-schematic microanalysis of three interactive infant-directed performances (the composition of movement, touch, speech, and vocalization that adults produce for-and-with the infants). The microanalyses show that adults aesthetically highlight the image-schematic structures embedded in the multimodal composition of the performance, and that primary metaphors are also lived as embedded in these inter-enactive experiences. The findings corroborate that the psychological domains of cognition and affection are not in rivalry or conflict but rather intertwined in meaningful experiences.

  3. Vestibular-somatosensory interactions: effects of passive whole-body rotation on somatosensory detection.

    PubMed

    Ferrè, Elisa Raffaella; Kaliuzhna, Mariia; Herbelin, Bruno; Haggard, Patrick; Blanke, Olaf

    2014-01-01

    Vestibular signals are strongly integrated with information from several other sensory modalities. For example, vestibular stimulation was reported to improve tactile detection. However, this improvement could reflect either a multimodal interaction or an indirect interaction driven by vestibular effects on spatial attention and orienting. Here we investigate whether natural vestibular activation induced by passive whole-body rotation influences tactile detection. In particular, we assessed the ability to detect faint tactile stimuli to the fingertips of the left and right hand during spatially congruent or incongruent rotations. We found that passive whole-body rotations significantly enhanced sensitivity to faint shocks, without affecting response bias. Critically, this enhancement of somatosensory sensitivity did not depend on the spatial congruency between the direction of rotation and the hand stimulated. Thus, our results support a multimodal interaction, likely in brain areas receiving both vestibular and somatosensory signals.
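The dissociation reported here, enhanced sensitivity without a change in response bias, is conventionally quantified with signal detection theory. A minimal sketch of the standard d′ and criterion c indices follows; the study's actual analysis pipeline is not specified in the abstract.

```python
from statistics import NormalDist

def dprime_and_bias(hit_rate, false_alarm_rate):
    """Signal-detection indices from hit and false-alarm rates.

    Sensitivity d' = z(H) - z(FA); response bias c = -(z(H) + z(FA)) / 2,
    where z is the inverse of the standard normal CDF.
    Rates must be strictly between 0 and 1 (apply a correction otherwise).
    """
    z = NormalDist().inv_cdf
    zh, zfa = z(hit_rate), z(false_alarm_rate)
    return zh - zfa, -(zh + zfa) / 2.0
```

An increase in d′ during rotation with an unchanged c would correspond to the pattern of results described: better detection of faint stimuli without a shift in the tendency to respond "present".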

  4. Learning multimodal dictionaries.

    PubMed

    Monaci, Gianluca; Jost, Philippe; Vandergheynst, Pierre; Mailhé, Boris; Lesage, Sylvain; Gribonval, Rémi

    2007-09-01

    Real-world phenomena involve complex interactions between multiple signal modalities. As a consequence, humans routinely integrate, at each instant, perceptions from all their senses in order to enrich their understanding of the surrounding world. This paradigm can also be extremely useful in many signal processing and computer vision problems involving mutually related signals. The simultaneous processing of multimodal data can, in fact, reveal information that is otherwise hidden when considering the signals independently. However, in natural multimodal signals, the statistical dependencies between modalities are in general not obvious. Learning fundamental multimodal patterns could offer deep insight into the structure of such signals. In this paper, we present a novel model of multimodal signals based on their sparse decomposition over a dictionary of multimodal structures. An algorithm for iteratively learning multimodal generating functions that can be shifted at all positions in the signal is proposed, as well. The learning is defined in such a way that it can be accomplished by iteratively solving a generalized eigenvector problem, which makes the algorithm fast, flexible, and free of user-defined parameters. The proposed algorithm is applied to audiovisual sequences and it is able to discover underlying structures in the data. The detection of such audio-video patterns in audiovisual clips makes it possible to effectively localize the sound source on the video in the presence of substantial acoustic and visual distractors, outperforming state-of-the-art audiovisual localization algorithms.
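The core numerical step named in the abstract, solving a generalized eigenvector problem, can be sketched as follows. The sketch assumes a symmetric matrix A and a symmetric positive-definite matrix B built from the multimodal data, and uses the standard whitening transform; it is illustrative, not the authors' implementation.

```python
import numpy as np

def leading_generalized_eigvec(A, B):
    """Leading eigenpair of the generalized problem A v = lambda B v.

    A must be symmetric and B symmetric positive definite. Reduces to a
    standard symmetric problem via M = B^{-1/2} A B^{-1/2}, then maps the
    leading eigenvector back to the original coordinates.
    """
    bvals, bvecs = np.linalg.eigh(B)
    B_inv_half = bvecs @ np.diag(1.0 / np.sqrt(bvals)) @ bvecs.T
    M = B_inv_half @ A @ B_inv_half          # symmetric standard problem
    vals, vecs = np.linalg.eigh(M)           # eigenvalues in ascending order
    v = B_inv_half @ vecs[:, -1]             # back-transform leading eigenvector
    return vals[-1], v
```

In a dictionary-learning loop of the kind described, each update would solve such a problem to extract the current multimodal generating function before moving to the next.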

  5. Visually induced plasticity of auditory spatial perception in macaques.

    PubMed

    Woods, Timothy M; Recanzone, Gregg H

    2004-09-07

    When experiencing spatially disparate visual and auditory stimuli, a common percept is that the sound originates from the location of the visual stimulus, an illusion known as the ventriloquism effect. This illusion can persist for tens of minutes, a phenomenon termed the ventriloquism aftereffect. The underlying neuronal mechanisms of this rapidly induced plasticity remain unclear; indeed, it remains untested whether similar multimodal interactions occur in other species. We therefore tested whether macaque monkeys experience the ventriloquism aftereffect similar to the way humans do. The ability of two monkeys to determine which side of the midline a sound was presented from was tested before and after a period of 20-60 min in which the monkeys experienced either spatially identical or spatially disparate auditory and visual stimuli. In agreement with human studies, the monkeys did experience a shift in their auditory spatial perception in the direction of the spatially disparate visual stimulus, and the aftereffect did not transfer across sounds that differed in frequency by two octaves. These results show that macaque monkeys experience the ventriloquism aftereffect similar to the way humans do in all tested respects, indicating that these multimodal interactions are a basic phenomenon of the central nervous system.

  6. Evaluation of protein adsorption and preferred binding regions in multimodal chromatography using NMR

    PubMed Central

    Chung, Wai Keen; Freed, Alexander S.; Holstein, Melissa A.; McCallum, Scott A.; Cramer, Steven M.

    2010-01-01

    NMR titration experiments with labeled human ubiquitin were employed in concert with chromatographic data obtained with a library of ubiquitin mutants to study the nature of protein adsorption in multimodal (MM) chromatography. The elution order of the mutants on the MM resin was significantly different from that obtained by ion-exchange chromatography. Further, the chromatographic results with the protein library indicated that mutations in a defined region induced greater changes in protein affinity to the solid support. Chemical shift mapping and determination of dissociation constants from NMR titration experiments with the MM ligand and isotopically enriched ubiquitin were used to determine and rank the relative binding affinities of interaction sites on the protein surface. The results with NMR confirmed that the protein possessed a distinct preferred binding region for the MM ligand in agreement with the chromatographic results. Finally, coarse-grained ligand docking simulations were employed to study the modes of interaction between the MM ligand and ubiquitin. The use of NMR titration experiments in concert with chromatographic data obtained with protein libraries represents a previously undescribed approach for elucidating the structural basis of protein binding affinity in MM chromatographic systems. PMID:20837551
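Dissociation constants from chemical-shift titrations of this kind are typically obtained by fitting the 1:1 binding isotherm to the observed shift changes. The sketch below uses a simple grid search over Kd with a closed-form amplitude fit; the concentrations, units, and grid are illustrative assumptions, not values from the study.

```python
import numpy as np

def fraction_bound(L, P, Kd):
    """Fraction of protein bound under a 1:1 binding model.

    L: total ligand concentration(s), P: total protein concentration,
    Kd: dissociation constant (all in the same concentration units).
    """
    s = P + L + Kd
    return (s - np.sqrt(s**2 - 4.0 * P * L)) / (2.0 * P)

def fit_kd(L, dobs, P, kd_grid):
    """Grid-search fit of Kd and the maximal shift change ddmax.

    dobs: observed chemical-shift changes at each ligand concentration.
    For each candidate Kd, the best ddmax has a closed least-squares form.
    Returns (Kd, ddmax) minimizing the residual sum of squares.
    """
    best = None
    for kd in kd_grid:
        fb = fraction_bound(L, P, kd)
        ddmax = np.dot(fb, dobs) / np.dot(fb, fb)   # least-squares amplitude
        resid = np.sum((dobs - ddmax * fb) ** 2)
        if best is None or resid < best[0]:
            best = (resid, kd, ddmax)
    return best[1], best[2]
```

Repeating such a fit per residue gives the per-site affinities used to rank binding regions on the protein surface.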

  7. Gelatin-based Hydrogel Degradation and Tissue Interaction in vivo: Insights from Multimodal Preclinical Imaging in Immunocompetent Nude Mice.

    PubMed

    Tondera, Christoph; Hauser, Sandra; Krüger-Genge, Anne; Jung, Friedrich; Neffe, Axel T; Lendlein, Andreas; Klopfleisch, Robert; Steinbach, Jörg; Neuber, Christin; Pietzsch, Jens

    2016-01-01

    Hydrogels based on gelatin have evolved as promising multifunctional biomaterials. Gelatin is crosslinked with lysine diisocyanate ethyl ester (LDI) and the molar ratio of gelatin and LDI in the starting material mixture determines elastic properties of the resulting hydrogel. In order to investigate the clinical potential of these biopolymers, hydrogels with different ratios of gelatin and diisocyanate (3-fold (G10_LNCO3) and 8-fold (G10_LNCO8) molar excess of isocyanate groups) were subcutaneously implanted in mice (uni- or bilateral implantation). Degradation and biomaterial-tissue-interaction were investigated in vivo (MRI, optical imaging, PET) and ex vivo (autoradiography, histology, serum analysis). Multimodal imaging revealed that the number of covalent net points correlates well with degradation time, which allows for targeted modification of hydrogels based on properties of the tissue to be replaced. Importantly, the degradation time was also dependent on the number of implants per animal. Despite local mechanisms of tissue remodeling, no adverse tissue responses were observed, either locally or systemically. Finally, this preclinical investigation in immunocompetent mice clearly demonstrated a complete restoration of the original healthy tissue.

  8. A Multimodal Discourse Analysis of Tmall's Double Eleven Advertisement

    ERIC Educational Resources Information Center

    Hu, Chunyu; Luo, Mengxi

    2016-01-01

    Since the 1990s, the multimodal turn in discourse studies has made multimodal discourse analysis a popular topic in linguistics and communication studies. An important approach to applying Systemic Functional Linguistics to non-verbal modes is Visual Grammar, initially proposed by Kress and van Leeuwen (1996). Considering that commercial advertisement…

  9. Teaching Visual Texts with the Multimodal Analysis Software

    ERIC Educational Resources Information Center

    Lim Fei, Victor; O'Halloran, Kay L.; Tan, Sabine; E., Marissa K. L.

    2015-01-01

    This exploratory study introduces the systemic approach and the explicit teaching of a meta-language to provide conceptual tools for students for the analysis and interpretation of multimodal texts. Equipping students with a set of specialised vocabulary with conventionalised meanings associated with specific choices in multimodal texts empowers…

  10. Students' Multimodal Construction of the Work-Energy Concept

    NASA Astrophysics Data System (ADS)

    Tang, Kok-Sing; Chee Tan, Seng; Yeo, Jennifer

    2011-09-01

    This article examines the role of multimodalities in representing the concept of work-energy by studying the collaborative discourse of a group of ninth-grade physics students engaged in inquiry-based instruction. Theorising a scientific concept as a network of meaning relationships across semiotic modalities situated in human activity, this article analyses the students' interactions through their use of natural language, mathematical symbolism, depiction, and gestures, and examines the intertextual meanings made through the integration of these modalities. Results indicate that the thematic integration of multimodalities is both difficult and necessary for students in order to construct a scientific understanding that is congruent with the physics curriculum. More significantly, the difficulties in multimodal integration stem from the subtle differences in the categorical, quantitative, and spatial meanings of the work-energy concept, whose contrasts are often not made explicit to the students. The implications of these analyses and findings for science teaching and educational research are discussed.

  11. The GuideView System for Interactive, Structured, Multi-modal Delivery of Clinical Guidelines

    NASA Technical Reports Server (NTRS)

    Iyengar, Sriram; Florez-Arango, Jose; Garcia, Carlos Andres

    2009-01-01

    GuideView is a computerized clinical guideline system which delivers clinical guidelines in an easy-to-understand and easy-to-use package. It may potentially enhance the quality of medical care or allow non-medical personnel to provide acceptable levels of care in situations where physicians or nurses may not be available. Such a system can be very valuable during space flight missions when a physician is not readily available, or when the designated medical personnel are unable to provide care. Complex clinical guidelines are broken into simple steps. At each step clinical information is presented in multiple modes, including voice, audio, text, pictures, and video. Users can respond via mouse clicks or via voice navigation. GuideView can also interact with medical sensors using wireless or wired connections. The system's interface is illustrated and the results of a usability study are presented.

  12. 76 FR 32953 - Transportation Infrastructure/Multimodal Products and Services Trade Mission to Doha, Qatar, and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-07

    ... new systems, particularly those related to multimodal freight and intelligent supply chain management... technologies, supply chain systems and strategies; mass transportation systems; advanced vehicle technologies... country. There are excellent opportunities for U.S. engineers, program management firms, and manufacturers...

  13. Phonetic Variation and Interactional Contingencies in Simultaneous Responses

    ERIC Educational Resources Information Center

    Walker, Gareth

    2016-01-01

    An auspicious but unexplored environment for studying phonetic variation in naturalistic interaction is where two or more participants say the same thing at the same time. Working with a core dataset built from the multimodal Augmented Multi-party Interaction corpus, the principles of conversation analysis were followed to analyze the sequential…

  14. The Semiotic Work of the Hands in Scientific Enquiry

    ERIC Educational Resources Information Center

    Sakr, Mona; Jewitt, Carey; Price, Sara

    2014-01-01

    This paper takes a multimodal approach to analysing embodied interaction and discourses of scientific investigation using an interactive tangible tabletop. It argues that embodied forms of interaction are central to science inquiry. More specifically, the paper examines the role of hand actions in the development of descriptions and explanations…

  15. The Bursts and Lulls of Multimodal Interaction: Temporal Distributions of Behavior Reveal Differences Between Verbal and Non-Verbal Communication.

    PubMed

    Abney, Drew H; Dale, Rick; Louwerse, Max M; Kello, Christopher T

    2018-04-06

    Recent studies of naturalistic face-to-face communication have demonstrated coordination patterns such as the temporal matching of verbal and non-verbal behavior, which provides evidence for the proposal that verbal and non-verbal communicative control derives from one system. In this study, we argue that the observed relationship between verbal and non-verbal behaviors depends on the level of analysis. In a reanalysis of a corpus of naturalistic multimodal communication (Louwerse, Dale, Bard, & Jeuniaux, ), we focus on measuring the temporal patterns of specific communicative behaviors in terms of their burstiness. We examined burstiness estimates across different roles of the speaker and different communicative modalities. We observed more burstiness for verbal versus non-verbal channels, and for more versus less informative language subchannels. Using this new method for analyzing temporal patterns in communicative behaviors, we show that there is a complex relationship between verbal and non-verbal channels. We propose a "temporal heterogeneity" hypothesis to explain how the language system adapts to the demands of dialog. Copyright © 2018 Cognitive Science Society, Inc.
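
    The burstiness of a behavior stream is commonly quantified as B = (sigma - mu) / (sigma + mu) over the inter-event intervals, ranging from -1 for perfectly periodic trains to +1 for maximally bursty ones. A minimal sketch under that standard coefficient (the paper's exact estimator may differ, e.g. with finite-sample corrections):

```python
import statistics

def burstiness(event_times):
    """Burstiness coefficient of a sorted sequence of event times:
    B = (sigma - mu) / (sigma + mu) of the inter-event intervals.
    B = -1 for periodic, B ~ 0 for Poisson-like, B -> 1 for bursty."""
    intervals = [t2 - t1 for t1, t2 in zip(event_times, event_times[1:])]
    mu = statistics.mean(intervals)
    sigma = statistics.pstdev(intervals)
    return (sigma - mu) / (sigma + mu)
```

    Computing this per channel (verbal, gestural, etc.) and per speaker role is one way to compare temporal heterogeneity across communicative modalities.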

  16. Concept Representation Reflects Multimodal Abstraction: A Framework for Embodied Semantics

    PubMed Central

    Fernandino, Leonardo; Binder, Jeffrey R.; Desai, Rutvik H.; Pendl, Suzanne L.; Humphries, Colin J.; Gross, William L.; Conant, Lisa L.; Seidenberg, Mark S.

    2016-01-01

    Recent research indicates that sensory and motor cortical areas play a significant role in the neural representation of concepts. However, little is known about the overall architecture of this representational system, including the role played by higher level areas that integrate different types of sensory and motor information. The present study addressed this issue by investigating the simultaneous contributions of multiple sensory-motor modalities to semantic word processing. With a multivariate fMRI design, we examined activation associated with 5 sensory-motor attributes—color, shape, visual motion, sound, and manipulation—for 900 words. Regions responsive to each attribute were identified using independent ratings of the attributes' relevance to the meaning of each word. The results indicate that these aspects of conceptual knowledge are encoded in multimodal and higher level unimodal areas involved in processing the corresponding types of information during perception and action, in agreement with embodied theories of semantics. They also reveal a hierarchical system of abstracted sensory-motor representations incorporating a major division between object interaction and object perception processes. PMID:25750259

  17. Multi-Modal Intelligent Traffic Signal Systems (MMITSS) impacts assessment.

    DOT National Transportation Integrated Search

    2015-08-01

    The study evaluates the potential network-wide impacts of the Multi-Modal Intelligent Transportation Signal System (MMITSS) based on a field data analysis utilizing data collected from a MMITSS prototype and a simulation analysis. The Intelligent Tra...

  18. Multimodal sensing strategies for detecting transparent barriers indoors from a mobile platform

    NASA Astrophysics Data System (ADS)

    Acevedo, Isaiah; Kleine, R. Kaleb; Kraus, Dustan; Mascareñas, David

    2015-04-01

    There is currently an interest in developing mobile sensing platforms that fly indoors. The primary goal for these platforms is to be able to successfully navigate a building under various lighting and environmental conditions. There are numerous research challenges associated with this goal, one of which is the platform's ability to detect and identify the presence of transparent barriers. Transparent barriers could include windows, glass partitions, or skylights. For example, in order to successfully navigate inside of a structure, these platforms will need to sense whether a space contains a transparent barrier and whether or not this space can be traversed. This project's focus has been on developing a multimodal sensing system that can successfully identify such transparent barriers under various lighting conditions while aboard a mobile platform. Along with detecting transparent barriers, this sensing platform is capable of distinguishing between reflective, opaque, and transparent barriers. It will be critical for this system to identify transparent barriers in real time so that the navigation system can maneuver accordingly. The interaction between various frequencies of light and transparent materials was one of the properties leveraged to solve this problem.

  19. When a robot is social: spatial arrangements and multimodal semiotic engagement in the practice of social robotics.

    PubMed

    Alac, Morana; Movellan, Javier; Tanaka, Fumihide

    2011-12-01

    Social roboticists design their robots to function as social agents in interaction with humans and other robots. Although we do not deny that the robot's design features are crucial for attaining this aim, we point to the relevance of spatial organization and coordination between the robot and the humans who interact with it. We recover these interactions through an observational study of a social robotics laboratory and examine them by applying a multimodal interactional analysis to two moments of robotics practice. We describe the vital role of roboticists and of the group of preverbal infants, who are involved in a robot's design activity, and we argue that the robot's social character is intrinsically related to the subtleties of human interactional moves in laboratories of social robotics. This human involvement in the robot's social agency is not simply controlled by individual will. Instead, the human-machine couplings are demanded by the situational dynamics in which the robot is lodged.

  20. A Framework and Toolkit for the Construction of Multimodal Learning Interfaces

    DTIC Science & Technology

    1998-04-29

    human communication modalities in the context of a broad class of applications, specifically those that support state manipulation via parameterized actions. The multimodal semantic model is also the basis for a flexible, domain independent, incrementally trainable multimodal interpretation algorithm based on a connectionist network. The second major contribution is an application framework consisting of reusable components and a modular, distributed system architecture. Multimodal application developers can assemble the components in the framework into a new application,

  1. A simultaneous multimodal imaging system for tissue functional parameters

    NASA Astrophysics Data System (ADS)

    Ren, Wenqi; Zhang, Zhiwu; Wu, Qiang; Zhang, Shiwu; Xu, Ronald

    2014-02-01

    Simultaneous and quantitative assessment of skin functional characteristics in different modalities will facilitate diagnosis and therapy in many clinical applications such as wound healing. However, many existing clinical practices and multimodal imaging systems are subjective, qualitative, sequential in multimodal data collection, and require co-registration between different modalities. To overcome these limitations, we developed a multimodal imaging system for quantitative, non-invasive, and simultaneous imaging of cutaneous tissue oxygenation and blood perfusion parameters. The imaging system integrated multispectral and laser speckle imaging technologies into one experimental setup. A LabVIEW interface was developed for equipment control, synchronization, and image acquisition. Advanced algorithms based on wide-gap second derivative reflectometry and laser speckle contrast analysis (LASCA) were developed for accurate reconstruction of tissue oxygenation and blood perfusion, respectively. Quantitative calibration experiments and a new style of skin-simulating phantom were designed to verify the accuracy and reliability of the imaging system. The experimental results were compared with a Moor tissue oxygenation and perfusion monitor. For in vivo testing, a post-occlusion reactive hyperemia (PORH) procedure in a human subject and an ongoing wound-healing monitoring experiment using dorsal skinfold chamber models were conducted to validate the usability of our system for dynamic detection of oxygenation and perfusion parameters. In this study, we have not only set up an advanced multimodal imaging system for cutaneous tissue oxygenation and perfusion parameters but also elucidated its potential for wound healing assessment in clinical practice.
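
    The LASCA step mentioned above computes, for each pixel, the speckle contrast K = sigma / mean over a small sliding window; faster blood flow blurs the speckle pattern and lowers K. A minimal sketch of that windowed statistic (the window size and list-of-lists image layout are assumptions, not the authors' implementation):

```python
import statistics

def speckle_contrast(image, win=7):
    """LASCA contrast map: K = (local std dev) / (local mean intensity)
    over a win x win sliding window. Border pixels are left at 0.0."""
    h, w = len(image), len(image[0])
    half = win // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(half, h - half):
        for x in range(half, w - half):
            block = [image[j][i]
                     for j in range(y - half, y + half + 1)
                     for i in range(x - half, x + half + 1)]
            mu = statistics.mean(block)
            out[y][x] = statistics.pstdev(block) / mu if mu else 0.0
    return out
```

    A uniform (fully blurred) region yields K = 0, while a static, fully developed speckle pattern yields K near 1; perfusion maps are then derived from K.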

  2. A novel multimode hybrid energy storage system and its energy management strategy for electric vehicles

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Xu, Jun; Cao, Binggang; Zhou, Xuan

    2015-05-01

    This paper proposes a novel topology of multimode hybrid energy storage system (HESS) and its energy management strategy for electric vehicles (EVs). Compared to the conventional HESS, the proposed multimode HESS has more operating modes and thus can further enhance the efficiency of the system. A rule-based control strategy and a power-balancing strategy are developed for the energy management strategy to realize mode selection and power distribution. Generally, the DC-DC converter operates at peak efficiency to convey energy from the batteries to the ultracapacitors (UCs). Otherwise, the pure battery mode or the pure UC mode is used without the DC-DC converter. To extend battery life, the UCs have the highest priority to recycle energy, and the batteries are isolated from being recharged directly during regenerative braking. Simulations and experiments were conducted to validate the proposed multimode HESS and its energy management strategy. The results reveal that the energy losses in the DC-DC converter, the total energy consumption, and the overall system efficiency of the proposed multimode HESS are improved compared to the conventional HESS.
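
    Rule-based mode selection of this kind can be pictured as a small decision function over the demanded power and the ultracapacitor state of charge. The thresholds and mode names below are purely hypothetical, chosen only to illustrate the stated priorities (UCs recapture braking energy first; batteries are never recharged directly during regeneration):

```python
def select_mode(power_demand_kw, uc_soc):
    """Illustrative rule-based mode selection for a battery/ultracapacitor
    HESS. Thresholds and mode names are hypothetical, not from the paper.
    power_demand_kw < 0 denotes regenerative braking."""
    if power_demand_kw < 0:
        # UCs get first priority for braking energy; the batteries are
        # isolated from direct recharging during regeneration.
        return "uc_regen" if uc_soc < 0.95 else "idle"
    if power_demand_kw > 30.0 and uc_soc > 0.2:
        # Peak demand: draw from the UCs (the DC-DC converter would run
        # near its peak-efficiency operating point in this mode).
        return "uc_assist"
    # Ordinary cruising load: pure battery mode, DC-DC converter bypassed.
    return "battery_only"
```

    A real controller would add hysteresis and power-balancing between the two sources, but the branching structure is the essence of a rule-based strategy.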

  3. Advanced driver assistance systems: Using multimodal redundant warnings to enhance road safety.

    PubMed

    Biondi, Francesco; Strayer, David L; Rossi, Riccardo; Gastaldi, Massimiliano; Mulatti, Claudio

    2017-01-01

    This study investigated whether multimodal redundant warnings presented by advanced assistance systems reduce brake response times. Warnings presented by assistance systems are designed to assist drivers by informing them that evasive driving maneuvers are needed in order to avoid a potential accident. If these warnings are poorly designed, they may distract drivers, slow their responses, and reduce road safety. In two experiments, participants drove a simulated vehicle equipped with a forward collision avoidance system. Auditory, vibrotactile, and multimodal warnings were presented when the time to collision was shorter than five seconds. The effects of these warnings were investigated with participants performing a concurrent cell phone conversation (Exp. 1) or driving in high-density traffic (Exp. 2). Braking times and subjective workload were measured. Multimodal redundant warnings elicited faster braking reaction times. These warnings were found to be effective even when talking on a cell phone (Exp. 1) or driving in dense traffic (Exp. 2). Multimodal warnings produced higher ratings of urgency, but ratings of frustration did not increase compared to other warnings. Findings obtained in these two experiments are important given that faster braking responses may reduce the potential for a collision. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Score level fusion scheme based on adaptive local Gabor features for face-iris-fingerprint multimodal biometric

    NASA Astrophysics Data System (ADS)

    He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Chen, Ying

    2014-05-01

    A multimodal biometric system has been considered a promising technique to overcome the defects of unimodal biometric systems. We introduce a fusion scheme, and seek a better understanding of fusion methods, for a face-iris-fingerprint multimodal biometric system. In our case, we use particle swarm optimization to train a set of adaptive Gabor filters in order to achieve the proper Gabor basis functions for each modality. For a closer analysis of texture information, two different local Gabor features for each modality are produced from the corresponding Gabor coefficients. Next, all matching scores of the two Gabor features for each modality are projected to a single scalar score via a trained support vector regression model for a final decision. A large-scale dataset is formed to validate the proposed scheme using the Facial Recognition Technology database-fafb and CASIA-V3-Interval together with FVC2004-DB2a datasets. The experimental results demonstrate that, in addition to achieving more powerful local Gabor features for the individual modalities and better recognition performance through their fusion strategy, our architecture outperforms some state-of-the-art individual methods and other fusion approaches for face-iris-fingerprint multimodal biometric systems.
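
    Score-level fusion generally normalizes each modality's matcher scores to a common range and then combines them into one scalar. The sketch below substitutes a plain weighted sum for the paper's trained support vector regression projection, so it illustrates only the normalize-and-combine pattern; the weights are hypothetical:

```python
def min_max_normalize(scores):
    """Map one modality's raw matcher scores onto [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(modal_scores, weights):
    """Weighted-sum score-level fusion across modalities.
    NOTE: a simple linear stand-in for a trained regression model;
    the per-modality weights here are illustrative assumptions."""
    normalized = [min_max_normalize(s) for s in modal_scores]
    return [sum(w * m[i] for w, m in zip(weights, normalized))
            for i in range(len(normalized[0]))]
```

    In a deployed system the fused scalar would then be thresholded (or fed to the trained regressor) to accept or reject an identity claim.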

  5. Analysis of multimode fiber bundles for endoscopic spectral-domain optical coherence tomography

    PubMed Central

    Risi, Matthew D.; Makhlouf, Houssine; Rouse, Andrew R.; Gmitro, Arthur F.

    2016-01-01

    A theoretical analysis of the use of a fiber bundle in spectral-domain optical coherence tomography (OCT) systems is presented. The fiber bundle enables a flexible endoscopic design and provides fast, parallelized acquisition of the OCT data. However, the multimode characteristic of the fibers in the fiber bundle affects the depth sensitivity of the imaging system. A description of light interference in a multimode fiber is presented along with numerical simulations and experimental studies to illustrate the theoretical analysis. PMID:25967012

  6. A virtual surgical environment for rehearsal of tympanomastoidectomy.

    PubMed

    Chan, Sonny; Li, Peter; Lee, Dong Hoon; Salisbury, J Kenneth; Blevins, Nikolas H

    2011-01-01

    This article presents a virtual surgical environment whose purpose is to assist the surgeon in preparation for individual cases. The system constructs interactive anatomical models from patient-specific, multi-modal preoperative image data, and incorporates new methods for visually and haptically rendering the volumetric data. Evaluation of the system's ability to replicate temporal bone dissections for tympanomastoidectomy, using intraoperative video of the same patients as guides, showed strong correlations between virtual and intraoperative anatomy. The result is a portable and cost-effective tool that may prove highly beneficial for the purposes of surgical planning and rehearsal.

  7. Shape-Controlled Synthesis of Isotopic Yttrium-90-Labeled Rare Earth Fluoride Nanocrystals for Multimodal Imaging.

    PubMed

    Paik, Taejong; Chacko, Ann-Marie; Mikitsh, John L; Friedberg, Joseph S; Pryma, Daniel A; Murray, Christopher B

    2015-09-22

    Isotopically labeled nanomaterials have recently attracted much attention in biomedical research, environmental health studies, and clinical medicine because radioactive probes allow the elucidation of in vitro and in vivo cellular transport mechanisms, as well as the unambiguous distribution and localization of nanomaterials in vivo. In addition, nanocrystal-based inorganic materials have a unique capability of customizing size, shape, and composition; with the potential to be designed as multimodal imaging probes. Size and shape of nanocrystals can directly influence interactions with biological systems, hence it is important to develop synthetic methods to design radiolabeled nanocrystals with precise control of size and shape. Here, we report size- and shape-controlled synthesis of rare earth fluoride nanocrystals doped with the β-emitting radioisotope yttrium-90 ((90)Y). Size and shape of nanocrystals are tailored via tight control of reaction parameters and the type of rare earth hosts (e.g., Gd or Y) employed. Radiolabeled nanocrystals are synthesized in high radiochemical yield and purity as well as excellent radiolabel stability in the face of surface modification with different polymeric ligands. We demonstrate the Cerenkov radioluminescence imaging and magnetic resonance imaging capabilities of (90)Y-doped GdF3 nanoplates, which offer unique opportunities as a promising platform for multimodal imaging and targeted therapy.

  8. Multi-Modal Traveler Information System - Performance Criteria for Evaluating GCM Corridor Strategies & Technologies

    DOT National Transportation Integrated Search

    1997-07-16

    The Gary-Chicago-Milwaukee (GCM) Multi-Modal Traveler Information System (MMTIS) is a complex project involving a wide spectrum of participants. In order to facilitate its implementation it is important to understand the direction of the MMTIS. This ...

  9. Dynamic mobility applications policy analysis : policy and institutional issues for multi-modal intelligent traffic signal system (MMITSS).

    DOT National Transportation Integrated Search

    2015-03-01

    The Connected Vehicle Mobility Policy team (herein, policy team) developed this report to document policy considerations for the Multi-Modal Intelligent Traffic Signal System, or MMITSS. MMITSS comprises a bundle of dynamic mobility application...

  10. Optimality in mono- and multisensory map formation.

    PubMed

    Bürck, Moritz; Friedel, Paul; Sichert, Andreas B; Vossen, Christine; van Hemmen, J Leo

    2010-07-01

    In the struggle for survival in a complex and dynamic environment, nature has developed a multitude of sophisticated sensory systems. In order to exploit the information provided by these sensory systems, higher vertebrates reconstruct the spatio-temporal environment from each of the sensory systems they have at their disposal. That is, for each modality the animal computes a neuronal representation of the outside world, a monosensory neuronal map. Here we present a universal framework that allows one to calculate the specific layout of the involved neuronal network by means of a general mathematical principle, viz., stochastic optimality. In order to illustrate the use of this theoretical framework, we provide a step-by-step tutorial on how to apply our model. In so doing, we present a spatial and a temporal example of optimal stimulus reconstruction which underline the advantages of our approach. That is, given a known physical signal transmission and rudimentary knowledge of the detection process, our approach allows us to estimate the possible performance and to predict neuronal properties of biological sensory systems. Finally, information from different sensory modalities has to be integrated so as to gain a unified perception of reality for further processing, e.g., for distinct motor commands. We briefly discuss concepts of multimodal interaction and how a multimodal space can evolve through alignment of monosensory maps.

  11. Pollution going multimodal: the complex impact of the human-altered sensory environment on animal perception and performance

    PubMed Central

    Halfwerk, Wouter; Slabbekoorn, Hans

    2015-01-01

    Anthropogenic sensory pollution is affecting ecosystems worldwide. Human actions generate acoustic noise, emanate artificial light and emit chemical substances. All of these pollutants are known to affect animals. Most studies on anthropogenic pollution address the impact of pollutants in unimodal sensory domains. High levels of anthropogenic noise, for example, have been shown to interfere with acoustic signals and cues. However, animals rely on multiple senses, and pollutants often co-occur. Thus, a full ecological assessment of the impact of anthropogenic activities requires a multimodal approach. We describe how sensory pollutants can co-occur and how covariance among pollutants may differ from natural situations. We review how animals combine information that arrives at their sensory systems through different modalities and outline how sensory conditions can interfere with multimodal perception. Finally, we describe how sensory pollutants can affect the perception, behaviour and endocrinology of animals within and across sensory modalities. We conclude that sensory pollution can affect animals in complex ways due to interactions among sensory stimuli, neural processing and behavioural and endocrinal feedback. We call for more empirical data on covariance among sensory conditions, for instance, data on correlated levels of noise and light pollution. Furthermore, we encourage researchers to test animal responses to a full-factorial set of sensory pollutants in the presence or the absence of ecologically important signals and cues. We realize that such an approach is often time- and energy-consuming, but we think this is the only way to fully understand the multimodal impact of sensory pollution on animal performance and perception. PMID:25904319

  12. Multimodal system for the planning and guidance of bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Cheirsilp, Ronnarit; Zang, Xiaonan; Byrnes, Patrick

    2015-03-01

    Many technical innovations in multimodal radiologic imaging and bronchoscopy have emerged recently in the effort against lung cancer. Modern X-ray computed-tomography (CT) scanners provide three-dimensional (3D) high-resolution chest images, positron emission tomography (PET) scanners give complementary molecular imaging data, and new integrated PET/CT scanners combine the strengths of both modalities. State-of-the-art bronchoscopes permit minimally invasive tissue sampling, with vivid endobronchial video enabling navigation deep into the airway-tree periphery, while complementary endobronchial ultrasound (EBUS) reveals local views of anatomical structures outside the airways. In addition, image-guided intervention (IGI) systems have proven their utility for CT-based planning and guidance of bronchoscopy. Unfortunately, no IGI system exists that integrates all sources effectively through the complete lung-cancer staging work flow. This paper presents a prototype of a computer-based multimodal IGI system that strives to fill this need. The system combines a wide range of automatic and semi-automatic image-processing tools for multimodal data fusion and procedure planning. It also provides a flexible graphical user interface for follow-on guidance of bronchoscopy/EBUS. Human-study results demonstrate the system's potential.

  13. IMMERSE: Interactive Mentoring for Multimodal Experiences in Realistic Social Encounters

    DTIC Science & Technology

    2015-08-28

    [Only table-of-contents fragments survive in this record excerpt: 9. Interaction with Virtual Characters; 9.1 Player Locomotion; 9.2 Interacting with Real and Virtual Objects; 9.3 Animation Combinations and Stage Management; 10. Recommendations on the Way Ahead.]

  14. MULTIMODAL IMAGING OF SYPHILITIC MULTIFOCAL RETINITIS.

    PubMed

    Curi, Andre L; Sarraf, David; Cunningham, Emmett T

    2015-01-01

    To describe multimodal imaging of syphilitic multifocal retinitis. Observational case series. Two patients developed multifocal retinitis after treatment of unrecognized syphilitic uveitis with systemic corticosteroids in the absence of appropriate antibiotic therapy. Multimodal imaging localized the foci of retinitis within the retina in contrast to superficial retinal precipitates that accumulate on the surface of the retina in eyes with untreated syphilitic uveitis. Although the retinitis resolved after treatment with systemic penicillin in both cases, vision remained poor in the patient with multifocal retinitis involving the macula. Treatment of unrecognized syphilitic uveitis with corticosteroids in the absence of antitreponemal treatment can lead to the development of multifocal retinitis. Multimodal imaging, and optical coherence tomography in particular, can be used to distinguish multifocal retinitis from superficial retinal precipitates or accumulations.

  15. Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.

    PubMed

    Vicente, Natalin S; Halloy, Monique

    2017-12-01

    Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers a great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays from exposure to signals in each modality. Number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly higher in number when combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality, because when presented with visual stimuli, lizards also responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that of the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues. Copyright © 2017 Elsevier GmbH. All rights reserved.

  16. On-the-fly augmented reality for orthopedic surgery using a multimodal fiducial.

    PubMed

    Andress, Sebastian; Johnson, Alex; Unberath, Mathias; Winkler, Alexander Felix; Yu, Kevin; Fotouhi, Javad; Weidert, Simon; Osgood, Greg; Navab, Nassir

    2018-04-01

    Fluoroscopic x-ray guidance is a cornerstone for percutaneous orthopedic surgical procedures. However, two-dimensional (2-D) observations of the three-dimensional (3-D) anatomy suffer from the effects of projective simplification. Consequently, many x-ray images from various orientations need to be acquired for the surgeon to accurately assess the spatial relations between the patient's anatomy and the surgical tools. We present an on-the-fly surgical support system that provides guidance using augmented reality and can be used in quasi-unprepared operating rooms. The proposed system builds upon a multimodality marker and simultaneous localization and mapping technique to co-calibrate an optical see-through head mounted display to a C-arm fluoroscopy system. Then, annotations on the 2-D x-ray images can be rendered as virtual objects in 3-D providing surgical guidance. We quantitatively evaluate the components of the proposed system and, finally, design a feasibility study on a semi-anthropomorphic phantom. The accuracy of our system was comparable to the traditional image-guided technique while substantially reducing the number of acquired x-ray images as well as procedure time. Our promising results encourage further research on the interaction between virtual and real objects that we believe will directly benefit the proposed method. Further, we would like to explore the capabilities of our on-the-fly augmented reality support system in a larger study directed toward common orthopedic interventions.

  17. Application of Multimodality Imaging Fusion Technology in Diagnosis and Treatment of Malignant Tumors under the Precision Medicine Plan.

    PubMed

    Wang, Shun-Yi; Chen, Xian-Xia; Li, Yi; Zhang, Yu-Ying

    2016-12-20

The arrival of the precision medicine plan brings new opportunities and challenges for patients undergoing precision diagnosis and treatment of malignant tumors. With the development of medical imaging, information from different modality imaging can be integrated and comprehensively analyzed by imaging fusion systems. This review aimed to update the application of multimodality imaging fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduced several multimodality imaging fusion technologies and their application to the diagnosis and treatment of malignant tumors in clinical practice. The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Original articles, clinical practice reports, reviews, and other relevant literature published in English were reviewed. Papers focusing on precision medicine, fusion imaging, multimodality, and tumor diagnosis and treatment were selected. Duplicated papers were excluded. Multimodality imaging fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, such as accurate location, qualitative diagnosis, tumor staging, treatment plan design, and real-time intraoperative monitoring. Multimodality imaging fusion systems could provide more imaging information on tumors from different dimensions and angles, thereby offering strong technical support for the implementation of precision oncology. Under the precision medicine plan, personalized treatment of tumors is a distinct possibility. We believe that multimodality imaging fusion technology will find an increasingly wide application in clinical practice.

  18. A feasibility study of evaluating transportation security systems and associated multi-modal efficiency impacts

    DOT National Transportation Integrated Search

    2006-08-01

    The overall purpose of this research project is to conduct a feasibility study and development of a general methodology to determine the impacts on multi-modal and system efficiency of alternative freight security measures. The methodology to be exam...

  19. A Multimodal Search Engine for Medical Imaging Studies.

    PubMed

    Pinho, Eduardo; Godinho, Tiago; Valente, Frederico; Costa, Carlos

    2017-02-01

The use of digital medical imaging systems in healthcare institutions has increased significantly, and the large amounts of data in these systems have led to the conception of powerful support tools: recent studies on content-based image retrieval (CBIR) and multimodal information retrieval in the field hold great potential in decision support, as well as for addressing multiple challenges in healthcare systems, such as computer-aided diagnosis (CAD). However, the subject is still the focus of intensive research, and very few solutions have become part of Picture Archiving and Communication Systems (PACS) in hospitals and clinics. This paper proposes an extensible platform for multimodal medical image retrieval, integrated in an open-source PACS software with profile-based CBIR capabilities. In this article, we detail a technical approach to the problem by describing its main architecture and each sub-component, as well as the available web interfaces and the multimodal query techniques applied. Finally, we assess our implementation of the engine with computational performance benchmarks.

  20. Magnon dark modes and gradient memory

    PubMed Central

    Zhang, Xufeng; Zou, Chang-Ling; Zhu, Na; Marquardt, Florian; Jiang, Liang; Tang, Hong X.

    2015-01-01

    Extensive efforts have been expended in developing hybrid quantum systems to overcome the short coherence time of superconducting circuits by introducing the naturally long-lived spin degree of freedom. Among all the possible materials, single-crystal yttrium iron garnet has shown up recently as a promising candidate for hybrid systems, and various highly coherent interactions, including strong and even ultrastrong coupling, have been demonstrated. One distinct advantage in these systems is that spins form well-defined magnon modes, which allows flexible and precise tuning. Here we demonstrate that by dissipation engineering, a non-Markovian interaction dynamics between the magnon and the microwave cavity photon can be achieved. Such a process enables us to build a magnon gradient memory to store information in the magnon dark modes, which decouple from the microwave cavity and thus preserve a long lifetime. Our findings provide a promising approach for developing long-lifetime, multimode quantum memories. PMID:26568130

  1. Optical diagnostics of mercury jet for an intense proton target.

    PubMed

    Park, H; Tsang, T; Kirk, H G; Ladeinde, F; Graves, V B; Spampinato, P T; Carroll, A J; Titus, P H; McDonald, K T

    2008-04-01

An optical diagnostic system is designed and constructed for imaging a free mercury jet interacting with a high intensity proton beam in a pulsed high-field solenoid magnet. The optical imaging system employs a back-illuminated, laser shadow photography technique. Object illumination and image capture are transmitted through radiation-hard multimode optical fibers and flexible coherent imaging fibers. A retroreflected illumination design allows the entire passive imaging system to fit inside the bore of the solenoid magnet. A sequence of synchronized short laser light pulses is used to freeze the transient events, and the images are recorded by several high speed charge coupled devices. Quantitative and qualitative data analysis using image processing based on a probabilistic approach is described. The characteristics of the free mercury jet as a high power target for beam-jet interaction at various levels of the magnetic induction field are reported in this paper.

  2. Magnon dark modes and gradient memory.

    PubMed

    Zhang, Xufeng; Zou, Chang-Ling; Zhu, Na; Marquardt, Florian; Jiang, Liang; Tang, Hong X

    2015-11-16

    Extensive efforts have been expended in developing hybrid quantum systems to overcome the short coherence time of superconducting circuits by introducing the naturally long-lived spin degree of freedom. Among all the possible materials, single-crystal yttrium iron garnet has shown up recently as a promising candidate for hybrid systems, and various highly coherent interactions, including strong and even ultrastrong coupling, have been demonstrated. One distinct advantage in these systems is that spins form well-defined magnon modes, which allows flexible and precise tuning. Here we demonstrate that by dissipation engineering, a non-Markovian interaction dynamics between the magnon and the microwave cavity photon can be achieved. Such a process enables us to build a magnon gradient memory to store information in the magnon dark modes, which decouple from the microwave cavity and thus preserve a long lifetime. Our findings provide a promising approach for developing long-lifetime, multimode quantum memories.

  3. QSAR models for prediction of chromatographic behavior of homologous Fab variants.

    PubMed

    Robinson, Julie R; Karkov, Hanne S; Woo, James A; Krogh, Berit O; Cramer, Steven M

    2017-06-01

While quantitative structure activity relationship (QSAR) models have been employed successfully for the prediction of small model protein chromatographic behavior, there have been few reports to date on the use of this methodology for larger, more complex proteins. Recently our group generated focused libraries of antibody Fab fragment variants with different combinations of surface hydrophobicities and electrostatic potentials, and demonstrated that the unique selectivities of multimodal resins can be exploited to separate these Fab variants. In this work, results from linear salt gradient experiments with these Fabs were employed to develop QSAR models for six chromatographic systems, including multimodal (Capto MMC, Nuvia cPrime, and two novel ligand prototypes), hydrophobic interaction chromatography (HIC; Capto Phenyl), and cation exchange (CEX; CM Sepharose FF) resins. The models utilized newly developed "local descriptors" to quantify changes around point mutations in the Fab libraries as well as novel cluster descriptors recently introduced by our group. Subsequent rounds of feature selection and linearized machine learning algorithms were used to generate robust, well-validated models with high training set correlations (R² > 0.70) that were well suited for predicting elution salt concentrations in the various systems. The developed models were then used to predict the retention of a deamidated Fab and isotype variants, with varying success. The results represent the first successful utilization of QSAR for the prediction of chromatographic behavior of complex proteins such as Fab fragments in multimodal chromatographic systems. The framework presented here can be employed to facilitate process development for the purification of biological products from product-related impurities by in silico screening of resin alternatives. Biotechnol. Bioeng. 2017;114: 1231-1240. © 2016 Wiley Periodicals, Inc.
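The pipeline the abstract describes (structural descriptors, a linearized model, and an R² check against training data) can be sketched with ordinary least squares. The descriptor values and elution salt concentrations below are invented for illustration only; the actual study used far richer descriptor sets and feature selection.

```python
import numpy as np

# Hypothetical descriptor matrix: rows = Fab variants, columns = descriptors
# (e.g., local hydrophobicity and charge around mutation sites). Values are
# made up for this sketch, not taken from the study.
X = np.array([[0.10, 1.2],
              [0.35, 0.8],
              [0.50, 1.5],
              [0.75, 0.3],
              [0.90, 1.0]])
y = np.array([210.0, 260.0, 310.0, 330.0, 380.0])  # elution salt conc. (mM)

# Ordinary least squares with an intercept column
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Training-set correlation, as in the abstract's R^2 > 0.70 criterion
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(r2 > 0.70)  # → True for this toy data
```

In practice such a fit is only trusted after cross-validation on held-out variants, which is what the abstract's "well-validated models" refers to.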

  4. Radiolabeled Nanoparticles for Multimodality Tumor Imaging

    PubMed Central

    Xing, Yan; Zhao, Jinhua; Conti, Peter S.; Chen, Kai

    2014-01-01

Each imaging modality has its own unique strengths. Multimodality imaging, which takes advantage of the strengths of two or more imaging modalities, can provide overall structural, functional, and molecular information, offering the prospect of improved diagnostic and therapeutic monitoring abilities. Multimodal, multifunctional molecular imaging devices are of great value for cancer diagnosis and treatment, and have greatly accelerated the development of radionuclide-based multimodal molecular imaging. Radiolabeled nanoparticles bearing intrinsic properties have gained great interest in multimodality tumor imaging over the past decade. Significant breakthroughs have been made toward the development of various radiolabeled nanoparticles, which can be used as novel cancer diagnostic tools in multimodality imaging systems. It is expected that quantitative multimodality imaging with multifunctional radiolabeled nanoparticles will afford accurate and precise assessment of biological signatures in cancer in a real-time manner and thus pave the path toward personalized cancer medicine. This review addresses the advantages and challenges of developing multimodality imaging probes using different types of nanoparticles, and summarizes recent advances in the applications of radiolabeled nanoparticles for multimodal tumor imaging. The key issues involved in the translation of radiolabeled nanoparticles to the clinic are also discussed. PMID:24505237

  5. Key informant interviews test plan : model deployment of a regional, multi-modal 511 traveler information system

    DOT National Transportation Integrated Search

    2004-01-28

    This document presents the detailed plan to conduct the Key Informants Interviews Test, one of several test activities to be conducted as part of the national evaluation of the regional, multi-modal 511 Traveler Information System Model Deployment. T...

  6. A Multimodal Dialog System for Language Assessment: Current State and Future Directions. Research Report. ETS RR-17-21

    ERIC Educational Resources Information Center

    Suendermann-Oeft, David; Ramanarayanan, Vikram; Yu, Zhou; Qian, Yao; Evanini, Keelan; Lange, Patrick; Wang, Xinhao; Zechner, Klaus

    2017-01-01

    We present work in progress on a multimodal dialog system for English language assessment using a modular cloud-based architecture adhering to open industry standards. Among the modules being developed for the system, multiple modules heavily exploit machine learning techniques, including speech recognition, spoken language proficiency rating,…

  7. Nanoparticles in Higher-Order Multimodal Imaging

    NASA Astrophysics Data System (ADS)

    Rieffel, James Ki

Imaging procedures are a cornerstone of our current medical infrastructure. In everything from screening to diagnostics and treatment, medical imaging is perhaps our greatest tool in evaluating individual health. Recently, there has been a tremendous increase in the development of multimodal systems that combine the strengths of complementary imaging technologies to overcome their individual weaknesses. Clinically, this has manifested in the virtually universal manufacture of combined PET-CT scanners. With this push toward more integrated imaging, new contrast agents with multimodal functionality are needed. Nanoparticle-based systems are ideal candidates based on their unique size, properties, and diversity. In chapter 1, an extensive background on recent multimodal imaging agents capable of enhancing signal or contrast in three or more modalities is presented. Chapter 2 discusses the development and characterization of a nanoparticulate probe with hexamodal imaging functionality. It is my hope that the information contained in this thesis will demonstrate the many benefits of nanoparticles in multimodal imaging, and provide insight into the potential of fully integrated imaging.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barstow, Del R; Patlolla, Dilip Reddy; Mann, Christopher J

Abstract The data captured by existing standoff biometric systems typically yields lower biometric recognition performance than that of their close-range counterparts due to imaging challenges, pose challenges, and other factors. To assist in overcoming these limitations, such systems typically operate in a multi-modal capacity, as in Honeywell's Combined Face and Iris (CFAIRS) [21] system. While this improves performance, standoff systems have yet to be proven as accurate as their close-range equivalents. We present a standoff system capable of operating at up to 7 meters in range. Unlike many systems such as the CFAIRS, our system captures high quality 12 MP video, allowing for multi-sample as well as multi-modal comparison. We found that for standoff systems, multi-sample comparison improved performance more than multi-modal comparison. For a small test group of 50 subjects, we were able to achieve 100% rank-one recognition performance with our system.

  9. An Assessment Instrument for Identifying Counseling Needs of Elementary-Aged Students: The Multimodal Sentence Completion Form for Children (MSCF-C).

    ERIC Educational Resources Information Center

    Gamble, Charles W.; Hamblin, Arthur G.

    1986-01-01

    Discusses the use of a sentence completion instrument predicated on Lazarus' multimodal system. The instrument, entitled The Multimodal Sentence Completion Form for Children (MSCF-C), is designed to systematically assess client needs and assist in identifying intervention strategies. Presents a case study of a 12-year-old, sixth-grade student.…

  10. Activity-Based Protein Profiling of Microbes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadler, Natalie C.; Wright, Aaron T.

Activity-Based Protein Profiling (ABPP) in conjunction with multimodal characterization techniques has yielded impactful findings in microbiology, particularly in pathogen, bioenergy, drug discovery, and environmental research. Using small molecule chemical probes that react irreversibly with specific proteins or protein families in complex systems has provided insights into enzyme functions in central metabolic pathways, drug-protein interactions, and regulatory protein redox, for systems ranging from photoautotrophic cyanobacteria to mycobacteria, and combining live cell or cell extract ABPP with proteomics, molecular biology, modeling, and other techniques has greatly expanded our understanding of these systems. New opportunities for applying ABPP to microbial systems include: enhancing protein annotation, characterizing protein activities in myriad environments, and revealing signal transduction and regulatory mechanisms in microbial systems.

  11. Thermodynamic cycle in a cavity optomechanical system

    NASA Astrophysics Data System (ADS)

    Ian, Hou

    2014-07-01

    A cavity optomechanical system is initiated by the radiation pressure of a cavity field onto a mirror element acting as a quantum resonator. This radiation pressure can control the thermodynamic character of the mirror to some extent, such as by cooling its effective temperature. Here, we show that by properly engineering the spectral density of a thermal heat bath that interacts with a quantum system, the evolution of the quantum system can be effectively turned on and off. Inside a cavity optomechanical system, when the heat bath is realized by a multi-mode oscillator modelling of the mirror, this on-off effect translates to infusion or extraction of heat energy in and out of the cavity field, facilitating a four-stroke thermodynamic cycle.

  12. An Iterative Local Updating Ensemble Smoother for Estimation and Uncertainty Assessment of Hydrologic Model Parameters With Multimodal Distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Lin, Guang; Li, Weixuan; Wu, Laosheng; Zeng, Lingzao

    2018-03-01

Ensemble smoother (ES) has been widely used in inverse modeling of hydrologic systems. However, for problems where the distribution of model parameters is multimodal, using ES directly would be problematic. One popular solution is to use a clustering algorithm to identify each mode and update the clusters with ES separately. However, this strategy may not be very efficient when the dimension of parameter space is high or the number of modes is large. Alternatively, we propose in this paper a very simple and efficient algorithm, i.e., the iterative local updating ensemble smoother (ILUES), to explore multimodal distributions of model parameters in nonlinear hydrologic systems. The ILUES algorithm works by updating local ensembles of each sample with ES to explore possible multimodal distributions. To achieve satisfactory data matches in nonlinear problems, we adopt an iterative form of ES to assimilate the measurements multiple times. Numerical cases involving nonlinearity and multimodality are tested to illustrate the performance of the proposed method. It is shown that, overall, the ILUES algorithm can well quantify the parametric uncertainties of complex hydrologic models, regardless of whether a multimodal distribution exists.
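The core building block of ILUES is the ensemble-smoother update itself. Below is a minimal sketch of one iterative ES on a toy linear forward model (ILUES would apply this same update to a local ensemble around each sample rather than to the whole ensemble); all function names and numbers are mine, not the paper's.

```python
import numpy as np

def es_update(m, g, d_obs, sigma_obs, rng):
    """One ensemble-smoother update.

    m: (N, dim) parameter ensemble; g: (N, p) simulated observations;
    d_obs: (p,) measured data; sigma_obs: observation-error std.
    """
    N = m.shape[0]
    # Perturb the observations once per ensemble member
    d_pert = d_obs + sigma_obs * rng.standard_normal((N, d_obs.size))
    # Cross- and auto-covariances from ensemble anomalies
    Am = m - m.mean(axis=0)
    Ag = g - g.mean(axis=0)
    C_md = Am.T @ Ag / (N - 1)
    C_dd = Ag.T @ Ag / (N - 1)
    R = (sigma_obs ** 2) * np.eye(d_obs.size)
    # Kalman-type gain and update
    K = C_md @ np.linalg.inv(C_dd + R)
    return m + (d_pert - g) @ K.T

# Toy problem: forward model d = 2*m, true m = 3, observed d = 6
rng = np.random.default_rng(0)
m = rng.normal(0.0, 2.0, size=(200, 1))   # prior ensemble
for _ in range(4):                        # iterative assimilation
    g = 2.0 * m                           # run the forward model
    m = es_update(m, g, np.array([6.0]), 0.1, rng)
print(round(float(m.mean()), 1))          # → 3.0, close to the true value
```

For nonlinear models, repeated assimilation of the same data is usually damped (e.g., by inflating R per iteration, as in ES-MDA); that refinement is omitted here for brevity.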

  13. A 3D character animation engine for multimodal interaction on mobile devices

    NASA Astrophysics Data System (ADS)

    Sandali, Enrico; Lavagetto, Fabio; Pisano, Paolo

    2005-03-01

Talking virtual characters are graphical simulations of real or imaginary persons that enable natural and pleasant multimodal interaction with the user, by means of voice, eye gaze, facial expression and gestures. This paper presents an implementation of a 3D virtual character animation and rendering engine, compliant with the MPEG-4 standard, running on Symbian-based SmartPhones. Real-time animation of virtual characters on mobile devices represents a challenging task, since many limitations must be taken into account with respect to processing power, graphics capabilities, disk space and execution memory size. The proposed optimization techniques make it possible to overcome these issues, guaranteeing smooth and synchronous animation of facial expressions and lip movements on mobile phones such as Sony-Ericsson's P800 and Nokia's 6600. The animation engine is specifically targeted at the development of new "Over The Air" services, based on embodied conversational agents, with applications in entertainment (interactive story tellers), navigation aid (virtual guides to web sites and mobile services), news casting (virtual newscasters) and education (interactive virtual teachers).

  14. Multimode entanglement in reconfigurable graph states using optical frequency combs

    PubMed Central

    Cai, Y.; Roslund, J.; Ferrini, G.; Arzani, F.; Xu, X.; Fabre, C.; Treps, N.

    2017-01-01

Multimode entanglement is an essential resource for quantum information processing and quantum metrology. However, multimode entangled states are generally constructed by targeting a specific graph configuration. This leads to a fixed experimental setup with reduced versatility and scalability. Here we demonstrate an optical on-demand, reconfigurable multimode entangled state, using an intrinsically multimode quantum resource and a homodyne detection apparatus. Without altering either the initial squeezing source or experimental architecture, we realize the construction of thirteen cluster states of various sizes and connectivities as well as the implementation of a secret sharing protocol. In particular, this system enables the interrogation of quantum correlations and fluctuations for any multimode Gaussian state. This opens an avenue for implementing on-demand quantum information processing by adapting only the measurement process and not the experimental layout. PMID:28585530

  15. An innovative multimodal virtual platform for communication with devices in a natural way

    NASA Astrophysics Data System (ADS)

    Kinkar, Chhayarani R.; Golash, Richa; Upadhyay, Akhilesh R.

    2012-03-01

As technology advances, people are increasingly interested in communicating with machines and computers in a natural way. This makes devices more compact and portable by eliminating remotes, keyboards and similar peripherals, and it helps users live in an environment freer from electromagnetic waves. This idea has made recognition of natural modalities in human-computer interaction a most appealing and promising research field. At the same time, it has been observed that using a single mode of interaction limits the full utilization of commands as well as data flow. In this paper a multimodal platform is proposed in which, out of many natural modalities such as eye gaze, speech, voice and face, human gestures are combined with human voice to minimize the mean square error. This loosens the strict environment needed for accurate and robust interaction with a single mode. Gestures complement speech: gestures are ideal for direct object manipulation, while natural language is suited to descriptive tasks. Human-computer interaction basically requires two broad stages, recognition and interpretation. Recognition and interpretation of natural modalities in complex binary instructions is a difficult task, as it integrates the real world with a virtual environment. The main idea of the paper is to develop an efficient model for fusing data coming from heterogeneous sensors, namely a camera and a microphone. Through this paper we show that efficiency is increased when heterogeneous data (image and voice) are combined at the feature level using artificial intelligence. The long-term goal of this work is to design a robust system for users who are physically impaired or have little technical knowledge.
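Feature-level fusion of the kind described above is commonly implemented by normalizing each modality's feature vector and concatenating them before classification. The sketch below illustrates only that step, with invented feature values; the paper's actual fusion model is not specified beyond "feature level".

```python
import numpy as np

def fuse_features(image_feat, audio_feat):
    """Feature-level fusion: z-normalize each modality separately, then
    concatenate into one vector for a downstream classifier (sketch)."""
    def znorm(v):
        s = v.std()
        return (v - v.mean()) / s if s > 0 else v - v.mean()
    return np.concatenate([znorm(image_feat), znorm(audio_feat)])

img = np.array([0.2, 0.8, 0.5, 0.1])   # e.g., hand-gesture shape descriptors
aud = np.array([12.0, 7.5, 9.1])       # e.g., voice spectral features
fused = fuse_features(img, aud)
print(fused.shape)  # → (7,)
```

Per-modality normalization matters here: without it, the modality with the larger numeric range would dominate the fused vector.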

  16. CD-ROM Multimodal Affordances: Classroom Interaction Perspectives in the Malaysian English Literacy Hour

    ERIC Educational Resources Information Center

    Gardner, Sheena; Yaacob, Aizan

    2009-01-01

    CD-ROM affordances are explored in this article through participation in classroom interaction. CD-ROMs for shared reading of animated stories and language work were introduced to all Malaysian primary schools in 2003 for the Year 1 English Literacy Hour. We present classroom interaction extracts that show how the same CD-ROMs offer different…

  17. Learners' Multimodal Displays of Willingness to Participate in Classroom Interaction in the L2 and CLIL Contexts

    ERIC Educational Resources Information Center

    Evnitskaya, Natalia; Berger, Evelyne

    2017-01-01

    Drawing on recent conversation-analytic and socio-interactionist research on students' participation in L1 and L2 classroom interaction in teacher-fronted activities, this paper makes a step further by presenting an exploratory study of students' displays of willingness to participate (WTP) in classroom interaction and pedagogical activities…

  18. Image-Language Interaction in Online Reading Environments: Challenges for Students' Reading Comprehension

    ERIC Educational Resources Information Center

    Chan, Eveline; Unsworth, Len

    2011-01-01

    This paper presents the qualitative results of a study of students' reading of multimodal texts in an interactive, online environment. The study forms part of a larger project which addressed image-language interaction as an important dimension of language pedagogy and assessment for students growing up in a multimedia digital age. Thirty-two Year…

  19. Multimodal quantitative phase and fluorescence imaging of cell apoptosis

    NASA Astrophysics Data System (ADS)

    Fu, Xinye; Zuo, Chao; Yan, Hao

    2017-06-01

Fluorescence microscopy, utilizing fluorescence labeling, has the capability to observe intracellular changes that transmitted- and reflected-light microscopy techniques cannot resolve. However, the parts without fluorescence labeling are not imaged, so processes happening simultaneously in those parts cannot be revealed. Moreover, fluorescence imaging is 2D, so information along the depth axis is missing, and the information about the labeled parts is therefore also incomplete. Quantitative phase imaging, on the other hand, is capable of imaging cells in 3D in real time through phase calculation. However, its resolution is limited by optical diffraction, and it cannot observe intracellular changes below 200 nanometers. In this work, fluorescence imaging and quantitative phase imaging are combined to build a multimodal imaging system. Such a system has the capability to simultaneously observe detailed intracellular phenomena and 3D cell morphology. In this study the proposed multimodal imaging system is used to observe cell behavior during apoptosis. The aim is to highlight the limitations of fluorescence microscopy and to point out the advantages of multimodal quantitative phase and fluorescence imaging. The proposed multimodal quantitative phase imaging could be further applied in cell-related biomedical research, such as tumor studies.

  20. Risk-Based Neuro-Grid Architecture for Multimodal Biometrics

    NASA Astrophysics Data System (ADS)

    Venkataraman, Sitalakshmi; Kulkarni, Siddhivinayak

    Recent research indicates that multimodal biometrics is the way forward for a highly reliable adoption of biometric identification systems in various applications, such as banks, businesses, government and even home environments. However, such systems would require large distributed datasets with multiple computational realms spanning organisational boundaries and individual privacies.

  1. Operando Multi-modal Synchrotron Investigation for Structural and Chemical Evolution of Cupric Sulfide (CuS) Additive in Li-S battery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Ke; Zhao, Chonghang; Lin, Cheng-Hung

Conductive metal sulfides are promising multi-functional additives for future lithium-sulfur (Li-S) batteries. These can increase the sulfur cathode’s electrical conductivity to improve the battery’s power capability, as well as contribute to the overall cell-discharge capacity. This multi-functional electrode design showed initial promise; however, complicated interactions at the system level are accompanied by some detrimental side effects. The metal sulfide additives with a chemical conversion as the reaction mechanism, e.g., CuS and FeS2, can increase the theoretical capacity of the Li-S system. However, these additives may cause undesired parasitic reactions, such as the dissolution of the additive in the electrolyte. Studying such complex reactions presents a challenge because it requires experimental methods that can track the chemical and structural evolution of the system during an electrochemical process. To address the fundamental mechanisms in these systems, we employed an operando multimodal x-ray characterization approach to study the structural and chemical evolution of the metal sulfide—utilizing powder diffraction and fluorescence imaging to resolve the former and absorption spectroscopy the latter—during lithiation and de-lithiation of a Li-S battery with CuS as the multi-functional cathode additive. The resulting elucidation of the structural and chemical evolution of the system leads to a new description of the reaction mechanism.

  2. Operando Multi-modal Synchrotron Investigation for Structural and Chemical Evolution of Cupric Sulfide (CuS) Additive in Li-S battery

    DOE PAGES

    Sun, Ke; Zhao, Chonghang; Lin, Cheng-Hung; ...

    2017-10-11

Conductive metal sulfides are promising multi-functional additives for future lithium-sulfur (Li-S) batteries. These can increase the sulfur cathode’s electrical conductivity to improve the battery’s power capability, as well as contribute to the overall cell-discharge capacity. This multi-functional electrode design showed initial promise; however, complicated interactions at the system level are accompanied by some detrimental side effects. The metal sulfide additives with a chemical conversion as the reaction mechanism, e.g., CuS and FeS2, can increase the theoretical capacity of the Li-S system. However, these additives may cause undesired parasitic reactions, such as the dissolution of the additive in the electrolyte. Studying such complex reactions presents a challenge because it requires experimental methods that can track the chemical and structural evolution of the system during an electrochemical process. To address the fundamental mechanisms in these systems, we employed an operando multimodal x-ray characterization approach to study the structural and chemical evolution of the metal sulfide—utilizing powder diffraction and fluorescence imaging to resolve the former and absorption spectroscopy the latter—during lithiation and de-lithiation of a Li-S battery with CuS as the multi-functional cathode additive. The resulting elucidation of the structural and chemical evolution of the system leads to a new description of the reaction mechanism.

  3. Optical/MRI Multimodality Molecular Imaging

    NASA Astrophysics Data System (ADS)

    Ma, Lixin; Smith, Charles; Yu, Ping

    2007-03-01

    Multimodality molecular imaging that combines anatomical and functional information has shown promise in development of tumor-targeted pharmaceuticals for cancer detection or therapy. We present a new multimodality imaging technique that combines fluorescence molecular tomography (FMT) and magnetic resonance imaging (MRI) for in vivo molecular imaging of preclinical tumor models. Unlike other optical/MRI systems, the new molecular imaging system uses parallel phase acquisition based on heterodyne principle. The system has a higher accuracy of phase measurements, reduced noise bandwidth, and an efficient modulation of the fluorescence diffuse density waves. Fluorescent Bombesin probes were developed for targeting breast cancer cells and prostate cancer cells. Tissue phantom and small animal experiments were performed for calibration of the imaging system and validation of the targeting probes.

  4. HCI∧2 framework: a software framework for multimodal human-computer interaction systems.

    PubMed

    Shen, Jie; Pantic, Maja

    2013-12-01

    This paper presents a novel software framework for the development and research in the area of multimodal human-computer interface (MHCI) systems. The proposed software framework, which is called the HCI∧2 Framework, is built upon publish/subscribe (P/S) architecture. It implements a shared-memory-based data transport protocol for message delivery and a TCP-based system management protocol. The latter ensures that the integrity of system structure is maintained at runtime. With the inclusion of bridging modules, the HCI∧2 Framework is interoperable with other software frameworks including Psyclone and ActiveMQ. In addition to the core communication middleware, we also present the integrated development environment (IDE) of the HCI∧2 Framework. It provides a complete graphical environment to support every step in a typical MHCI system development process, including module development, debugging, packaging, and management, as well as the whole system management and testing. The quantitative evaluation indicates that our framework outperforms other similar tools in terms of average message latency and maximum data throughput under a typical single PC scenario. To demonstrate HCI∧2 Framework's capabilities in integrating heterogeneous modules, we present several example modules working with a variety of hardware and software. We also present an example of a full system developed using the proposed HCI∧2 Framework, which is called the CamGame system and represents a computer game based on hand-held marker(s) and low-cost camera(s).
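The publish/subscribe pattern at the core of the framework described above can be illustrated with a few lines of code. This is a generic in-process sketch of P/S message delivery, not the HCI∧2 Framework's actual API (which uses shared-memory transport and a TCP management protocol); all names here are invented.

```python
from collections import defaultdict

class MessageBus:
    """Minimal in-process publish/subscribe hub (illustrative only)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to receive all messages on a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to every subscriber of the topic."""
        for callback in self._subscribers[topic]:
            callback(message)

bus = MessageBus()
received = []
bus.subscribe("gesture", received.append)    # one module registers interest
bus.publish("gesture", {"type": "swipe"})    # another module emits an event
print(received)  # → [{'type': 'swipe'}]
```

The appeal for MHCI systems is loose coupling: a gesture recognizer and a fusion module never reference each other directly, only shared topics, which is what makes modules independently replaceable.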

  5. Visual Feedback of Tongue Movement for Novel Speech Sound Learning

    PubMed Central

    Katz, William F.; Mehta, Sonya

    2015-01-01

    Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one's own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker's learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers' productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing. PMID:26635571

  6. Multimodal approaches for emotion recognition: a survey

    NASA Astrophysics Data System (ADS)

    Sebe, Nicu; Cohen, Ira; Gevers, Theo; Huang, Thomas S.

    2004-12-01

    Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing: emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This paper explores new ways of human-computer interaction that enable the computer to be more aware of the user's emotional and attentional expressions. We present the basic research in the field and recent advances in emotion recognition from facial, vocal, and physiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and advocate the use of probabilistic graphical models when fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.
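    The simplest probabilistic fusion rule the survey's recommendation admits is a naive-Bayes-style product of per-modality posteriors, assuming the modalities are conditionally independent given the emotion class. A hedged sketch (the posterior values below are made up for illustration):

```python
import math

def fuse_posteriors(modal_posteriors, priors):
    """Product-rule fusion of per-modality class posteriors, assuming the
    modalities are conditionally independent given the class:
        p(c | all modalities) ∝ p(c)**(1 - M) * Π_m p(c | modality m)
    Computed in log space for numerical stability."""
    M = len(modal_posteriors)
    log_scores = {}
    for c in priors:
        s = (1 - M) * math.log(priors[c])
        for post in modal_posteriors:
            s += math.log(post[c])
        log_scores[c] = s
    # normalize back to probabilities
    top = max(log_scores.values())
    unnorm = {c: math.exp(v - top) for c, v in log_scores.items()}
    z = sum(unnorm.values())
    return {c: u / z for c, u in unnorm.items()}

# hypothetical per-modality posteriors for a two-class problem
face = {"happy": 0.6, "sad": 0.4}
voice = {"happy": 0.7, "sad": 0.3}
prior = {"happy": 0.5, "sad": 0.5}
fused = fuse_posteriors([face, voice], prior)
print(round(fused["happy"], 3))  # → 0.778
```

    Note how two weakly agreeing modalities yield a fused posterior more confident than either alone; richer graphical models relax the independence assumption that makes this rule so simple.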

  8. Intertextuality and Dialogic Interaction in Students' Online Text Construction

    ERIC Educational Resources Information Center

    Ronan, Briana

    2015-01-01

    This study examines the online writing practices of adolescent emergent bilinguals through the mediating lenses of dialogic interaction and intertextuality. Using a multimodal discourse analysis approach, the study traces how three students develop online academic texts through intertextual moves that traverse modal boundaries. The analysis…

  9. Development of a highly automated system for the remote evaluation of individual tree parameters

    Treesearch

    Richard Pollock

    2000-01-01

    A highly automated procedure for remotely estimating individual tree location, crown diameter, species class, and height has been developed. The procedure involves the use of a multimodal airborne sensing system that consists of a digital frame camera, a scanning laser rangefinder, and a position and orientation measurement system. Data from the multimodal sensing...

  10. Melanoma detection using smartphone and multimode hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    MacKinnon, Nicholas; Vasefi, Fartash; Booth, Nicholas; Farkas, Daniel L.

    2016-04-01

    This project's goal is to determine how to effectively implement a technology continuum from a low-cost, remotely deployable imaging device to a more sophisticated multimode imaging system within standard clinical practice. In this work, a smartphone is used in conjunction with an optical attachment to capture cross-polarized and collinear color images of a nevus, which are analyzed to quantify chromophore distribution. The nevus is also imaged by a multimode hyperspectral system, our proprietary SkinSpect™ device. The relative accuracy and biological plausibility of the two systems' algorithms are compared to assess the feasibility of in-home or primary care practitioner smartphone screening prior to rigorous clinical analysis via the SkinSpect.

  11. The Role of Multimodal Analgesia in Spine Surgery.

    PubMed

    Kurd, Mark F; Kreitz, Tyler; Schroeder, Gregory; Vaccaro, Alexander R

    2017-04-01

    Optimal postoperative pain control allows for faster recovery, reduced complications, and improved patient satisfaction. Historically, pain management after spine surgery relied heavily on opioid medications. Multimodal regimens were developed to reduce opioid consumption and associated adverse effects. Multimodal approaches used in orthopaedic surgery of the lower extremity, especially joint arthroplasty, have been well described and studies have shown reduced opioid consumption, improved pain and function, and decreased length of stay. A growing body of evidence supports multimodal analgesia in spine surgery. Methods include the use of preemptive analgesia, NSAIDs, the neuromodulatory agents gabapentin and pregabalin, acetaminophen, and extended-action local anesthesia. The development of a standard approach to multimodal analgesia in spine surgery requires extensive assessment of the literature. Because a substantial number of spine surgeries are performed annually, a standardized approach to multimodal analgesia may provide considerable benefits, particularly in the context of the increased emphasis on accountability within the healthcare system.

  12. Potential of Cognitive Computing and Cognitive Systems

    NASA Astrophysics Data System (ADS)

    Noor, Ahmed K.

    2015-01-01

    Cognitive computing and cognitive technologies are game changers for future engineering systems, as well as for engineering practice and training. They are major drivers for knowledge automation work, and the creation of cognitive products with higher levels of intelligence than current smart products. This paper gives a brief review of cognitive computing and some of the cognitive engineering systems activities. The potential of cognitive technologies is outlined, along with a brief description of future cognitive environments, incorporating cognitive assistants - specialized proactive intelligent software agents designed to follow and interact with humans and other cognitive assistants across the environments. The cognitive assistants engage, individually or collectively, with humans through a combination of adaptive multimodal interfaces, and advanced visualization and navigation techniques. The realization of future cognitive environments requires the development of a cognitive innovation ecosystem for the engineering workforce. The continuously expanding major components of the ecosystem include integrated knowledge discovery and exploitation facilities (incorporating predictive and prescriptive big data analytics); novel cognitive modeling and visual simulation facilities; cognitive multimodal interfaces; and cognitive mobile and wearable devices. The ecosystem will provide timely, engaging, personalized/collaborative learning and effective decision making. It will stimulate creativity and innovation, and prepare the participants to work in future cognitive enterprises and develop new cognitive products of increasing complexity. http://www.aee.odu.edu/cognitivecomp

  13. Multicomponent-flow analyses by multimode method of characteristics

    USGS Publications Warehouse

    Lai, Chintu

    1994-01-01

    For unsteady open-channel flows having N interacting unknown variables, a system of N mutually independent partial differential equations can be used to describe the flow field. The system generally belongs to marching-type problems and permits transformation into characteristic equations that are associated with N distinct characteristic directions. Because characteristics can be considered 'wave' or 'disturbance' propagation, a fluvial system so described can be viewed as adequately definable using these N component waves. A numerical algorithm to solve the N families of characteristics can then be introduced for formulation of an N-component flow-simulation model. The multimode method of characteristics (MMOC), a new numerical scheme that combines the capacities of several specified-time-interval (STI) schemes of the method of characteristics, makes numerical modeling of such N-component riverine flows feasible and attainable. Merging different STI schemes yields different kinds of MMOC schemes, two of which are displayed herein. With the MMOC, each characteristic is dynamically treated by an appropriate numerical mode, which should lead to an effective and suitable global simulation covering various types of unsteady flow. The scheme is always linearly stable and its numerical accuracy can be systematically analyzed. By increasing the N value, one can develop a progressively sophisticated model that addresses increasingly complex river-mechanics problems.
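    A single STI characteristics scheme of the kind the MMOC merges can be sketched for the simplest possible case: linear advection with one characteristic family on a periodic grid. Each node's characteristic is traced back one time step and the solution is interpolated at its foot. This is an illustration of backward characteristic tracing only, not the MMOC itself:

```python
import math

def moc_advect(u, a, dx, dt, steps):
    """Advance u_t + a*u_x = 0 on a periodic grid by tracing each node's
    characteristic x - a*t = const back one time step and linearly
    interpolating at its foot (a specified-time-interval scheme)."""
    n = len(u)
    courant = a * dt / dx
    for _ in range(steps):
        new = [0.0] * n
        for j in range(n):
            xf = j - courant                 # foot of the characteristic (index units)
            j0 = math.floor(xf)
            frac = xf - j0                   # linear interpolation weight
            new[j] = (1 - frac) * u[j0 % n] + frac * u[(j0 + 1) % n]
        u = new
    return u

# at unit Courant number the foot lands exactly on a node,
# so the pulse is advected without interpolation error
u0 = [0.0] * 8
u0[2] = 1.0
u3 = moc_advect(u0, a=1.0, dx=1.0, dt=1.0, steps=3)
print(u3.index(1.0))  # → 5
```

    With N coupled variables, the same tracing is done once per characteristic family, each with its own propagation speed; the MMOC's contribution is choosing a different numerical mode per family.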

  14. Amphiphilic semiconducting polymer as multifunctional nanocarrier for fluorescence/photoacoustic imaging guided chemo-photothermal therapy.

    PubMed

    Jiang, Yuyan; Cui, Dong; Fang, Yuan; Zhen, Xu; Upputuri, Paul Kumar; Pramanik, Manojit; Ding, Dan; Pu, Kanyi

    2017-11-01

    Chemo-photothermal nanotheranostics has the advantage of a synergistic therapeutic effect, providing opportunities for optimized cancer therapy. However, current chemo-photothermal nanotheranostic systems generally comprise more than three components, encountering the potential issues of unstable nanostructures and unexpected conflicts in optical and biophysical properties among the different components. We herein synthesize an amphiphilic semiconducting polymer (PEG-PCB) and utilize it as a multifunctional nanocarrier to simplify chemo-photothermal nanotheranostics. PEG-PCB has a semiconducting backbone that not only serves as the diagnostic component for near-infrared (NIR) fluorescence and photoacoustic (PA) imaging, but also acts as the therapeutic agent for photothermal therapy. In addition, the hydrophobic backbone of PEG-PCB provides strong hydrophobic and π-π interactions with aromatic anticancer drugs such as doxorubicin for drug encapsulation and delivery. This trifunctionality of PEG-PCB eventually results in a greatly simplified nanotheranostic system with only two components yet multimodal imaging and therapeutic capacities, permitting effective NIR fluorescence/PA imaging guided chemo-photothermal therapy of cancer in living mice. Our study thus provides a molecular engineering approach to integrate essential properties into one polymer for multimodal nanotheranostics. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Pollution going multimodal: the complex impact of the human-altered sensory environment on animal perception and performance.

    PubMed

    Halfwerk, Wouter; Slabbekoorn, Hans

    2015-04-01

    Anthropogenic sensory pollution is affecting ecosystems worldwide. Human actions generate acoustic noise, emanate artificial light and emit chemical substances. All of these pollutants are known to affect animals. Most studies on anthropogenic pollution address the impact of pollutants in unimodal sensory domains. High levels of anthropogenic noise, for example, have been shown to interfere with acoustic signals and cues. However, animals rely on multiple senses, and pollutants often co-occur. Thus, a full ecological assessment of the impact of anthropogenic activities requires a multimodal approach. We describe how sensory pollutants can co-occur and how covariance among pollutants may differ from natural situations. We review how animals combine information that arrives at their sensory systems through different modalities and outline how sensory conditions can interfere with multimodal perception. Finally, we describe how sensory pollutants can affect the perception, behaviour and endocrinology of animals within and across sensory modalities. We conclude that sensory pollution can affect animals in complex ways due to interactions among sensory stimuli, neural processing and behavioural and endocrinal feedback. We call for more empirical data on covariance among sensory conditions, for instance, data on correlated levels of noise and light pollution. Furthermore, we encourage researchers to test animal responses to a full-factorial set of sensory pollutants in the presence or the absence of ecologically important signals and cues. We realize that such an approach is often time- and energy-consuming, but we think this is the only way to fully understand the multimodal impact of sensory pollution on animal performance and perception. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  16. Deep features for efficient multi-biometric recognition with face and ear images

    NASA Astrophysics Data System (ADS)

    Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng

    2017-07-01

    Recently, multimodal biometric systems have received considerable research interest in many applications, especially in the field of security. Multimodal systems can increase resistance to spoof attacks, provide more detail and flexibility, and lead to better performance and lower error rates. In this paper, we present a multimodal biometric system based on face and ear, and show how deep features extracted by Convolutional Neural Networks (CNNs) from face and ear images provide more powerful, discriminative, and robust representations. First, deep features for face and ear images are extracted based on VGG-M Net. Second, the extracted deep features are fused by using traditional concatenation and a Discriminant Correlation Analysis (DCA) algorithm. Third, a multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate of up to 100% using face and ear. In addition, the results indicate that fusion based on DCA is superior to traditional fusion.
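    The first of the two fusion strategies above, serial concatenation, is trivially simple; a hedged sketch of the pipeline follows, with a toy nearest-centroid matcher standing in for the paper's multiclass SVM and with made-up 2-D feature vectors replacing real CNN embeddings:

```python
def concat_fusion(face_feat, ear_feat):
    """Feature-level fusion by simple concatenation (the paper's DCA
    alternative, which decorrelates across classes, is not reproduced)."""
    return face_feat + ear_feat

def nearest_centroid(train, query):
    """Toy matcher standing in for the paper's multiclass SVM."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # one centroid per enrolled identity
    centroids = {
        label: [sum(col) / len(vecs) for col in zip(*vecs)]
        for label, vecs in train.items()
    }
    return min(centroids, key=lambda c: sqdist(centroids[c], query))

# hypothetical 2-D face and ear feature vectors per enrolled subject
train = {
    "subject_a": [concat_fusion([0.1, 0.2], [0.9, 0.1])],
    "subject_b": [concat_fusion([0.8, 0.7], [0.2, 0.6])],
}
probe = concat_fusion([0.15, 0.25], [0.85, 0.15])
print(nearest_centroid(train, probe))  # → subject_a
```

    Concatenation doubles the feature dimension without modeling cross-modal correlation, which is why dimension-reducing fusions such as DCA can outperform it.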

  17. An Evaluation of Multimodal Interactions with Technology while Learning Science Concepts

    ERIC Educational Resources Information Center

    Anastopoulou, Stamatina; Sharples, Mike; Baber, Chris

    2011-01-01

    This paper explores the value of employing multiple modalities to facilitate science learning with technology. In particular, it is argued that when multiple modalities are employed, learners construct strong relations between physical movement and visual representations of motion. Body interactions with visual representations, enabled by…

  18. Multimodal Application for Foreign Language Teaching

    ERIC Educational Resources Information Center

    Magal-Royo, Teresa; Gimenez-Lopez, Jose Luis; Pairy, Blas; Garcia-Laborda, Jesus; Gonzalez-Del Rio, Jimena

    2011-01-01

    The current development of educational applications for language learning has experienced a qualitative change in the criteria of interaction between users and devices due to the technological advances of input and output data through keyboard, mouse, stylus, tactile screen, etc. The multiple interactions generated in a natural way by humans…

  19. Multimodal Transcription of Video: Examining Interaction in Early Years Classrooms

    ERIC Educational Resources Information Center

    Cowan, Kate

    2014-01-01

    Video is an increasingly popular data collection tool for those undertaking social research, offering a temporal, sequential, fine-grained record which is durable, malleable and sharable. These characteristics make video a valuable resource for researching Early Years classrooms, particularly with regard to the study of children's interaction in…

  20. The Influential Interactions of Pre-Kindergarten Writers

    ERIC Educational Resources Information Center

    Kissel, Brian; Hansen, Jane; Tower, Holly; Lawrence, Jody

    2011-01-01

    This article examines six years of ethnographic research in Robyn Davis's pre-kindergarten classroom in the USA. Using a theoretical framework to embed writing within a social semiotic that is multimodal and has social intent (Street, 2003), the authors show how children used interactions during writing to create various written products. Three…

  1. A Study of Multimodal Discourse in the Design of Interactive Digital Material for Language Learning

    ERIC Educational Resources Information Center

    Burset, Silvia; Bosch, Emma; Pujolà, Joan-Tomàs

    2016-01-01

    This study analyses some published interactive materials for the learning of Spanish as a first language and English as a Foreign Language (EFL) commonly used in primary and secondary education in Spain. The present investigation looks into the relationships between text and image on the interface of Interactive Digital Material (IDM) to develop…

  2. Virtual Teleoperation for Unmanned Aerial Vehicles

    DTIC Science & Technology

    2012-01-24

    Gilbert, S., “Wayfinder: Evaluating Multitouch Interaction in Supervisory Control of Unmanned Vehicles,” Proceedings of ASME 2nd World Conference on... interactive virtual reality environment that fuses available information into a coherent picture that can be viewed from multiple perspectives and scales...for multimodal interaction • Generally abstracted controller hardware and graphical interfaces facilitating deployment on a variety of VR platform

  3. Image-guided plasma therapy of cutaneous wound

    NASA Astrophysics Data System (ADS)

    Zhang, Zhiwu; Ren, Wenqi; Yu, Zelin; Zhang, Shiwu; Yue, Ting; Xu, Ronald

    2014-02-01

    The wound healing process involves the reparative phases of inflammation, proliferation, and remodeling. Interrupting any of these phases may result in chronically unhealed wounds, amputation, or even patient death. Despite the clinical significance in chronic wound management, no effective methods have been developed for quantitative image-guided treatment. We integrated a multimodal imaging system with a cold atmospheric plasma probe for image-guided treatment of chronic wounds. The multimodal imaging system offers noninvasive, painless, simultaneous, and quantitative assessment of cutaneous wound healing. Cold atmospheric plasma accelerates the wound healing process through many mechanisms, including decontamination, coagulation, and stimulation of wound healing. The therapeutic effect of cold atmospheric plasma is studied in vivo under the guidance of the multimodal imaging system. Cutaneous wounds are created on the dorsal skin of nude mice. During the healing process, the sample wound is treated by cold atmospheric plasma at different controlled dosages, while the control wound heals naturally. The multimodal imaging system, integrating a multispectral imaging module and a laser speckle imaging module, is used to collect information on cutaneous tissue oxygenation (i.e., oxygen saturation, StO2) and blood perfusion simultaneously to assess and guide the plasma therapy. Our preliminary tests show that cold atmospheric plasma in combination with multimodal imaging guidance has the potential to facilitate the healing of chronic wounds.

  4. Collaborative Micro Aerial Vehicle Exploration of Outdoor Environments

    DTIC Science & Technology

    2010-02-01

    with accelerometer or multitouch capabilities have also simply followed the same WYSIWYG paradigm. Finally, no HRI research exists on interacting ...Nudge Control relies on multimodal interaction (multitouch gestures and tilting) to create a rich control interaction without cluttering the display...used in one of two different modes. NG mode uses a set of tilting gestures while CT mode uses multitouch gestures to interact with the vehicle. Both of

  5. Optical sensor in planar configuration based on multimode interference

    NASA Astrophysics Data System (ADS)

    Blahut, Marek

    2017-08-01

    In the paper, a numerical analysis of optical sensors based on multimode interference in a planar, one-dimensional, step-index configuration is presented. The structure consists of single-mode input and output waveguides and a multimode waveguide that guides only a few modes. The material parameters discussed refer to an SU8 polymer waveguide on a SiO2 substrate. The optical system described is designed for the analysis of biological substances.

  6. Multimode four-wave mixing in an unresolved sideband optomechanical system

    NASA Astrophysics Data System (ADS)

    Li, Zongyang; You, Xiang; Li, Yongmin; Liu, Yong-Chun; Peng, Kunchi

    2018-03-01

    We have studied multimode four-wave mixing (FWM) in an unresolved sideband cavity optomechanical system. The radiation pressure coupling between the cavity fields and multiple mechanical modes results in the formation of a series of tripod-type energy-level systems, which induce the multimode FWM phenomenon. The FWM mechanism enables remarkable amplification of a weak signal field accompanied by the generation of an FWM field when only a microwatt-level pump field is applied. For proper system parameters, the amplified signal and FWM fields have equal intensity with opposite phases. The gain and frequency response bandwidth of the signal field can be dynamically tuned by varying the pump intensity, optomechanical coupling strength, and additional feedback control. Under certain conditions, the frequency response bandwidth can be very narrow and reaches the level of hertz.

  7. Multimodality bronchoscopic imaging of tracheopathica osteochondroplastica

    NASA Astrophysics Data System (ADS)

    Colt, Henri; Murgu, Septimiu D.; Ahn, Yeh-Chan; Brenner, Matt

    2009-05-01

    Results of a commercial optical coherence tomography system used as part of a multimodality diagnostic bronchoscopy platform are presented for a 61-year-old patient with central airway obstruction from tracheopathica osteochondroplastica. Comparison to results of white-light bronchoscopy, histology, and endobronchial ultrasound examination is accompanied by a discussion of resolution, penetration depth, contrast, and field of view of these imaging modalities. White-light bronchoscopy revealed irregularly shaped, firm submucosal nodules along cartilaginous structures of the anterior and lateral walls of the trachea, sparing the muscular posterior membrane. Endobronchial ultrasound showed a hyperechoic density of 0.4 cm thickness. Optical coherence tomography (OCT) was performed using a commercially available, compact time-domain OCT system (Niris System, Imalux Corp., Cleveland, Ohio) with a magnetically actuating probe (two-dimensional, front imaging, and inside actuation). Images showed epithelium, upper submucosa, and osseous submucosal nodule layers corresponding with histopathology. To our knowledge, this is the first time these commercially available systems have been used as part of a multimodality bronchoscopy platform to study diagnostic imaging of a benign disease causing central airway obstruction. Further studies are needed to optimize these systems for pulmonary applications and to determine how new-generation imaging modalities will be integrated into a multimodality bronchoscopy platform.

  8. Multimodal transport and TransLoad facilities in Arkansas.

    DOT National Transportation Integrated Search

    2015-01-01

    National Priorities consist of building a clean and efficient 21st century transportation sector, and Multimodal Transportation is one of five Transportation System Efficiency strategies at the US Department of Energy. Six locomotives co...

  9. Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.

    PubMed

    Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie

    2018-06-12

    Particle swarm optimization (PSO) is a powerful metaheuristic population-based global optimization algorithm. However, when it is applied to nonseparable objective functions, its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant PSO algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates superior performance across several nonlinear, multimodal benchmark functions compared with the rotation-invariant PSO algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field was carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also presents better performance compared to a genetic algorithm in optimizing the parameters of a ReaxFF-lg correction model. The computational framework is implemented in a stand-alone C++ code that allows the straightforward development of ReaxFF reactive force fields.
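    The enhancement the abstract describes, adding an isotropic Gaussian mutation step to PSO, can be sketched on the Rastrigin function, a standard multimodal benchmark. This is a hedged illustration with conventional default coefficients, not the paper's rotation-invariant variant or its tuned parameters:

```python
import math
import random

def rastrigin(x):
    """Classic multimodal benchmark; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def pso_gaussian(f, dim=2, n_particles=30, iters=200, sigma=0.1, seed=1):
    """Inertia-weight PSO with an isotropic Gaussian mutation operator."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.12, 5.12) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.72, 1.49, 1.49          # conventional default coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            # isotropic Gaussian mutation: occasional random kick that lets
            # particles escape local minima of the multimodal landscape
            if rng.random() < 0.1:
                pos[i] = [x + rng.gauss(0.0, sigma) for x in pos[i]]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

best, best_f = pso_gaussian(rastrigin)   # best_f is typically close to 0
```

    Without the mutation step the swarm can stagnate in one of Rastrigin's many local basins; the occasional Gaussian kick restores diversity at negligible cost.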

  10. Multimodal emotional state recognition using sequence-dependent deep hierarchical features.

    PubMed

    Barros, Pablo; Jirak, Doreen; Weber, Cornelius; Wermter, Stefan

    2015-12-01

    Emotional state recognition has become an important topic for human-robot interaction in the past years. By determining emotion expressions, robots can identify important variables of human behavior and use these to communicate in a more human-like fashion and thereby extend the interaction possibilities. Human emotions are multimodal and spontaneous, which makes them hard for robots to recognize. Each modality has its own restrictions and constraints which, together with the non-structured behavior of spontaneous expressions, create several difficulties for the approaches present in the literature, which are based on several explicit feature extraction techniques and manual modality fusion. Our model uses a hierarchical feature representation to deal with spontaneous emotions, and learns how to integrate multiple modalities for non-verbal emotion recognition, making it suitable for use in an HRI scenario. Our experiments show that a significant improvement in recognition accuracy is achieved when we use hierarchical features and multimodal information, and our model improves the accuracy of state-of-the-art approaches from the 82.5% reported in the literature to 91.3% for a benchmark dataset on spontaneous emotion expressions. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Negative Differential Conductivity in an Interacting Quantum Gas.

    PubMed

    Labouvie, Ralf; Santra, Bodhaditya; Heun, Simon; Wimberger, Sandro; Ott, Herwig

    2015-07-31

    We report on the observation of negative differential conductivity (NDC) in a quantum transport device for neutral atoms employing a multimode tunneling junction. The system is realized with a Bose-Einstein condensate loaded in a one-dimensional optical lattice with high site occupancy. We induce an initial difference in chemical potential at one site by local atom removal. The ensuing transport dynamics are governed by the interplay between the tunneling coupling, the interaction energy, and intrinsic collisions, which turn the coherent coupling into a hopping process. The resulting current-voltage characteristics exhibit NDC, for which we identify atom number-dependent tunneling as a new microscopic mechanism. Our study opens new ways for the future implementation and control of complex neutral atom quantum circuits.

  12. The role of voice input for human-machine communication.

    PubMed Central

    Cohen, P R; Oviatt, S L

    1995-01-01

    Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent, real-time speech recognition and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology. PMID:7479803

  13. Trip generation data collection in urban areas.

    DOT National Transportation Integrated Search

    2014-09-01

    There is currently limited data on urban, multimodal trip generation at the individual site level. This lack of data limits the ability of transportation agencies to assess development impacts on the transportation system in urban and multimodal ...

  14. Modeling regional freight flow assignment through intermodal terminals

    DOT National Transportation Integrated Search

    2005-03-01

    An analytical model is developed to assign regional freight across a multimodal highway and railway network using geographic information systems. As part of the regional planning process, the model is an iterative procedure that assigns multimodal fr...

  15. Multimodal imaging of cutaneous wound tissue

    NASA Astrophysics Data System (ADS)

    Zhang, Shiwu; Gnyawali, Surya; Huang, Jiwei; Ren, Wenqi; Gordillo, Gayle; Sen, Chandan K.; Xu, Ronald

    2015-01-01

    Quantitative assessment of wound tissue ischemia, perfusion, and inflammation provides critical information for appropriate detection, staging, and treatment of chronic wounds. However, few methods are available for simultaneous assessment of these tissue parameters in a noninvasive and quantitative fashion. We integrated hyperspectral, laser speckle, and thermographic imaging modalities in a single-experimental setup for multimodal assessment of tissue oxygenation, perfusion, and inflammation characteristics. Algorithms were developed for appropriate coregistration between wound images acquired by different imaging modalities at different times. The multimodal wound imaging system was validated in an occlusion experiment, where oxygenation and perfusion maps of a healthy subject's upper extremity were continuously monitored during a postocclusive reactive hyperemia procedure and compared with standard measurements. The system was also tested in a clinical trial where a wound of three millimeters in diameter was introduced on a healthy subject's lower extremity and the healing process was continuously monitored. Our in vivo experiments demonstrated the clinical feasibility of multimodal cutaneous wound imaging.
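    A core building block this abstract mentions is coregistration between images acquired by different modalities. As a hedged sketch only (the paper's actual algorithms are not described in enough detail to reproduce, and the landmark coordinates below are invented), a least-squares 2-D affine transform can be fitted from matched landmarks, such as fiducial markers visible in both modalities:

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares 2-D affine transform mapping src_pts onto dst_pts.

    src_pts, dst_pts: (n, 2) lists of matched landmarks, n >= 3.
    Returns a 2x3 matrix M with dst ~= M @ [x, y, 1].
    """
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    A = np.hstack([src, np.ones((len(src), 1))])  # (n, 3) design matrix
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)   # solves A @ M = dst
    return M.T                                    # (2, 3)

def warp_points(M, pts):
    """Apply the affine transform M to an (n, 2) array of points."""
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]

# Invented landmarks: one modality scaled by 2 and shifted by (1, 3)
src = [[0, 0], [1, 0], [0, 1], [1, 1]]
dst = [[1, 3], [3, 3], [1, 5], [3, 5]]
M = fit_affine(src, dst)
aligned = warp_points(M, [[0.5, 0.5]])
```

    With such a transform in hand, one modality's pixel grid can be resampled into the other's coordinate frame before overlaying oxygenation, perfusion, and temperature maps.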

  16. Stepwise Connectivity of the Modal Cortex Reveals the Multimodal Organization of the Human Brain

    PubMed Central

    Sepulcre, Jorge; Sabuncu, Mert R.; Yeo, Thomas B.; Liu, Hesheng; Johnson, Keith A.

    2012-01-01

    How human beings integrate information from external sources and internal cognition to produce a coherent experience is still not well understood. During the past decades, anatomical, neurophysiological and neuroimaging research in multimodal integration has stood out in the effort to understand the perceptual binding properties of the brain. Areas in the human lateral occipito-temporal, prefrontal and posterior parietal cortices have been associated with sensory multimodal processing. Even though this, rather patchy, organization of brain regions gives us a glimpse of the perceptual convergence, the articulation of the flow of information from modality-related areas to the more parallel cognitive processing systems remains elusive. Using a method called Stepwise Functional Connectivity analysis, the present study analyzes the functional connectome and transitions from primary sensory cortices to higher-order brain systems. We identify the large-scale multimodal integration network and essential connectivity axes for perceptual integration in the human brain. PMID:22855814
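    The core computation in a stepwise connectivity analysis can be sketched as counting walks of increasing length from a set of seed regions through a binary connectivity graph, i.e. successive powers of the adjacency matrix. The following is a minimal illustration of that idea on a toy graph, not the authors' pipeline:

```python
import numpy as np

def stepwise_connectivity(adj, seeds, max_steps):
    """For each step count k = 1..max_steps, count the walks of length k
    from any seed node (e.g. a primary sensory region) to every node.

    adj: binary, symmetric adjacency matrix, shape (n, n).
    Returns a (max_steps, n) array; row k-1 is the step-k connectivity map.
    """
    n = adj.shape[0]
    power = np.eye(n, dtype=np.int64)
    maps = np.zeros((max_steps, n), dtype=np.int64)
    for k in range(max_steps):
        power = power @ adj              # now equals adj raised to (k+1)
        maps[k] = power[seeds].sum(axis=0)
    return maps

# Toy chain graph 0-1-2-3 with the seed at node 0
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
m = stepwise_connectivity(A, [0], 3)
```

    On real data the adjacency matrix would come from thresholded voxel-wise functional correlations, and the step-k maps show how seed connectivity propagates outward toward multimodal hub regions.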

  17. An integrated GIS-based data model for multimodal urban public transportation analysis and management

    NASA Astrophysics Data System (ADS)

    Chen, Shaopei; Tan, Jianjun; Ray, C.; Claramunt, C.; Sun, Qinqin

    2008-10-01

    Diversity is one of the main characteristics of transportation data collected from multiple sources or formats, which can be extremely complex and disparate. Moreover, these multimodal transportation data are usually characterised by spatial and temporal properties. Multimodal transportation network data modelling involves both an engineering and a research domain that has attracted the design of a number of spatio-temporal data models in geographic information systems (GIS). However, the application of these models to multimodal transportation networks is still a challenging task. This research addresses the challenge from both integrated multimodal data organization and object-oriented modelling perspectives, that is, how a complex urban transportation network should be organized, represented and modelled appropriately from a multimodal point of view using an object-oriented modelling method. We propose an integrated GIS-based data model for the multimodal urban transportation network that lays a foundation for enhanced multimodal transportation network analysis and management. The model organizes and integrates multimodal transit network data, and supports multiple representations of spatio-temporal objects and relationships as both visual and graphic views. It is expressed using a spatio-temporal object-oriented modelling method, i.e., the unified modelling language (UML) extended with spatial and temporal plug-ins for visual languages (PVLs), which provides essential support for spatio-temporal data modelling in transportation GIS.

  18. Multimode fiber optic wavelength division multiplexing

    NASA Technical Reports Server (NTRS)

    Spencer, J. L.

    1982-01-01

    Optical wavelength division multiplexing (WDM) systems, with signals transmitted on different wavelengths through a single optical fiber, can have increased bandwidth and fault isolation properties over single wavelength optical systems. Two WDM system designs that might be used with multimode fibers are considered and a general description of the components which could be used to implement the system are given. The components described are sources, multiplexers, demultiplexers, and detectors. Emphasis is given to the demultiplexer technique which is the major developmental component in the WDM system.

  19. Nonlinear dynamics and control of a vibrating rectangular plate

    NASA Technical Reports Server (NTRS)

    Shebalin, J. V.

    1983-01-01

    The von Karman equations of nonlinear elasticity are solved for the case of a vibrating rectangular plate by means of a Fourier spectral transform method. The amplification of a particular Fourier mode by nonlinear transfer of energy is demonstrated for this conservative system. The multi-mode system is reduced to a minimal (two-mode) system, retaining the qualitative features of the multi-mode system. The effect of a modal control law on the dynamics of this minimal nonlinear elastic system is examined.
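    A minimal two-mode conservative system of the kind described can be illustrated with a sketch (an invented stand-in with a simple quartic coupling potential, not the actual von Karman reduction): two oscillators exchange energy through a nonlinear term while total energy is conserved, which a Runge-Kutta integration should preserve to high accuracy.

```python
import numpy as np

# Two modes with frequencies W1, W2 coupled through the potential term
# 0.5 * EPS * x1**2 * x2**2 (illustrative, conservative by construction)
W1, W2, EPS = 1.0, 1.4, 0.5

def accel(x):
    x1, x2 = x
    return np.array([-W1**2 * x1 - EPS * x1 * x2**2,
                     -W2**2 * x2 - EPS * x1**2 * x2])

def energy(x, v):
    x1, x2 = x
    return (0.5 * (v @ v) + 0.5 * W1**2 * x1**2
            + 0.5 * W2**2 * x2**2 + 0.5 * EPS * x1**2 * x2**2)

def rk4_step(x, v, dt):
    """One classical 4th-order Runge-Kutta step for x'' = accel(x)."""
    k1x, k1v = v, accel(x)
    k2x, k2v = v + 0.5 * dt * k1v, accel(x + 0.5 * dt * k1x)
    k3x, k3v = v + 0.5 * dt * k2v, accel(x + 0.5 * dt * k2x)
    k4x, k4v = v + dt * k3v, accel(x + dt * k3x)
    x = x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
    v = v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return x, v

x, v = np.array([1.0, 0.05]), np.zeros(2)   # mode 1 excited, mode 2 small
e0 = energy(x, v)
for _ in range(2000):
    x, v = rk4_step(x, v, 0.01)
drift = abs(energy(x, v) - e0) / e0          # relative energy drift
```

    The conserved energy provides a built-in correctness check on the integration; a modal control law would add a feedback force to `accel`, deliberately breaking this conservation to damp a target mode.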

  20. FIBER AND INTEGRATED OPTICS: Efficiency of nonstationary transformation of the spatial coherence of pulsed laser radiation in a multimode optical fibre upon self-phase modulation

    NASA Astrophysics Data System (ADS)

    Kitsak, M. A.; Kitsak, A. I.

    2007-08-01

    A model scheme of the nonlinear mechanism of transformation (reduction) of the spatial coherence of a pulsed laser field in an extended multimode optical fibre upon nonstationary interaction with the fibre core is analysed theoretically. The case is considered in which the spatial statistics of the input radiation are caused by phase fluctuations. An analytic expression is obtained that relates the number of spatially coherent radiation modes to the spatial and energy parameters of the initial radiation and to the fibre parameters. The efficiency of decorrelation of radiation upon excitation of the thermal and electrostriction nonlinearities in the fibre is estimated. Experimental studies are performed which reveal the basic properties of the transformation of the spatial coherence of a laser beam in a multimode fibre. The experimental results are compared with the predictions of the radiation transfer model proposed in the paper. It is found that the spatial decorrelation of a light beam in a silica multimode fibre is mainly restricted by stimulated Raman scattering.

  1. Complementary Imaging of Silver Nanoparticle Interactions with Green Algae: Dark-Field Microscopy, Electron Microscopy, and Nanoscale Secondary Ion Mass Spectrometry.

    PubMed

    Sekine, Ryo; Moore, Katie L; Matzke, Marianne; Vallotton, Pascal; Jiang, Haibo; Hughes, Gareth M; Kirby, Jason K; Donner, Erica; Grovenor, Chris R M; Svendsen, Claus; Lombi, Enzo

    2017-11-28

    Increasing consumer use of engineered nanomaterials has led to significantly increased efforts to understand their potential impact on the environment and living organisms. Currently, no individual technique can provide all the necessary information such as their size, distribution, and chemistry in complex biological systems. Consequently, there is a need to develop complementary instrumental imaging approaches that provide enhanced understanding of these "bio-nano" interactions to overcome the limitations of individual techniques. Here we used a multimodal imaging approach incorporating dark-field light microscopy, high-resolution electron microscopy, and nanoscale secondary ion mass spectrometry (NanoSIMS). The aim was to gain insight into the bio-nano interactions of surface-functionalized silver nanoparticles (Ag-NPs) with the green algae Raphidocelis subcapitata, by combining the fidelity, spatial resolution, and elemental identification offered by the three techniques, respectively. Each technique revealed that Ag-NPs interact with the green algae with a dependence on the size (10 nm vs 60 nm) and surface functionality (tannic acid vs branched polyethylenimine, bPEI) of the NPs. Dark-field light microscopy revealed the presence of strong light scatterers on the algal cell surface, and SEM imaging confirmed their nanoparticulate nature and localization at nanoscale resolution. NanoSIMS imaging confirmed their chemical identity as Ag, with the majority of signal concentrated at the cell surface. Furthermore, SEM and NanoSIMS provided evidence of 10 nm bPEI Ag-NP internalization at higher concentrations (40 μg/L), correlating with the highest toxicity observed from these NPs. This multimodal approach thus demonstrated an effective approach to complement dose-response studies in nano-(eco)-toxicological investigations.

  2. Pace, Interactivity and Multimodality in Teachers' Design of Texts for Interactive Whiteboards in the Secondary School Classroom

    ERIC Educational Resources Information Center

    Jewitt, Carey; Moss, Gemma; Cardini, Alejandra

    2007-01-01

    Teachers making texts for use in the classroom is nothing new, it is an established aspect of pedagogic practice. The introduction of interactive whiteboards (IWBs) into UK secondary schools has, however, impacted on this practice in a number of ways. Changes in the site of design and display--from the printed page or worksheet and the blackboard…

  3. A strategic map for high-impact virtual experience design

    NASA Astrophysics Data System (ADS)

    Faste, Haakon; Bergamasco, Massimo

    2009-02-01

    We have employed methodologies of human-centered design to inspire and guide the engineering of a definitive low-cost aesthetic multimodal experience intended to stimulate cultural growth. Using a combination of design research, trend analysis and the programming of immersive virtual 3D worlds, over 250 innovative concepts have been brainstormed, prototyped, evaluated and refined. These concepts have been used to create a strategic map for the development of high-impact virtual art experiences, the most promising of which have been incorporated into a multimodal environment programmed in the online interactive 3D platform XVR. A group of test users have evaluated the experience as it has evolved, using a multimodal interface with stereo vision, 3D audio and haptic feedback. This paper discusses the process, content, results, and impact on our engineering laboratory that this research has produced.

  4. Method and apparatus for operating a powertrain system upon detecting a stuck-closed clutch

    DOEpatents

    Hansen, R. Anthony

    2014-02-18

    A powertrain system includes a multi-mode transmission having a plurality of torque machines. A method for controlling the powertrain system includes identifying all presently applied clutches including commanded applied clutches and the stuck-closed clutch upon detecting one of the torque-transfer clutches is in a stuck-closed condition. A closed-loop control system is employed to control operation of the multi-mode transmission accounting for all the presently applied clutches.

  5. Multimodal integration of anatomy and physiology classes: How instructors utilize multimodal teaching in their classrooms

    NASA Astrophysics Data System (ADS)

    McGraw, Gerald M., Jr.

    Multimodality is the theory of communication as it applies to social and educational semiotics (making meaning through the use of multiple signs and symbols). The term multimodality describes a communication methodology that includes multiple textual, aural, and visual applications (modes) that are woven together to create what is referred to as an artifact. Multimodal teaching methodology attempts to create deeper meaning for course content by activating the higher cognitive areas of the student's brain, creating more sustained retention of the information (Murray, 2009). The introduction of multimodal educational methodologies as a means to engage students more optimally has been documented within the educational literature. However, studies analyzing their distribution and penetration into the basic sciences, more specifically anatomy and physiology, have not been forthcoming. This study used a quantitative survey design to determine the degree to which instructors integrated multimodal teaching practices into their course curricula. The instrument used for the study was designed by the researcher based on evidence found in the literature and sent to members of three associations/societies for anatomy and physiology instructors: the Human Anatomy and Physiology Society; the iTeach Anatomy & Physiology Collaborate; and the American Physiology Society. Respondents totaled 182 instructor members of two- and four-year, private and public higher-learning colleges, drawn from the three organizations, which collectively have over 13,500 members in over 925 higher-learning institutions nationwide. The study concluded that the expansion of multimodal methodologies into anatomy and physiology classrooms is at the beginning of the process and that there is ample opportunity for expansion. Instructors continue to use lecture as their primary means of interaction with students. Email is still the major form of out-of-class communication for full-time instructors. Instructors with more than 16 years of experience teaching anatomy and physiology are less likely to use video or animation in their classrooms than instructors with fewer years.

  6. The Interactional Management of Claims of Insufficient Knowledge in English Language Classrooms

    ERIC Educational Resources Information Center

    Sert, Olcay; Walsh, Steve

    2013-01-01

    This paper primarily investigates the interactional unfolding and management of "claims of insufficient knowledge" (Beach and Metzger 1997) in two English language classrooms from a multi-modal, conversation-analytic perspective. The analyses draw on a close, micro-analytic account of sequential organisation of talk as well as on various…

  7. Knowledge Construction, Meaning-Making and Interaction in CLIL Science Classroom Communities of Practice

    ERIC Educational Resources Information Center

    Evnitskaya, Natalia; Morton, Tom

    2011-01-01

    This paper draws on Wenger's model of community of practice to present preliminary findings on how processes of negotiation of meaning and identity formation occur in knowledge construction, meaning-making and interaction in two secondary Content and Language Integrated Learning (CLIL) science classrooms. It uses a multimodal conversation analysis…

  8. Interacting with… What? Exploring Children's Social and Sensory Practices in a Science Discovery Centre

    ERIC Educational Resources Information Center

    Dicks, Bella

    2013-01-01

    This paper presents findings from a qualitative UK study exploring the social practices of schoolchildren visiting an interactive science discovery centre. It is promoted as a place for "learning through doing", but the multi-modal, ethnographic methods adopted suggest that children were primarily engaged in (1) sensory pleasure-taking…

  9. Managing Knowledge Claims in Classroom Discourse: The Public Construction of a Homogeneous Epistemic Status

    ERIC Educational Resources Information Center

    Heller, Vivien

    2017-01-01

    Drawing on sequential and multimodal analysis of video-recorded classroom interactions, the paper examines in detail the interactional practices and verbal and bodily displays that serve to (re-)establish a congruency between the teacher's expectation with regard to the participants' relative knowledge and the students' actual knowledge claims. By…

  10. Multimodal Interaction in Ambient Intelligence Environments Using Speech, Localization and Robotics

    ERIC Educational Resources Information Center

    Galatas, Georgios

    2013-01-01

    An Ambient Intelligence Environment is meant to sense and respond to the presence of people, using its embedded technology. In order to effectively sense the activities and intentions of its inhabitants, such an environment needs to utilize information captured from multiple sensors and modalities. By doing so, the interaction becomes more natural…

  11. Effects of Group Reflection Variations in Project-Based Learning Integrated in a Web 2.0 Learning Space

    ERIC Educational Resources Information Center

    Kim, Paul; Hong, Ji-Seong; Bonk, Curtis; Lim, Gloria

    2011-01-01

    A Web 2.0 environment that is coupled with emerging multimodal interaction tools can have considerable influence on team learning outcomes. Today, technologies supporting social networking, collective intelligence, emotional interaction, and virtual communication are introducing new forms of collaboration that are profoundly impacting education…

  12. Integration of Fiber-Optic Sensor Arrays into a Multi-Modal Tactile Sensor Processing System for Robotic End-Effectors

    PubMed Central

    Kampmann, Peter; Kirchner, Frank

    2014-01-01

    With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. We motivate the use of a multi-modal tactile sensory system that combines static and dynamic force sensor arrays with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optic measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach. PMID:24743158

  13. A gantry-based tri-modality system for bioluminescence tomography

    PubMed Central

    Yan, Han; Lin, Yuting; Barber, William C.; Unlu, Mehmet Burcin; Gulsen, Gultekin

    2012-01-01

    A gantry-based tri-modality system that combines bioluminescence tomography (BLT), diffuse optical tomography (DOT), and x-ray computed tomography (XCT) in the same setting is presented here. The purpose of this system is to perform bioluminescence tomography using a multi-modality imaging approach. As parts of this hybrid system, XCT and DOT provide anatomical information and background optical property maps. This structural and functional a priori information is used to guide and constrain the bioluminescence reconstruction algorithm and ultimately improve the BLT results. The performance of the combined system is evaluated using multi-modality phantoms. In particular, a cylindrical heterogeneous multi-modality phantom that contains regions with higher optical absorption and x-ray attenuation is constructed. We showed that a 1.5 mm diameter bioluminescence inclusion can be localized accurately with the functional a priori information, while its source strength can be recovered more accurately using both the structural and functional a priori information. PMID:22559540

  14. ITOHealth: a multimodal middleware-oriented integrated architecture for discovering medical entities.

    PubMed

    Alor-Hernández, Giner; Sánchez-Cervantes, José Luis; Juárez-Martínez, Ulises; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Aguilar-Laserre, Alberto

    2012-03-01

    Emergency healthcare is one of the emerging application domains for information services, and one that requires highly multimodal information services. The time consumed by the pre-hospital emergency process is critical; therefore, minimizing the time required to provide primary care and consultation to patients is one of the crucial factors in improving healthcare delivery in emergency situations. In this sense, dynamically locating medical entities is a complex process that takes time, and that time can be critical when a person requires medical attention. This work presents ITOHealth, a multimodal location-based system for locating and assigning medical entities. ITOHealth provides a multimodal middleware-oriented integrated architecture built on a service-oriented architecture in order to provide information about medical entities on mobile devices and in web browsers, with enriched interfaces providing multimodality support. ITOHealth's multimodality is based on the use of Microsoft Agent Characters, the integration of natural-language voice into the characters, and multi-language and multi-character support, which benefits users with visual impairments.

  15. Calibration for single multi-mode fiber digital scanning microscopy imaging system

    NASA Astrophysics Data System (ADS)

    Yin, Zhe; Liu, Guodong; Liu, Bingguo; Gan, Yu; Zhuang, Zhitao; Chen, Fengdong

    2015-11-01

    The single multimode fiber (MMF) digital scanning imaging system is a development trend for the modern endoscope. We concentrate on the calibration method of the imaging system. Calibration comprises two processes: forming scanning focused spots and calibrating the couple factors, which vary with position. An adaptive parallel coordinate (APC) algorithm is adopted to form the focused spots at the MMF output. Compared with other algorithms, APC has several merits: high speed, a small amount of calculation, and no iterations. The ratio of the optical power captured by the MMF to the intensity of the focused spot is called the couple factor. We set up a calibration experimental system to form the scanning focused spots and calculate the couple factors for different object positions. The experimental results show that the couple factor is higher in the center than at the edge.
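    The couple-factor calibration described here amounts to a per-position ratio followed by a division of the detected signal; a toy sketch of that bookkeeping (all numbers invented, purely illustrative) might look like:

```python
# Toy couple-factor calibration for a scanning-spot imaging system.
# All numbers are illustrative, not measured values from the paper.
def couple_factors(captured_power, spot_intensity):
    """Couple factor per scan position: captured power / focused-spot intensity."""
    return [p / i for p, i in zip(captured_power, spot_intensity)]

def correct_image(raw_signal, factors):
    """Divide the raw detected signal by the couple factor at each position."""
    return [s / c for s, c in zip(raw_signal, factors)]

# Hypothetical 1-D scan: the centre couples better than the edges.
power     = [0.2, 0.45, 0.5, 0.45, 0.2]
intensity = [1.0, 1.0, 1.0, 1.0, 1.0]
factors   = couple_factors(power, intensity)
flat      = correct_image(power, factors)   # a uniform target becomes flat
```

    After correction, position-dependent coupling no longer masquerades as image contrast, which is the point of calibrating the factors per object position.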

  16. Radiation patterns of multimode feed-horn-coupled bolometers for FAR-IR space applications

    NASA Astrophysics Data System (ADS)

    Kalinauskaite, Eimante; Murphy, J. Anthony; McAuley, Ian; Trappe, Neal A.; McCarthy, Darragh N.; Bracken, Colm P.; Doherty, Stephen; Gradziel, Marcin L.; O'Sullivan, Créidhe; Wilson, Daniel; Peacocke, Tully; Maffei, Bruno; Lamarre, Jean-Michel; Ade, Peter A. R.; Savini, Giorgio

    2017-02-01

    A multimode horn differs from a single-mode horn in that it has a larger waveguide feeding it. Multimode horns can therefore be utilized as high-efficiency feeds for bolometric detectors, providing increased throughput and sensitivity over single-mode feeds, while also ensuring good control of the beam pattern characteristics. Although a cavity-mounted bolometer can be modelled as a perfect black-body radiator (using reciprocity to calculate beam patterns), this is an approximation. In this paper we present how the approach can be improved to actually include the cavity-coupled bolometer, now modelled as a thin absorbing film. Generally, this is a big challenge for finite-element software, in that the structures are typically electrically large. However, the radiation pattern of multimode horns can be more efficiently simulated using mode matching, typically with smooth-walled waveguide modes as the basis, computing an overall scattering matrix for the horn-waveguide-cavity system. Another issue affecting the optical efficiency of the detectors is the presence of any free-space gaps through which power can escape; this is best dealt with by treating the system as an absorber. Appropriate reflection and transmission matrices can be determined for the cavity using the natural eigenfields of the bolometer-cavity system. We discuss how the approach can be applied to proposed terahertz systems, and also present results on how it was applied to improve beam pattern predictions on the sky for the multi-mode HFI 857 GHz channel on Planck.
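    The mode-matching approach mentioned above assembles an overall scattering matrix by cascading the scattering matrices of successive sections. A minimal sketch of one cascade step, the Redheffer star product, is given below (the block structure is generic; the 1x1 example values are invented):

```python
import numpy as np

def cascade(Sa, Sb):
    """Cascade two scattering matrices (Redheffer star product).

    Each S is a dict with square ndarray blocks S11, S12, S21, S22.
    Network A is followed by network B: ports 1 face A's input,
    ports 2 face B's output.
    """
    I = np.eye(Sa["S22"].shape[0])
    inv1 = np.linalg.inv(I - Sb["S11"] @ Sa["S22"])
    inv2 = np.linalg.inv(I - Sa["S22"] @ Sb["S11"])
    return {
        "S11": Sa["S11"] + Sa["S12"] @ inv1 @ Sb["S11"] @ Sa["S21"],
        "S12": Sa["S12"] @ inv1 @ Sb["S12"],
        "S21": Sb["S21"] @ inv2 @ Sa["S21"],
        "S22": Sb["S22"] + Sb["S21"] @ Sa["S22"] @ inv1 @ Sb["S12"],
    }

# Two matched, reflectionless sections: total transmission is the product
A = {k: np.array([[v]]) for k, v in
     {"S11": 0, "S12": 0.9j, "S21": 0.9j, "S22": 0}.items()}
B = {k: np.array([[v]]) for k, v in
     {"S11": 0, "S12": 0.8, "S21": 0.8, "S22": 0}.items()}
S = cascade(A, B)
```

    In a real mode-matching code each block is an N x N matrix over the retained waveguide modes, and repeated application of `cascade` along the horn profile yields the overall horn-waveguide-cavity scattering matrix.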

  17. Bodily Explorations in Space: Social Experience of a Multimodal Art Installation

    NASA Astrophysics Data System (ADS)

    Jacucci, Giulio; Spagnolli, Anna; Chalambalakis, Alessandro; Morrison, Ann; Liikkanen, Lassi; Roveda, Stefano; Bertoncini, Massimo

    We contribute an extensive field study of a public interactive art installation that applies multimodal interface technologies. The installation is part of a theatre production on Galileo Galilei and includes projected galaxies that are generated and move according to the motion of visitors, changing colour depending on their voices, and projected stars that configure themselves around the shadows of visitors. In the study we employ emotion scales (PANAS), qualitative analysis of questionnaire answers, and video recordings. PANAS ratings indicate dominantly positive feelings, further described in the subjective verbalizations as gravitating around interest, ludic pleasure and transport. Through the video analysis, we identified three phases in the interaction with the artwork (circumspection, testing, play) and two pervasive features of these phases (experience sharing and imitation), which were also found in the verbalizations. Both the video and the verbalisations suggest that visitors' experience and ludic pleasure are rooted in embodied, performative interaction with the installation and are negotiated with the other visitors.

  18. Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems.

    PubMed

    Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne

    2016-03-01

    One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers' visual and manual distractions with 'infotainment' technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual-manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox 'one-shot' voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory-vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers' interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation.

  19. Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems

    PubMed Central

    Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne

    2016-01-01

    Abstract One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers’ visual and manual distractions with ‘infotainment’ technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual–manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox ‘one-shot’ voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory–vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers’ interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation. PMID:26269281

  20. Simultaneous multimodal ophthalmic imaging using swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography

    PubMed Central

    Malone, Joseph D.; El-Haddad, Mohamed T.; Bozic, Ivan; Tye, Logan A.; Majeau, Lucas; Godbout, Nicolas; Rollins, Andrew M.; Boudoux, Caroline; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2016-01-01

    Scanning laser ophthalmoscopy (SLO) benefits diagnostic imaging and therapeutic guidance by allowing for high-speed en face imaging of retinal structures. When combined with optical coherence tomography (OCT), SLO enables real-time aiming and retinal tracking and provides complementary information for post-acquisition volumetric co-registration, bulk motion compensation, and averaging. However, multimodality SLO-OCT systems generally require dedicated light sources, scanners, relay optics, detectors, and additional digitization and synchronization electronics, which increase system complexity. Here, we present a multimodal ophthalmic imaging system using swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography (SS-SESLO-OCT) for in vivo human retinal imaging. SESLO reduces the complexity of en face imaging systems by multiplexing spatial positions as a function of wavelength. SESLO image quality benefited from single-mode illumination and multimode collection through a prototype double-clad fiber coupler, which optimized scattered light throughput and reduced speckle contrast while maintaining lateral resolution. Using a shared 1060 nm swept-source, shared scanner and imaging optics, and a shared dual-channel high-speed digitizer, we acquired inherently co-registered en face retinal images and OCT cross-sections simultaneously at 200 frames-per-second. PMID:28101411

  1. Multimodal optoacoustic and multiphoton fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sela, Gali; Razansky, Daniel; Shoham, Shy

    2013-03-01

    Multiphoton microscopy is a powerful imaging modality that enables structural and functional imaging with cellular and sub-cellular resolution, deep within biological tissues. Yet, its main contrast mechanism relies on extrinsically administered fluorescent indicators. Here we developed a system for simultaneous multimodal optoacoustic and multiphoton fluorescence 3D imaging, which attains both absorption- and fluorescence-based contrast by integrating an ultrasonic transducer into a two-photon laser scanning microscope. The system is readily shown to enable acquisition of multimodal microscopic images of fluorescently labeled targets and cell cultures as well as intrinsic absorption-based images of pigmented biological tissue. During initial experiments, it was further observed that the detected optoacoustically induced response contains low-frequency signal variations, presumably due to cavitation-mediated signal generation by the high repetition rate (80 MHz) near-IR femtosecond laser. The multimodal system may provide complementary structural and functional information on the fluorescently labeled tissue by superimposing optoacoustic images of intrinsic tissue chromophores, such as melanin deposits, pigmentation, and hemoglobin, or of extrinsic particle- or dye-based markers highly absorptive in the NIR spectrum.

  2. MARA (Multimode Airborne Radar Altimeter) system documentation. Volume 1: MARA system requirements document

    NASA Technical Reports Server (NTRS)

    Parsons, C. L. (Editor)

    1989-01-01

    The Multimode Airborne Radar Altimeter (MARA), a flexible airborne radar remote sensing facility developed by NASA's Goddard Space Flight Center, is discussed. This volume describes the scientific justification for the development of the instrument and the translation of these scientific requirements into instrument design goals. Values for key instrument parameters are derived to accommodate these goals, and simulations and analytical models are used to estimate the developed system's performance.

  3. Coding gestural behavior with the NEUROGES--ELAN system.

    PubMed

    Lausberg, Hedda; Sloetjes, Han

    2009-08-01

    We present a coding system combined with an annotation tool for the analysis of gestural behavior. The NEUROGES coding system consists of three modules that progress from gesture kinetics to gesture function. Grounded on empirical neuropsychological and psychological studies, the theoretical assumption behind NEUROGES is that its main kinetic and functional movement categories are differentially associated with specific cognitive, emotional, and interactive functions. ELAN is a free, multimodal annotation tool for digital audio and video media. It supports multileveled transcription and complies with such standards as XML and Unicode. ELAN allows gesture categories to be stored with associated vocabularies that are reusable by means of template files. The combination of the NEUROGES coding system and the annotation tool ELAN creates an effective tool for empirical research on gestural behavior.

  4. Combined multi-modal photoacoustic tomography, optical coherence tomography (OCT) and OCT angiography system with an articulated probe for in vivo human skin structure and vasculature imaging

    PubMed Central

    Liu, Mengyang; Chen, Zhe; Zabihian, Behrooz; Sinz, Christoph; Zhang, Edward; Beard, Paul C.; Ginner, Laurin; Hoover, Erich; Minneman, Micheal P.; Leitgeb, Rainer A.; Kittler, Harald; Drexler, Wolfgang

    2016-01-01

    Cutaneous blood flow accounts for approximately 5% of cardiac output in humans and plays a key role in a number of physiological and pathological processes. We show for the first time a multi-modal photoacoustic tomography (PAT), optical coherence tomography (OCT) and OCT angiography system with an articulated probe to extract human cutaneous vasculature in vivo in various skin regions. OCT angiography supplies the microvasculature which PAT alone is unable to provide. The co-registered vessel network volumes are further embedded in the morphologic image provided by OCT. This multi-modal system is therefore demonstrated as a valuable tool for comprehensive non-invasive human skin vasculature and morphology imaging in vivo. PMID:27699106

  5. Data fusion algorithm for rapid multi-mode dust concentration measurement system based on MEMS

    NASA Astrophysics Data System (ADS)

    Liao, Maohao; Lou, Wenzhong; Wang, Jinkui; Zhang, Yan

    2018-03-01

    As a single measurement method cannot fully meet the technical requirements of dust concentration measurement, a multi-mode detection method is put forward, along with new requirements for data processing. This paper presents a new dust concentration measurement system which contains a MEMS ultrasonic sensor and a MEMS capacitance sensor, and presents a new data fusion algorithm for this multi-mode dust concentration measurement system. After analyzing the relation between the data of the composite measurement methods, a data fusion algorithm based on Kalman filtering is established, which effectively improves the measurement accuracy and ultimately forms a rapid data fusion model of dust concentration measurement. Test results show that the data fusion algorithm is able to realize rapid and accurate concentration detection.
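
    The abstract does not spell out the fusion algorithm itself. As a minimal sketch of the underlying idea, a scalar Kalman measurement update can fuse two noisy concentration readings, weighting each by an assumed noise variance; the sensor noise values `r_us` and `r_cap` below are hypothetical, not from the paper.

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update.

    x, p : prior estimate and its variance
    z, r : new measurement and its noise variance
    """
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)      # corrected estimate
    p = (1.0 - k) * p        # corrected variance
    return x, p

def fuse_dust_readings(ultrasonic, capacitive, r_us=4.0, r_cap=1.0):
    """Fuse two simultaneous dust-concentration readings (mg/m^3)."""
    x, p = ultrasonic, r_us                    # initialise from the first sensor
    x, p = kalman_update(x, p, capacitive, r_cap)
    return x, p

x, p = fuse_dust_readings(52.0, 49.0)
# the fused estimate lies between the readings, pulled toward the
# lower-noise capacitive sensor, and its variance shrinks below either r
```

    With the assumed variances, the fused value is 49.6 with variance 0.8; in a real system the update would run recursively over the measurement stream.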

  6. Developing effective serious games: the effect of background sound on visual fidelity perception with varying texture resolution.

    PubMed

    Rojas, David; Kapralos, Bill; Cristancho, Sayra; Collins, Karen; Hogue, Andrew; Conati, Cristina; Dubrowski, Adam

    2012-01-01

    Despite the benefits associated with virtual learning environments and serious games, there are open, fundamental issues regarding simulation fidelity and multi-modal cue interaction and their effect on immersion, transfer of knowledge, and retention. Here we describe the results of a study that examined the effect of ambient (background) sound on the perception of visual fidelity (defined with respect to texture resolution). Results suggest that the perception of visual fidelity is dependent on ambient sound and more specifically, white noise can have detrimental effects on our perception of high quality visuals. The results of this study will guide future studies that will ultimately aid in developing an understanding of the role that fidelity, and multi-modal interactions play with respect to knowledge transfer and retention for users of virtual simulations and serious games.

  7. SALSA: A Novel Dataset for Multimodal Group Behavior Analysis.

    PubMed

    Alameda-Pineda, Xavier; Staiano, Jacopo; Subramanian, Ramanathan; Batrinca, Ligia; Ricci, Elisa; Lepri, Bruno; Lanz, Oswald; Sebe, Nicu

    2016-08-01

    Studying free-standing conversational groups (FCGs) in unstructured social settings (e.g., a cocktail party) is gratifying due to the wealth of information available at the group (mining social networks) and individual (recognizing native behavioral and personality traits) levels. However, analyzing social scenes involving FCGs is also highly challenging due to the difficulty in extracting behavioral cues such as target locations, their speaking activity and head/body pose, owing to crowdedness and the presence of extreme occlusions. To this end, we propose SALSA, a novel dataset facilitating multimodal and Synergetic sociAL Scene Analysis, and make two main contributions to research on automated social interaction analysis: (1) SALSA records social interactions among 18 participants in a natural, indoor environment for over 60 minutes, under poster presentation and cocktail party contexts presenting difficulties in the form of low-resolution images, lighting variations, numerous occlusions, reverberations and interfering sound sources; (2) to alleviate these problems we facilitate multimodal analysis by recording the social interplay using four static surveillance cameras and sociometric badges worn by each participant, comprising microphone, accelerometer, Bluetooth and infrared sensors. In addition to raw data, we also provide annotations concerning individuals' personality as well as their position, head, body orientation and F-formation information over the entire event duration. Through extensive experiments with state-of-the-art approaches, we show (a) the limitations of current methods and (b) how the recorded multiple cues synergetically aid automatic analysis of social interactions. SALSA is available at http://tev.fbk.eu/salsa.

  8. Multi-Modal Use of a Socially Directed Call in Bonobos

    PubMed Central

    Genty, Emilie; Clay, Zanna; Hobaiter, Catherine; Zuberbühler, Klaus

    2014-01-01

    ‘Contest hoots’ are acoustically complex vocalisations produced by adult and subadult male bonobos (Pan paniscus). These calls are often directed at specific individuals and regularly combined with gestures and other body signals. The aim of our study was to describe the multi-modal use of this call type and to clarify its communicative and social function. To this end, we observed two large groups of bonobos, which generated a sample of 585 communicative interactions initiated by 10 different males. We found that contest hooting, with or without other associated signals, was produced to challenge and provoke a social reaction in the targeted individual, usually agonistic chase. Interestingly, ‘contest hoots’ were sometimes also used during friendly play. In both contexts, males were highly selective in whom they targeted by preferentially choosing individuals of equal or higher social rank, suggesting that the calls functioned to assert social status. Multi-modal sequences were not more successful in eliciting reactions than contest hoots given alone, but we found a significant difference in the choice of associated gestures between playful and agonistic contexts. During friendly play, contest hoots were significantly more often combined with soft than rough gestures compared to agonistic challenges, while the calls' acoustic structure remained the same. We conclude that contest hoots indicate the signaller's intention to interact socially with important group members, while the gestures provide additional cues concerning the nature of the desired interaction. PMID:24454745

  9. Multimode waveguide speckle patterns for compressive sensing.

    PubMed

    Valley, George C; Sefler, George A; Justin Shaw, T

    2016-06-01

    Compressive sensing (CS) of sparse gigahertz-band RF signals using microwave photonics may achieve better performances with smaller size, weight, and power than electronic CS or conventional Nyquist rate sampling. The critical element in a CS system is the device that produces the CS measurement matrix (MM). We show that passive speckle patterns in multimode waveguides potentially provide excellent MMs for CS. We measure and calculate the MM for a multimode fiber and perform simulations using this MM in a CS system. We show that the speckle MM exhibits the sharp phase transition and coherence properties needed for CS and that these properties are similar to those of a sub-Gaussian MM with the same mean and standard deviation. We calculate the MM for a multimode planar waveguide and find dimensions of the planar guide that give a speckle MM with a performance similar to that of the multimode fiber. The CS simulations show that all measured and calculated speckle MMs exhibit a robust performance with equal amplitude signals that are sparse in time, in frequency, and in wavelets (Haar wavelet transform). The planar waveguide results indicate a path to a microwave photonic integrated circuit for measuring sparse gigahertz-band RF signals using CS.
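
    The reconstruction side of a CS system like this can be sketched briefly. The snippet below, assuming a random Gaussian matrix as a stand-in for the measured speckle measurement matrix (the real MM would be substituted for `A`), recovers a sparse signal from compressive samples with a plain Orthogonal Matching Pursuit loop; none of the names come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # orthogonal to chosen columns
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

n, m, k = 256, 64, 4                        # signal length, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)    # stand-in for the speckle MM
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = A @ x                                   # compressive measurements
x_hat = omp(A, y, k)                        # near-exact recovery for small k
```

    A sub-Gaussian matrix with m on the order of k·log(n/k) rows suffices for recovery with high probability, which is the regime the abstract's phase-transition comparison refers to.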

  10. Applications of Elpasolites as a Multimode Radiation Sensor

    NASA Astrophysics Data System (ADS)

    Guckes, Amber

    This study consists of both computational and experimental investigations. The computational results enabled detector design selections and confirmed experimental results. The experimental results determined that the CLYC scintillation detector can be applied as a functional and field-deployable multimode radiation sensor. The computational study utilized the MCNP6 code to investigate the response of CLYC to various incident radiations and to determine the feasibility of its application as a handheld multimode sensor and as a single-scintillator collimated directional detection system. These simulations include: • characterization of the response of the CLYC scintillator to gamma rays and neutrons; • study of the isotopic enrichment of 7Li versus 6Li in the CLYC for optimal detection of both thermal neutrons and fast neutrons; • analysis of collimator designs to determine the optimal collimator for the single-CLYC-sensor directional detection system to assay gamma rays and neutrons; • simulations of a handheld CLYC multimode sensor and a single CLYC scintillator collimated directional detection system with the optimized collimator to determine the feasibility of detecting nuclear materials that could be encountered during field operations. These nuclear materials include depleted uranium, natural uranium, low-enriched uranium, highly-enriched uranium, reactor-grade plutonium, and weapons-grade plutonium. The experimental study includes the design, construction, and testing of both a handheld CLYC multimode sensor and a single CLYC scintillator collimated directional detection system. Both were designed in the Inventor CAD software, based on the results of the computational study, to optimize their performance. The handheld CLYC multimode sensor is modular, scalable, low-power, and optimized for high count rates. Commercial-off-the-shelf components were used where possible in order to optimize size, increase robustness, and minimize cost. The handheld CLYC multimode sensor was successfully tested to confirm its ability for gamma-ray and neutron detection, and gamma-ray and neutron spectroscopy. The sensor utilizes wireless data transfer for possible radiation mapping and network-centric deployment. The handheld multimode sensor was tested by performing laboratory measurements with various gamma-ray sources and neutron sources. The single CLYC scintillator collimated directional detection system is portable, robust, and capable of source localization and identification. The collimator was designed based on the results of the computational study and is constructed with high-density polyethylene (HDPE) and lead (Pb). The collimator design and construction allow for the directional detection of gamma rays and fast neutrons utilizing only one scintillator, which is interchangeable. For this study, a CLYC-7 scintillator was used. The collimated directional detection system was tested by performing laboratory directional measurements with various gamma-ray sources, 252Cf, and a 239PuBe source.

  11. Content-based TV sports video retrieval using multimodal analysis

    NASA Astrophysics Data System (ADS)

    Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru

    2003-09-01

    In this paper, we propose content-based video retrieval, which is a kind of retrieval by its semantical contents. Because video data is composed of multimodal information streams such as video, auditory and textual streams, we describe a strategy of using multimodal analysis for automatic parsing sports video. The paper first defines the basic structure of sports video database system, and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing and text extraction to realize video retrieval. The experimental results for TV sports video of football games indicate that the multimodal analysis is effective for video retrieval by quickly browsing tree-like video clips or inputting keywords within predefined domain.

  12. Angle selective fiber coupler.

    PubMed

    Barnoski, M K; Morrison, R J

    1976-01-01

    Angle selective input coupling through the side of a slightly tapered section of Corning highly multimode fiber has been experimentally demonstrated for the first time. This coupling technique allows the possibility of fabricating bidirectional (duplex) couplers for systems employing single strands of multimode, low loss fiber.

  13. Efficient implementation of arbitrary quantum state engineering in four-state system by counterdiabatic driving

    NASA Astrophysics Data System (ADS)

    Wang, Song-Bai; Chen, Ye-Hong; Wu, Qi-Cheng; Shi, Zhi-Cheng; Huang, Bi-Hua; Song, Jie; Xia, Yan

    2018-07-01

    A scheme is proposed to implement quantum state engineering (QSE) in a four-state system via counterdiabatic driving. In the scheme, single- and multi-mode driving methods are used respectively to drive the system to a target state at a predefined time. It is found that fast QSE can be realized by utilizing simply designed pulses. In addition, a beneficial discussion of the energy consumption of the single- and multi-mode driving protocols shows that the multi-mode driving method seems to have a wider range of applications than the single-mode driving method with respect to different parameters. Finally, the scheme is also helpful for implementing generalized QSE in high-dimensional systems via the concept of a dressed state. The scheme can be implemented with present experimental technology, which is useful in quantum information processing.

  14. Design and applications of a multimodality image data warehouse framework.

    PubMed

    Wong, Stephen T C; Hoo, Kent Soo; Knowlton, Robert C; Laxer, Kenneth D; Cao, Xinhau; Hawkins, Randall A; Dillon, William P; Arenson, Ronald L

    2002-01-01

    A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications--namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains.

  15. Design and Applications of a Multimodality Image Data Warehouse Framework

    PubMed Central

    Wong, Stephen T.C.; Hoo, Kent Soo; Knowlton, Robert C.; Laxer, Kenneth D.; Cao, Xinhau; Hawkins, Randall A.; Dillon, William P.; Arenson, Ronald L.

    2002-01-01

    A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications—namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains. PMID:11971885

  16. The Effects of Instructor-Avatar Immediacy in Second Life, an Immersive and Interactive Three-Dimensional Virtual Environment

    ERIC Educational Resources Information Center

    Lawless-Reljic, Sabine Karine

    2010-01-01

    Growing interest of educational institutions in desktop 3D graphic virtual environments for hybrid and distance education prompts questions on the efficacy of such tools. Virtual worlds, such as Second Life[R], enable computer-mediated immersion and interactions encompassing multimodal communication channels including audio, video, and text…

  17. Effects of Multimodal Instruction on Personal Finance Skills for High School Students

    ERIC Educational Resources Information Center

    Dyer, Steffani P.; Lambeth, Dawn T.; Martin, Ellice P.

    2016-01-01

    The purpose of the current research study was to compare the use of interactive instruction to direct instruction on the acquisition of personal finance skills for high school students. Participants were 45 high school seniors who were divided into a Traditional and an Interactive Instruction group. The 9-week research study measured the impact…

  18. Asteroseismology of White Dwarf Stars

    NASA Technical Reports Server (NTRS)

    Hansen, Carl J.

    1997-01-01

    The primary purpose of this investigation has been to study various aspects of multimode pulsations in variable white dwarfs. In particular, nonlinear interactions among pulsation modes in white dwarfs (and, to some extent, in other variable stars), analysis of recent observations where such interactions are important, and preliminary work on the effects of crystallization in cool white dwarfs are reported.

  19. Mandarin Students' Perceptions of Multimodal Interaction in a Web Conferencing Environment: A Satisfaction Survey

    ERIC Educational Resources Information Center

    Tseng, Jun-Jie

    2015-01-01

    A major indicator of whether online courses have been effective and successful is student satisfaction. Copious research points to lack of interaction as the most cited reason for student dissatisfaction. To improve this problem, new Computer-Mediated Communication (CMC) technology could be considered as an option to enhance the online learning…

  20. State-of-the-art survey of multimode fiber optic wavelength division multiplexing

    NASA Astrophysics Data System (ADS)

    Spencer, J. L.

    1983-05-01

    Optical wavelength division multiplexing (WDM) systems, with signals transmitted on different wavelengths through a single fiber, can have increased information capacity and fault isolation properties over single wavelength optical systems. This paper describes a typical WDM system. Also, a state-of-the-art survey of optical multimode components which could be used to implement the system is made. The components to be surveyed are sources, multiplexers, and detectors. Emphasis is given to the demultiplexer techniques which are the major development components in the WDM system.

  1. Evidence-based development and first usability testing of a social serious game based multi-modal system for early screening for atypical socio-cognitive development.

    PubMed

    Gyori, Miklos; Borsos, Zsófia; Stefanik, Krisztina

    2015-01-01

    At present, screening for, and diagnosis of, autism spectrum disorders (ASD) are based on purely behavioral data; established screening tools rely on human observation and ratings of relevant behaviors. The research and development project in the focus of this paper is aimed at designing, creating and evaluating a social serious game based multi-modal, interactive software system for screening for high-functioning cases of ASD at kindergarten age. The aims of this paper are (1) to summarize the evidence-based design process and (2) to present results from the first usability test of the system. Game topic, candidate responses, and candidate game contents were identified via an iterative literature review. On this basis, the first partial prototype of the fully playable game has been created, with complete data recording functionality but without the decision-making component. A first usability test was carried out on this prototype (n=13). Overall results were unambiguously promising. Although sporadic difficulties in, and slightly negative attitudes towards, using the game occasionally arose, these were confined to non-target-group children only. The next steps of development include (1) completing the game design; (2) carrying out the first large-n field test; and (3) creating the first prototype of the decision-making component.

  2. Application of a multicompartment dynamical model to multimodal optical imaging for investigating individual cerebrovascular properties

    NASA Astrophysics Data System (ADS)

    Desjardins, Michèle; Gagnon, Louis; Gauthier, Claudine; Hoge, Rick D.; Dehaes, Mathieu; Desjardins-Crépeau, Laurence; Bherer, Louis; Lesage, Frédéric

    2009-02-01

    Biophysical models of hemodynamics provide a tool for quantitative multimodal brain imaging by allowing a deeper understanding of the interplay between neural activity and blood oxygenation, volume and flow responses to stimuli. Multicompartment dynamical models that describe the dynamics and interactions of the vascular and metabolic components of evoked hemodynamic responses have been developed in the literature. In this work, multimodal data using near-infrared spectroscopy (NIRS) and diffuse correlation flowmetry (DCF) is used to estimate total baseline hemoglobin concentration (HBT0) in 7 adult subjects. A validation of the model estimate and investigation of the partial volume effect is done by comparing with time-resolved spectroscopy (TRS) measures of absolute HBT0. Simultaneous NIRS and DCF measurements during hypercapnia are then performed, but are found to be hardly reproducible. The results raise questions about the feasibility of an all-optical model-based estimation of individual vascular properties.

  3. Multimodal targeted high relaxivity thermosensitive liposome for in vivo imaging

    NASA Astrophysics Data System (ADS)

    Kuijten, Maayke M. P.; Hannah Degeling, M.; Chen, John W.; Wojtkiewicz, Gregory; Waterman, Peter; Weissleder, Ralph; Azzi, Jamil; Nicolay, Klaas; Tannous, Bakhos A.

    2015-11-01

    Liposomes are spherical, self-closed structures formed by lipid bilayers that can encapsulate drugs and/or imaging agents in their hydrophilic core or within their membrane moiety, making them suitable delivery vehicles. We have synthesized a new liposome containing a gadolinium-DOTA lipid bilayer as a targeting multimodal molecular imaging agent for magnetic resonance and optical imaging. We showed that this liposome has much higher molar relaxivities r1 and r2 than a more conventional liposome containing gadolinium-DTPA-BSA lipid. By incorporating both gadolinium and rhodamine in the lipid bilayer, as well as biotin on its surface, we used this agent for multimodal imaging and targeting of tumors through the strong biotin-streptavidin interaction. Since this new liposome is thermosensitive, it can be used for ultrasound-mediated drug delivery at specific sites, such as tumors, and can be guided by magnetic resonance imaging.

  4. 3D hierarchical spatial representation and memory of multimodal sensory data

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Dow, Paul A.; Huber, David J.

    2009-04-01

    This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory data. More specifically, it describes a computational method and system for how to process and remember multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and memory is based on a biologically-inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) A simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinate system, body-centered coordinate, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, location of auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of it from a spatial perspective (e.g., where is the sensory information coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory, etc.) can map onto the hierarchy at different levels. 
When controlling various machine/robot degrees of freedom, the desired movements and action can be computed from these different levels in the hierarchy. The most basic embodiment of this machine could be a pan-tilt camera system, an array of microphones, a machine with arm/hand like structure or/and a robot with some or all of the above capabilities. We describe the approach, system and present preliminary results on a real-robotic platform.
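
    A concrete ingredient of such a hierarchy is the conversion between reference frames. A minimal sketch, assuming the head pose relative to the body is just a yaw rotation plus a fixed offset (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def rot_z(yaw):
    """Rotation matrix for a yaw angle about the vertical z axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def head_to_body(p_head, head_yaw, head_offset):
    """Map a head-centered 3D point into body-centered coordinates."""
    return rot_z(head_yaw) @ p_head + head_offset

# An auditory source localized 1 m straight ahead of a head turned
# 90 degrees to the left, with the head 0.3 m above the body-frame origin:
p_body = head_to_body(np.array([1.0, 0.0, 0.0]),
                      np.pi / 2.0,
                      np.array([0.0, 0.0, 0.3]))
# p_body is approximately [0, 1, 0.3] in body coordinates
```

    Chaining such transforms (eye-in-head, head-on-body, body-in-world) gives the multi-level mapping the abstract describes, with each sensory input entering at its natural level.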

  5. Design and rationale of the Mechanical Retrieval and Recanalization of Stroke Clots Using Embolectomy (MR RESCUE) Trial.

    PubMed

    Kidwell, Chelsea S; Jahan, Reza; Alger, Jeffry R; Schaewe, Timothy J; Guzy, Judy; Starkman, Sidney; Elashoff, Robert; Gornbein, Jeffrey; Nenov, Val; Saver, Jeffrey L

    2014-01-01

    Multimodal imaging has the potential to identify acute ischaemic stroke patients most likely to benefit from late recanalization therapies. The general aim of the Mechanical Retrieval and Recanalization of Stroke Clots Using Embolectomy Trial is to investigate whether multimodal imaging can identify patients who will benefit substantially from mechanical embolectomy for the treatment of acute ischaemic stroke up to eight hours from symptom onset. Mechanical Retrieval and Recanalization of Stroke Clots Using Embolectomy is a randomized, controlled, blinded-outcome clinical trial. Acute ischaemic stroke patients with large vessel intracranial internal carotid artery or middle cerebral artery M1 or M2 occlusion enrolled within eight hours of symptom onset are eligible. The study sample size is 120 patients. Patients are randomized to endovascular embolectomy employing the Merci Retriever (Concentric Medical, Mountain View, CA) or the Penumbra System (Penumbra, Alameda, CA) vs. standard medical care, with randomization stratified by penumbral pattern. The primary aim of the trial is to test the hypothesis that the presence of substantial ischaemic penumbral tissue visualized on multimodal imaging (magnetic resonance imaging or computed tomography) predicts patients most likely to respond to mechanical embolectomy for treatment of acute ischaemic stroke due to a large vessel, intracranial occlusion up to eight hours from symptom onset. This hypothesis will be tested by analysing whether pretreatment imaging pattern has a significant interaction with treatment as a determinant of functional outcome based on the distribution of scores on the modified Rankin Scale measure of global disability assessed 90 days post-stroke. Nested hypotheses test for (1) treatment efficacy in patients with a penumbral pattern pretreatment, and (2) absence of treatment benefit (equivalency) in patients without a penumbral pattern pretreatment.
An additional aim will only be tested if the primary hypothesis of an interaction is negative: that patients treated with mechanical embolectomy have improved functional outcome vs. standard medical management. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.

  6. Cortical light scattering during interictal epileptic spikes in frontal lobe epilepsy in children: A fast optical signal and electroencephalographic study.

    PubMed

    Manoochehri, Mana; Mahmoudzadeh, Mahdi; Bourel-Ponchel, Emilie; Wallois, Fabrice

    2017-12-01

    Interictal epileptic spikes (IES) represent a signature of the transient synchronous and excessive discharge of a large ensemble of cortical heterogeneous neurons. Epilepsy cannot be reduced to a hypersynchronous activation of neurons whose functioning is impaired, resulting on electroencephalogram (EEG) in epileptic seizures or IES. The complex pathophysiological mechanisms require a global approach to the interactions between neural synaptic and nonsynaptic, vascular, and metabolic systems. In the present study, we focused on the interaction between synaptic and nonsynaptic mechanisms through the simultaneous noninvasive multimodal multiscale recording of high-density EEG (HD-EEG; synaptic) and fast optical signal (FOS; nonsynaptic), which evaluates rapid changes in light scattering related to changes in membrane configuration occurring during neuronal activation of IES. To evaluate changes in light scattering occurring around IES, three children with frontal IES were simultaneously recorded with HD-EEG and FOS. To evaluate changes in synchronization, time-frequency representation analysis of the HD-EEG was performed around the IES. To independently evaluate our multimodal method, a control experiment with somatosensory stimuli was designed and applied to five healthy volunteers. Alternating increase-decrease-increase in optical signals occurred 200 ms before to 180 ms after the IES peak. These changes started before any changes in the EEG signal. In addition, time-frequency domain EEG analysis revealed alternating decrease-increase-decrease in EEG spectral power concomitant with the changes in the optical signal during IES. These results suggest a relationship between (de)synchronization and neuronal volume changes in frontal lobe epilepsy during IES. 
These changes in the neuronal environment around IES in frontal lobe epilepsy observed in children, as they have been in rats, raise new questions about the synaptic/nonsynaptic mechanisms that propel the neurons to hypersynchronization, as occurs during IES. We further demonstrate that this noninvasive multiscale multimodal approach is suitable for studying the pathophysiology of the IES in patients. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.

  7. Reliability of Travel Time: Challenges Posed by a Multimodal Transport Participation

    NASA Astrophysics Data System (ADS)

    Wanjek, Monika; Hauger, Georg

    2017-10-01

    Travel time reliability is an essential component of transport participants' individual decision-making processes, particularly regarding mode choice. As a criterion describing the quality of both transportation systems and transportation modes, travel time reliability is already frequently compiled, analysed and quoted as an argument. To date, however, travel time reliability has been considered only for monomodal trips, while it has remained unaddressed for multimodal transport participation. Given that multimodality has gained significantly in importance, it is crucial to discuss how travel time reliability could be determined for multimodal trips. This paper points out the challenges that arise when applying travel time reliability to multimodal transport participation, and illustrates them with examples. To ground the theoretical ideas, trips and influencing factors typical of the everyday transport behaviour of commuters in a (sub)urban area are described.
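    One standard way to quantify travel time reliability, which could in principle be extended to a multimodal trip chain, is a buffer-index computation over the door-to-door travel time distribution. The sketch below is illustrative only: the legs, their distributions, and the transfer wait are invented, and the buffer index is one common metric rather than a measure proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical door-to-door commute with three legs (walk, train, bus).
# Travel times in minutes, drawn from illustrative distributions.
legs = {
    "walk":  rng.normal(8, 1, 10_000),
    "train": rng.normal(22, 3, 10_000),
    "bus":   rng.lognormal(np.log(15), 0.25, 10_000),
}
transfer_wait = rng.uniform(0, 10, 10_000)  # headway-dependent wait at the transfer

# Door-to-door travel time for each simulated trip.
total = sum(legs.values()) + transfer_wait

mean = total.mean()
p95 = np.percentile(total, 95)
buffer_index = (p95 - mean) / mean  # extra time budget relative to the mean
print(f"mean={mean:.1f} min, 95th percentile={p95:.1f} min, buffer index={buffer_index:.2f}")
```

    A multimodal trip makes the distribution a sum over legs plus transfer waits, which is exactly where the challenges described above (correlated delays, missed connections) complicate the simple independent-leg picture used here.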

  8. Bound vector solitons and soliton complexes for the coupled nonlinear Schrödinger equations.

    PubMed

    Sun, Zhi-Yuan; Gao, Yi-Tian; Yu, Xin; Liu, Wen-Jun; Liu, Ying

    2009-12-01

    Dynamic features describing the collisions of bound vector solitons and soliton complexes are investigated for the coupled nonlinear Schrödinger (CNLS) equations, which model the propagation of multimode soliton pulses under certain physical situations in nonlinear fiber optics. Equations of this type also arise in water waves and plasmas. Through appropriate choices of the arbitrary parameters in the multisoliton solutions derived via the Hirota bilinear method, the periodic structures along the propagation are classified according to the relative relations of the real wave numbers. Furthermore, parameters are shown to control the intensity distributions and interaction patterns of the bound vector solitons and soliton complexes. Transformations of the soliton types (shape changing with intensity redistribution) during collisions of these stationary structures with a regular one-soliton are discussed, in which a class of inelastic properties is involved. These discussions are expected to be helpful in interpreting such structures in multimode nonlinear fiber optics and to apply equally to other systems governed by the CNLS equations, e.g., plasma physics and Bose-Einstein condensates.
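    For reference, the coupled nonlinear Schrödinger (Manakov-type) equations referred to above can be written, in one common dimensionless normalization (the scaling and symbols here are generic, not necessarily those used by the authors), as:

```latex
\begin{aligned}
i\,\frac{\partial q_1}{\partial z} + \frac{\partial^2 q_1}{\partial t^2}
  + 2\left(|q_1|^2 + |q_2|^2\right) q_1 &= 0,\\
i\,\frac{\partial q_2}{\partial z} + \frac{\partial^2 q_2}{\partial t^2}
  + 2\left(|q_1|^2 + |q_2|^2\right) q_2 &= 0,
\end{aligned}
```

    where $q_1$ and $q_2$ are the envelopes of the two coupled modes (or polarization components), $z$ is the propagation distance, and $t$ is the retarded time; the cross terms $|q_2|^2 q_1$ and $|q_1|^2 q_2$ are what couple the modes and produce the vector-soliton collision dynamics studied in the paper.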

  9. BER performance of multimode fiber low-frequency passbands in subcarrier multiplexing transmission

    NASA Astrophysics Data System (ADS)

    Patmanee, Jaruwat; Pinthong, Chairat; Kanprachar, Surachet

    2018-03-01

    Multimode fibers are normally characterized by a 3-dB modal bandwidth of 200 to 500 MHz·km, depending on the material and structure of the fiber, and this band is conventionally treated as the channel for carrying a signal. Using only this 3-dB modal bandwidth, a higher data rate signal cannot be successfully transmitted. Alternatively, it has been shown that the response of multimode fibers in the low-frequency region, defined as the frequencies just beyond the 3-dB modal band, contains many passbands. These low-frequency passbands have also been shown to be predictable in terms of their peak frequencies; thus, suitable subcarrier frequencies can be obtained and used in a subcarrier multiplexing (SCM) system. In this paper, the formula from previous work for determining the peak frequencies of all 6 low-frequency passbands is applied. These 6 passbands and the 3-dB modal band of the multimode fiber are used to convey a high data rate signal. The signal is separated into 7 subcarrier signals and transmitted over these 7 channels using an SCM system. The performance of the received signal in terms of the bit-error-rate (BER) is determined, and some modifications and adjustments are made to improve the performance of the system. It is found that for a multimode fiber with a 200-MHz 3-dB modal bandwidth, a 500-Mbps data rate signal can be successfully transmitted with a BER lower than 10^-6. The data rate transmitted over a multimode fiber can thus be increased 2.5 times compared to using the 3-dB modal bandwidth alone, without any coding technique applied.
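    The BER figures quoted above can be related to received signal quality through the standard Gaussian-noise approximation BER = ½·erfc(Q/√2). The sketch below uses that textbook relation, plus an illustrative even split of the 500-Mbps stream across the 7 subcarrier channels; it is not the paper's simulation.

```python
import math

def ber_from_q(q: float) -> float:
    """BER for binary signaling in Gaussian noise: 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# A BER of 1e-6 corresponds to Q ~ 4.75 under this model.
for q in (3.0, 4.75, 6.0):
    print(f"Q = {q:4.2f}  ->  BER ~ {ber_from_q(q):.2e}")

# Aggregate throughput when the 3-dB modal band plus 6 low-frequency
# passbands each carry one subcarrier (illustrative even split of 500 Mbps):
per_channel = 500 / 7  # Mbps per subcarrier
print(f"per-subcarrier rate ~ {per_channel:.1f} Mbps over 7 channels")
```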

  10. Nonreciprocal frequency conversion in a multimode microwave optomechanical circuit

    NASA Astrophysics Data System (ADS)

    Feofanov, A. K.; Bernier, N. R.; Toth, L. D.; Koottandavida, A.; Kippenberg, T. J.

    Nonreciprocal devices such as isolators, circulators, and directional amplifiers are pivotal to quantum signal processing with superconducting circuits. In the microwave domain, commercially available nonreciprocal devices are based on ferrite materials; they are barely compatible with superconducting quantum circuits, lossy, and cannot be integrated on chip. Significant potential therefore exists for implementing non-magnetic chip-scale nonreciprocal devices using microwave optomechanical circuits. Here we demonstrate the possibility of nonreciprocal frequency conversion in a multimode microwave optomechanical circuit using solely the optomechanical interaction between modes. The conversion scheme and results reflecting the current progress on its experimental implementation will be presented.

  11. A low-cost multimodal head-mounted display system for neuroendoscopic surgery.

    PubMed

    Xu, Xinghua; Zheng, Yi; Yao, Shujing; Sun, Guochen; Xu, Bainan; Chen, Xiaolei

    2018-01-01

    With rapid advances in technology, wearable devices such as head-mounted displays (HMDs) have been adopted for various uses in medical science, ranging from simple fitness aids to surgical assistance. We aimed to investigate the feasibility and practicability of a low-cost multimodal HMD system in neuroendoscopic surgery. A multimodal HMD system, mainly consisting of an HMD with two built-in displays, an action camera, and a laptop computer displaying reconstructed medical images, was developed to assist neuroendoscopic surgery. With this tightly integrated system, the neurosurgeon could freely switch between endoscopic images, three-dimensional (3D) reconstructed virtual endoscopy images, and images of the surrounding environment. Using a Leap Motion controller, the neurosurgeon could adjust or rotate the 3D virtual endoscopic images at a distance to better understand the positional relation between lesions and normal tissues at will. A total of 21 consecutive patients with ventricular system diseases underwent neuroendoscopic surgery with the aid of this system. All operations were accomplished successfully, and no system-related complications occurred. The HMD was comfortable to wear and easy to operate, and its screen resolution was high enough for the neurosurgeon to operate carefully. With the system, the neurosurgeon could gain a better comprehension of lesions by freely switching among images of different modalities. The system was quick to learn, with skill increasing rapidly through use. Compared with commercially available surgical assistant instruments, this system was relatively low-cost. The multimodal HMD system is feasible, practical, helpful, and relatively cost efficient in neuroendoscopic surgery.

  12. Multi-Mode Estimation for Small Fixed Wing Unmanned Aerial Vehicle Localization Based on a Linear Matrix Inequality Approach

    PubMed Central

    Elzoghby, Mostafa; Li, Fu; Arafa, Ibrahim. I.; Arif, Usman

    2017-01-01

    Information fusion from multiple sensors ensures the accuracy and robustness of a navigation system, especially in the absence of global positioning system (GPS) data, which are degraded in many cases. A way to deal with multi-mode estimation for a small fixed wing unmanned aerial vehicle (UAV) localization framework is proposed, which depends on utilizing a Luenberger observer-based linear matrix inequality (LMI) approach. The proposed estimation technique relies on the interaction between multiple measurement modes and a continuous observer. The state estimation is performed in a switching environment between multiple active sensors to exploit the available information as much as possible, especially in GPS-denied environments. A Luenberger observer-based projection is implemented as the continuous observer to optimize estimation performance. The observer gain is chosen by solving a Lyapunov equation by means of an LMI algorithm; convergence follows from Lyapunov stability, which keeps the dynamic estimation error bounded through the choice of the observer gain matrix (L). Simulation results are presented for a small fixed wing UAV localization problem, and the results obtained using the proposed approach are compared with a single-mode Extended Kalman Filter (EKF), demonstrating the viability of the proposed strategy. PMID:28420214
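    A minimal sketch of the Luenberger observer idea described above, on an invented two-state model: the paper obtains the gain L by solving an LMI, whereas here pole placement stands in as a simpler way to get a stabilizing gain, so only the observer mechanics (not the paper's method) are illustrated.

```python
import numpy as np
from scipy.signal import place_poles

# Toy 2-state discrete-time model (position/velocity). The paper solves for
# the observer gain L via an LMI; pole placement is used here instead as a
# simpler route to a stabilizing gain (illustrative only).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])  # only position is measured

# Place the observer poles well inside the unit circle.
L = place_poles(A.T, C.T, [0.5, 0.6]).gain_matrix.T

x = np.array([1.0, -0.5])   # true state
x_hat = np.zeros(2)         # observer estimate
for _ in range(50):
    y = C @ x                                        # measurement
    x_hat = A @ x_hat + (L @ (y - C @ x_hat)).ravel()  # observer update
    x = A @ x                                        # plant update

err = np.linalg.norm(x - x_hat)
print(f"estimation error after 50 steps: {err:.2e}")
```

    The estimation error obeys e(k+1) = (A - LC) e(k), so any gain placing the eigenvalues of A - LC inside the unit circle bounds the error, which is the stability property the LMI formulation certifies more generally.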

  13. Multimode intravascular RF coil for MRI-guided interventions.

    PubMed

    Kurpad, Krishna N; Unal, Orhan

    2011-04-01

    To demonstrate the feasibility of using a single intravascular radiofrequency (RF) probe connected to the external magnetic resonance imaging (MRI) system via a single coaxial cable to perform active tip tracking and catheter visualization and high signal-to-noise ratio (SNR) intravascular imaging. A multimode intravascular RF coil was constructed on a 6F balloon catheter and interfaced to a 1.5T MRI scanner via a decoupling circuit. Bench measurements of coil impedances were followed by imaging experiments in saline and phantoms. The multimode coil behaves as an inductively coupled transmit coil. The forward-looking capability of 6 mm was measured. A greater than 3-fold increase in SNR compared to conventional imaging using optimized external coil was demonstrated. Simultaneous active tip tracking and catheter visualization was demonstrated. It is feasible to perform 1) active tip tracking, 2) catheter visualization, and 3) high SNR imaging using a single multimode intravascular RF coil that is connected to the external system via a single coaxial cable. Copyright © 2011 Wiley-Liss, Inc.

  14. Multi-mode Intravascular RF Coil for MRI-guided Interventions

    PubMed Central

    Kurpad, Krishna N.; Unal, Orhan

    2011-01-01

    Purpose To demonstrate the feasibility of using a single intravascular RF probe connected to the external MRI system via a single coaxial cable to perform active tip tracking, catheter visualization, and high SNR intravascular imaging. Materials and Methods A multi-mode intravascular RF coil was constructed on a 6F balloon catheter and interfaced to a 1.5T MRI scanner via a decoupling circuit. Bench measurements of coil impedances were followed by imaging experiments in saline and phantoms. Results The multi-mode coil behaves as an inductively coupled transmit coil. A forward-looking capability of 6 mm is measured. A greater than 3-fold increase in SNR compared to conventional imaging using an optimized external coil is demonstrated. Simultaneous active tip tracking and catheter visualization is demonstrated. Conclusions It is feasible to perform 1) active tip tracking, 2) catheter visualization, and 3) high SNR imaging using a single multi-mode intravascular RF coil that is connected to the external system via a single coaxial cable. PMID:21448969

  15. Multiplexed single-mode wavelength-to-time mapping of multimode light

    PubMed Central

    Chandrasekharan, Harikumar K; Izdebski, Frauke; Gris-Sánchez, Itandehui; Krstajić, Nikola; Walker, Richard; Bridle, Helen L.; Dalgarno, Paul A.; MacPherson, William N.; Henderson, Robert K.; Birks, Tim A.; Thomson, Robert R.

    2017-01-01

    When an optical pulse propagates along an optical fibre, different wavelengths travel at different group velocities. As a result, wavelength information is converted into arrival-time information, a process known as wavelength-to-time mapping. This phenomenon is most cleanly observed using a single-mode fibre transmission line, where spatial mode dispersion is not present, but the use of such fibres restricts possible applications. Here we demonstrate that photonic lanterns based on tapered single-mode multicore fibres provide an efficient way to couple multimode light to an array of single-photon avalanche detectors, each of which has its own time-to-digital converter for time-correlated single-photon counting. Exploiting this capability, we demonstrate the multiplexed single-mode wavelength-to-time mapping of multimode light using a multicore fibre photonic lantern with 121 single-mode cores, coupled to 121 detectors on a 32 × 32 detector array. This work paves the way to efficient multimode wavelength-to-time mapping systems with the spectral performance of single-mode systems. PMID:28120822
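    The wavelength-to-time mapping itself follows from the group-delay relation Δt = D·L·Δλ for chromatic dispersion. A quick back-of-the-envelope sketch with illustrative numbers (not taken from the paper):

```python
# Wavelength-to-time mapping via chromatic dispersion: dt = D * L * d_lambda.
# All numbers are illustrative, not those of the paper.
D = 17.0          # dispersion, ps / (nm * km)
L = 10.0          # fibre length, km
d_lambda = 50.0   # spectral width of the pulse, nm

dt = D * L * d_lambda          # total arrival-time spread, ps
print(f"arrival-time spread: {dt / 1000:.1f} ns")

# With a detector timing resolution of 100 ps, the achievable spectral
# resolution of the mapping is jitter / (D * L):
jitter = 100.0                 # ps
resolution = jitter / (D * L)  # nm
print(f"spectral resolution: {resolution:.2f} nm")
```

    The multicore photonic lantern in the paper preserves this single-mode relation per core, which is why the mapping keeps single-mode spectral performance while accepting multimode light.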

  16. High-resolution multimodal clinical multiphoton tomography of skin

    NASA Astrophysics Data System (ADS)

    König, Karsten

    2011-03-01

    This review focuses on multimodal multiphoton tomography based on near-infrared femtosecond lasers. Clinical multiphoton tomographs for 3D high-resolution in vivo imaging were placed on the market several years ago. The second generation of this Prism-Award-winning high-tech skin imaging tool (MPTflex) was introduced in 2010. The same year, the world's first clinical CARS studies were performed with a hybrid multimodal multiphoton tomograph. In particular, non-fluorescent lipids and water, mitochondrial fluorescent NAD(P)H, fluorescent elastin, keratin, and melanin, as well as SHG-active collagen, have been imaged with submicron resolution in patients suffering from psoriasis. Further multimodal approaches include the combination of multiphoton tomographs with low-resolution wide-field systems such as ultrasound, optoacoustic, OCT, and dermoscopy systems. Multiphoton tomographs are currently employed in Australia, Japan, the US, and several European countries for early diagnosis of skin cancer, optimization of treatment strategies, and cosmetic research, including long-term testing of sunscreen nanoparticles as well as anti-aging products.

  17. Systems Proteomics for Translational Network Medicine

    PubMed Central

    Arrell, D. Kent; Terzic, Andre

    2012-01-01

    Universal principles underlying network science, and their ever-increasing applications in biomedicine, underscore the unprecedented capacity of systems biology based strategies to synthesize and resolve massive high throughput generated datasets. Enabling previously unattainable comprehension of biological complexity, systems approaches have accelerated progress in elucidating disease prediction, progression, and outcome. Applied to the spectrum of states spanning health and disease, network proteomics establishes a collation, integration, and prioritization algorithm to guide mapping and decoding of proteome landscapes from large-scale raw data. Providing unparalleled deconvolution of protein lists into global interactomes, integrative systems proteomics enables objective, multi-modal interpretation at molecular, pathway, and network scales, merging individual molecular components, their plurality of interactions, and functional contributions for systems comprehension. As such, network systems approaches are increasingly exploited for objective interpretation of cardiovascular proteomics studies. Here, we highlight network systems proteomic analysis pipelines for integration and biological interpretation through protein cartography, ontological categorization, pathway and functional enrichment and complex network analysis. PMID:22896016

  18. Mathematical biomarkers for the autonomic regulation of cardiovascular system.

    PubMed

    Campos, Luciana A; Pereira, Valter L; Muralikrishna, Amita; Albarwani, Sulayma; Brás, Susana; Gouveia, Sónia

    2013-10-07

    Heart rate and blood pressure are the most important vital signs in diagnosing disease. Both heart rate and blood pressure are characterized by a high degree of short term variability from moment to moment, medium term over the normal day and night as well as in the very long term over months to years. The study of new mathematical algorithms to evaluate the variability of these cardiovascular parameters has a high potential in the development of new methods for early detection of cardiovascular disease, to establish differential diagnosis with possible therapeutic consequences. The autonomic nervous system is a major player in the general adaptive reaction to stress and disease. The quantitative prediction of the autonomic interactions in multiple control loops pathways of cardiovascular system is directly applicable to clinical situations. Exploration of new multimodal analytical techniques for the variability of cardiovascular system may detect new approaches for deterministic parameter identification. A multimodal analysis of cardiovascular signals can be studied by evaluating their amplitudes, phases, time domain patterns, and sensitivity to imposed stimuli, i.e., drugs blocking the autonomic system. The causal effects, gains, and dynamic relationships may be studied through dynamical fuzzy logic models, such as the discrete-time model and discrete-event model. We expect an increase in accuracy of modeling and a better estimation of the heart rate and blood pressure time series, which could be of benefit for intelligent patient monitoring. We foresee that identifying quantitative mathematical biomarkers for autonomic nervous system will allow individual therapy adjustments to aim at the most favorable sympathetic-parasympathetic balance.
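    As a concrete example of the variability markers discussed above, two standard time-domain heart-rate-variability statistics (SDNN and RMSSD) can be computed from a series of RR intervals. The data below are synthetic, and the metrics are generic HRV measures rather than the fuzzy-logic models the authors propose.

```python
import numpy as np

# Two standard time-domain HRV markers from RR intervals (ms).
rng = np.random.default_rng(1)
rr = 800 + rng.normal(0, 40, 300)   # synthetic: ~75 bpm with variability

sdnn = rr.std(ddof=1)                       # overall variability
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # beat-to-beat (vagally mediated) variability
print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")
```

    SDNN captures the medium-term variability mentioned in the abstract, while RMSSD is sensitive to the short-term, parasympathetically driven component, which is why autonomic blockade experiments shift the two measures differently.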

  19. Mathematical biomarkers for the autonomic regulation of cardiovascular system

    PubMed Central

    Campos, Luciana A.; Pereira, Valter L.; Muralikrishna, Amita; Albarwani, Sulayma; Brás, Susana; Gouveia, Sónia

    2013-01-01

    Heart rate and blood pressure are the most important vital signs in diagnosing disease. Both heart rate and blood pressure are characterized by a high degree of short term variability from moment to moment, medium term over the normal day and night as well as in the very long term over months to years. The study of new mathematical algorithms to evaluate the variability of these cardiovascular parameters has a high potential in the development of new methods for early detection of cardiovascular disease, to establish differential diagnosis with possible therapeutic consequences. The autonomic nervous system is a major player in the general adaptive reaction to stress and disease. The quantitative prediction of the autonomic interactions in multiple control loops pathways of cardiovascular system is directly applicable to clinical situations. Exploration of new multimodal analytical techniques for the variability of cardiovascular system may detect new approaches for deterministic parameter identification. A multimodal analysis of cardiovascular signals can be studied by evaluating their amplitudes, phases, time domain patterns, and sensitivity to imposed stimuli, i.e., drugs blocking the autonomic system. The causal effects, gains, and dynamic relationships may be studied through dynamical fuzzy logic models, such as the discrete-time model and discrete-event model. We expect an increase in accuracy of modeling and a better estimation of the heart rate and blood pressure time series, which could be of benefit for intelligent patient monitoring. We foresee that identifying quantitative mathematical biomarkers for autonomic nervous system will allow individual therapy adjustments to aim at the most favorable sympathetic-parasympathetic balance. PMID:24109456

  20. A robust probabilistic collaborative representation based classification for multimodal biometrics

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Liu, Huanxi; Ding, Derui; Xiao, Jianli

    2018-04-01

    Most traditional biometric recognition systems perform recognition with a single biometric indicator. Such systems suffer from noisy data, interclass variations, unacceptable error rates, forged identities, and so on. Because of these inherent problems, attempts to enhance the performance of unimodal biometric systems based on single features face fundamental limits; multimodal biometrics is therefore investigated to reduce some of these defects. This paper proposes a new multimodal biometric recognition approach that fuses faces and fingerprints. For more recognizable features, the proposed method extracts block local binary pattern features for all modalities and then combines them into a single framework. For better classification, it employs the robust probabilistic collaborative representation based classifier to recognize individuals. Experimental results indicate that the proposed method improves recognition accuracy compared to unimodal biometrics.
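    The collaborative representation idea underlying the classifier can be sketched in a few lines: code the probe over the pooled training dictionary with an l2 penalty, then assign the class whose training columns best reconstruct it. This is the basic CRC scheme on toy data, not the paper's robust probabilistic variant or its block-LBP features.

```python
import numpy as np

def crc_classify(X, labels, y, lam=1e-2):
    """Basic collaborative representation classification (CRC).

    X: (d, n) dictionary of training samples as columns; labels: (n,) class ids;
    y: (d,) probe. Returns the class with smallest reconstruction residual.
    """
    # Ridge-regularized coding vector: rho = (X^T X + lam I)^{-1} X^T y
    rho = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(y - X[:, mask] @ rho[mask])
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(0)
# Two classes, each with 5 training samples near a random class template.
t0, t1 = rng.normal(size=20), rng.normal(size=20)
X = np.column_stack([t0 + 0.1 * rng.normal(size=20) for _ in range(5)] +
                    [t1 + 0.1 * rng.normal(size=20) for _ in range(5)])
labels = np.array([0] * 5 + [1] * 5)
y = t1 + 0.1 * rng.normal(size=20)   # a noisy class-1 probe
print(crc_classify(X, labels, y))
```

    Because all classes collaborate in coding the probe, the classifier stays stable when any single modality's features are noisy, which is the motivation for using it in the fused face-plus-fingerprint setting.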

  1. A least-squares parameter estimation algorithm for switched Hammerstein systems with applications to the VOR

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Kearney, Robert E.; Galiana, Henrietta L.

    2005-01-01

    A "multimode" or "switched" system is one that switches between various modes of operation. When a switch occurs from one mode to another, a discontinuity may result, followed by a smooth evolution under the new regime. The switching behavior of these systems is not well understood and, therefore, identification of multimode systems typically requires a preprocessing step to classify the observed data according to a mode of operation. A further consequence of the switched nature of these systems is that the data available for parameter estimation of any subsystem may be inadequate. As such, identification and parameter estimation of multimode systems remains an unresolved problem. In this paper, we 1) show that the NARMAX model structure can be used to describe the impulsive-smooth behavior of switched systems, 2) propose a modified extended least squares (MELS) algorithm to estimate the coefficients of such models, and 3) demonstrate its applicability to simulated and real data from the vestibulo-ocular reflex (VOR). The approach will also allow the identification of other nonlinear bio-systems suspected of containing "hard" nonlinearities.
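    For a single, non-switched mode, Hammerstein parameters can be estimated by ordinary least squares over lagged polynomial regressors; this is the starting point that MELS extends to switched, impulsive-smooth behavior. A toy sketch, with the system and its coefficients invented for illustration:

```python
import numpy as np

# Hammerstein system: static cubic nonlinearity followed by a first-order
# linear filter, identified by least squares on lagged polynomial regressors.
rng = np.random.default_rng(0)
N = 2000
u = rng.normal(size=N)
w = u + 0.5 * u**3                  # hidden static nonlinearity
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.8 * y[k - 1] + 0.3 * w[k - 1]
y += 0.01 * rng.normal(size=N)      # small measurement noise

# Regressors y[k-1], u[k-1], u[k-1]^3; true coefficients are 0.8, 0.3, 0.15.
Phi = np.column_stack([y[:-1], u[:-1], u[:-1] ** 3])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(np.round(theta, 3))
```

    In the switched setting the regressor set must additionally capture the mode transitions, which is where the NARMAX structure and the modified extended least squares step come in.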

  2. Design of a compact low-power human-computer interaction equipment for hand motion

    NASA Astrophysics Data System (ADS)

    Wu, Xianwei; Jin, Wenguang

    2017-01-01

    Human-Computer Interaction (HCI) raises demands for convenience, endurance, responsiveness and naturalness. This paper describes the design of a compact, wearable, low-power HCI device for gesture recognition. The system combines multi-mode sense signals, a vision sense signal and a motion sense signal, and the equipment is accordingly fitted with a depth camera and a motion sensor. After tight integration, the device is compact and portable (40 mm × 30 mm). The system is built on a modular layered framework, which supports real-time collection (60 fps), processing and transmission via synchronous fusion with asynchronous concurrent collection and wireless Bluetooth 4.0 transmission. To minimize energy consumption, the system uses low-power components, manages peripheral state dynamically, switches into idle mode intelligently, applies pulse-width modulation (PWM) to the NIR LEDs of the depth camera, and optimizes the algorithm using the motion sensor. To test the device's function and performance, a gesture recognition algorithm was run on the system. The results show that overall energy consumption can be as low as 0.5 W.

  3. Grammar Is a System That Characterizes Talk in Interaction

    PubMed Central

    Ginzburg, Jonathan; Poesio, Massimo

    2016-01-01

    Much of contemporary mainstream formal grammar theory is unable to provide analyses for language as it occurs in actual spoken interaction. Its analyses are developed for a cleaned up version of language which omits the disfluencies, non-sentential utterances, gestures, and many other phenomena that are ubiquitous in spoken language. Using evidence from linguistics, conversation analysis, multimodal communication, psychology, language acquisition, and neuroscience, we show that these aspects of language use are rule governed in much the same way as phenomena captured by conventional grammars. Furthermore, we argue that over the past few years some of the tools required to provide precise characterizations of such phenomena have begun to emerge in theoretical and computational linguistics; hence, there is no reason for treating them as “second class citizens” other than pre-theoretical assumptions about what should fall under the purview of grammar. Finally, we suggest that grammar formalisms covering such phenomena would provide a better foundation not just for linguistic analysis of face-to-face interaction, but also for sister disciplines, such as research on spoken dialogue systems and/or psychological work on language acquisition. PMID:28066279

  4. Effect of a multimodal exercise program on sleep disturbances and instrumental activities of daily living performance on Parkinson's and Alzheimer's disease patients.

    PubMed

    Nascimento, Carla Manuela Crispim; Ayan, Carlos; Cancela, Jose Maria; Gobbi, Lilian Teresa Bucken; Gobbi, Sebastião; Stella, Florindo

    2014-04-01

    To assess the contribution of a multimodal exercise program to sleep disturbances (SD) and the performance of instrumental activities of daily living (IADL) in patients with a clinical diagnosis of Alzheimer's disease (AD) or Parkinson's disease (PD). A total of 42 consecutive patients with PD (23 training group, 19 control group) and 35 patients with AD (19 training group, 16 control group) were recruited. Participants in both training groups carried out three 1-h sessions per week of a multimodal exercise program for 6 months. The Pfeffer Questionnaire for Instrumental Activities and the Mini-Sleep Questionnaire were used to assess the effects of the program on IADL and SD, respectively. Two-way ANCOVA showed interactions in IADL and SD: significant improvements were observed for these variables in both intervention groups, whereas maintenance or worsening was observed in the control groups. Effect size analysis confirmed these improvements. The present results show that mild to moderate intensity multimodal physical exercise carried out on a regular basis over 6 months can contribute to reducing IADL deficits and attenuating SD. © 2013 Japan Geriatrics Society.

  5. Fast and Robust Registration of Multimodal Remote Sensing Images via Dense Orientated Gradient Feature

    NASA Astrophysics Data System (ADS)

    Ye, Y.

    2017-09-01

    This paper presents a fast and robust method for the registration of multimodal remote sensing data (e.g., optical, LiDAR, SAR and maps). The proposed method is based on the hypothesis that structural similarity between images is preserved across different modalities. We first develop a pixel-wise feature descriptor named the Dense Orientated Gradient Histogram (DOGH), which can be computed efficiently at every pixel and is robust to non-linear intensity differences between images. A fast similarity metric based on DOGH is then built in the frequency domain using the Fast Fourier Transform (FFT), and finally a template matching scheme is applied to detect tie points between images. Experimental results on different types of multimodal remote sensing images show that the proposed similarity metric achieves superior matching performance and computational efficiency compared to state-of-the-art methods. Moreover, based on the proposed similarity metric, we also design a fast and robust automatic registration system for multimodal images. This system has been evaluated using a pair of very large SAR and optical images (more than 20000 × 20000 pixels). Experimental results show that our system outperforms two popular commercial software systems (i.e., ENVI and ERDAS) in both registration accuracy and computational efficiency.
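    The FFT trick behind such a fast similarity metric is the convolution theorem: correlation scores for every template offset come from one forward/inverse transform pair instead of a sliding-window loop. The sketch below applies it to raw pixel intensities on synthetic data; the DOGH descriptor itself is not reproduced here.

```python
import numpy as np

def match_template_fft(image, template):
    """Return the top-left offset of the best correlation match via FFT."""
    H, W = image.shape
    h, w = template.shape
    F_img = np.fft.rfft2(image)
    # Correlation = convolution with the flipped template.
    F_tpl = np.fft.rfft2(template[::-1, ::-1], s=image.shape)
    score = np.fft.irfft2(F_img * F_tpl, s=image.shape)
    # Indices [h-1:, w-1:] are free of circular wrap-around and map
    # conv index (r+h-1, c+w-1) back to top-left offset (r, c).
    valid = score[h - 1:H, w - 1:W]
    return np.unravel_index(np.argmax(valid), valid.shape)

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
tpl = img[20:30, 35:45].copy()   # template cut from a known location
print(match_template_fft(img, tpl))
```

    Replacing the raw intensities with a per-pixel descriptor such as DOGH, and normalizing the scores, is what makes the same O(N log N) machinery robust to the non-linear intensity differences between modalities.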

  6. Interactive multi-mode blade impact analysis

    NASA Technical Reports Server (NTRS)

    Alexander, A.; Cornell, R. W.

    1978-01-01

    The theoretical methodology used in developing an analysis for the response of turbine engine fan blades subjected to soft-body (bird) impacts is reported, and the computer program developed using this methodology as its basis is described. This computer program is an outgrowth of two programs that were previously developed for the purpose of studying problems of a similar nature (a 3-mode beam impact analysis and a multi-mode beam impact analysis). The present program utilizes an improved missile model that is interactively coupled with blade motion which is more consistent with actual observations. It takes into account local deformation at the impact area, blade camber effects, and the spreading of the impacted missile mass on the blade surface. In addition, it accommodates plate-type mode shapes. The analysis capability in this computer program represents a significant improvement in the development of the methodology for evaluating potential fan blade materials and designs with regard to foreign object impact resistance.

  7. Gesture-Based Customer Interactions: Deaf and Hearing Mumbaikars' Multimodal and Metrolingual Practices

    ERIC Educational Resources Information Center

    Kusters, Annelies

    2017-01-01

    The article furthers the study of urban multilingual (i.e. metrolingual) practices, in particular the study of customer interactions, by a focus on the use of gestures in these practices. The article focuses on fluent deaf signers and hearing non-signers in Mumbai who use gestures to communicate with each other, often combined with mouthing,…

  8. Effects of Multimodal Mandala Yoga on Social and Emotional Skills for Youth with Autism Spectrum Disorder: An Exploratory Study.

    PubMed

    Litchke, Lyn Gorbett; Liu, Ting; Castro, Stephanie

    2018-01-01

    Youth with autism spectrum disorder (ASD) demonstrate impairment in the ability to socially and emotionally relate to others, which can limit participation in groups, interaction with peers, and building successful life relationships. The aim of this exploratory study was to examine the effects of a novel multimodal Mandala yoga program on social and emotional skills for youth with ASD. Five males with ASD attended 1-h yoga sessions, twice a week for 4 weeks. Multimodal Mandala yoga comprised 26 circular partner/group poses, color and tracing sheets, rhythmic chanting, yoga cards, and games. Treatment and Research Institute for ASD Social Skills Assessment (TSSA) scores were collected before and after the eight yoga sessions. The Modified Facial Mood Scale (MFMS) was used to observe mood changes before and after each yoga class. Paired-sample t-tests were conducted on TSSA and MFMS scores to compare social and emotional differences after the 4-week camp. Narrative field notes were documented after each of the eight yoga sessions. A significant improvement from pre- to post-test was found in overall TSSA (t(4) = -5.744, P = 0.005) and on the responding to initiation (t(4) = -3.726, P = 0.020), initiating interaction (t(4) = -8.5, P = 0.039), and affective understanding and perspective taking (t(4) = -5.171, P = 0.007) subscales. The youths' MFMS scores increased from 80% to 100% by the end of the eight yoga sessions, demonstrating a pleasant or positive mood. Thematic analysis of the narrative notes identified three key factors associated with the yoga experience: (a) enhanced mood and emotional expression, (b) increased empathy toward others, and (c) improved teamwork skills. This multimodal Mandala yoga training has implications for developing positive social and emotional skills for youth with ASD.

  9. Readiness of freight transportation system at special economic zone of Lhokseumawe

    NASA Astrophysics Data System (ADS)

    Fithra, Herman; Sirojuzilam; Saleh, Sofyan M.; Erlina

    2017-11-01

    The geo-economic advantages of Lhokseumawe and Aceh Utara District lie in Aceh's geographical location on a Sea Lane of Communication (SLOC), the Malacca Strait. Located on the Malacca Strait, the Special Economic Zone (Kawasan Ekonomi Khusus/KEK) of Arun Lhokseumawe has a comparative advantage as part of the global production network, or global value chain. This study aims to assess the freight transportation system supporting KEK Lhokseumawe, especially the availability of multimodal transport and multimodal infrastructure. The results show that KEK Lhokseumawe, driven by state-owned enterprises in Lhokseumawe and Aceh Utara, is urgently needed to accelerate the economy and generate new economic growth in Aceh. Multimodal transport in KEK Lhokseumawe is available, including Ro-Ro ships, a train connection from Dewantara sub-district to Muara Batu sub-district, and trucks of small, medium, and large capacity. The available multimodal infrastructure includes international sea ports, road network connectivity with a pavement structure rating of 94.62%, and railroad tracks, indicating that multimodal transportation in KEK Lhokseumawe is ready for use. Regulations are still required for operating all ports in KEK Lhokseumawe as export/import gateways serving container loading and unloading, and as a place of origin of goods on the east coast of Aceh.

  10. Multimodal compared to pharmacologic treatments for chronic tension-type headache in adolescents.

    PubMed

    Przekop, Peter; Przekop, Allison; Haviland, Mark G

    2016-10-01

    Chronic tension-type headache (CTTH) in children and adolescents is a serious medical condition, with considerable morbidity and few effective, evidence-based treatments. We performed a chart review of 83 adolescents (age range = 13-18 years; 67 girls and 16 boys) diagnosed with CTTH. Two treatment protocols were compared: multimodal (osteopathic manipulative treatments, mindfulness, and qi gong) and pharmacologic (amitriptyline or gabapentin). Four outcomes (headache frequency, pain intensity, general health, and health interference) were assessed at three time points (baseline, 3 months, and 6 months). A fifth outcome, number of bilateral tender points, was recorded at baseline and 6 months. All five were evaluated statistically with a linear mixed model. Although both multimodal and pharmacologic treatments were effective for CTTH (time effects for all measures were significant at p < .001), results from each analysis favored multimodal treatment (the five group-by-time interaction effects were significant at or below the p < .001 level). Headache frequency in the pharmacologic group, for example, was reduced from a monthly average (95% confidence interval shown in parentheses) of 23.9 (21.8, 26.0) to 16.4 (14.3, 18.6), and in the multimodal group from 22.3 (20.1, 24.5) to 4.9 (2.6, 7.2) (a substantial group difference). Pain intensity (worst in the last 24 hours, 0-10 scale) was reduced in the pharmacologic group from 6.2 (5.6, 6.9) to 3.4 (2.7, 4.1), and from 6.1 (5.4, 6.8) to 2.0 (1.2, 2.7) in the multimodal group (a less substantial difference). Across the other three assessments, group differences were larger for general health and number of tender points and less so for pain restriction. Multimodal treatment for adolescent CTTH appears to be effective. Randomized controlled trials are needed to confirm these promising results. Copyright © 2015 Elsevier Ltd. All rights reserved.
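
    The parenthesized 95% confidence intervals above follow the usual mean ± t·SE construction. A minimal sketch, assuming invented monthly headache counts and a t-table critical value:

```python
import math

def mean_ci95(values, t_crit):
    """Two-sided 95% CI for the mean, given the t critical value for df = n - 1."""
    n = len(values)
    m = sum(values) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / (n - 1))
    half = t_crit * sd / math.sqrt(n)
    return m - half, m + half

# Hypothetical monthly headache counts (invented); t_crit for df = 9 is 2.262
lo, hi = mean_ci95([22, 25, 24, 20, 26, 23, 21, 27, 24, 22], 2.262)
print(f"mean CI: ({lo:.1f}, {hi:.1f})")
```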

  11. Multimodal autofluorescence detection of cancer: from single cells to living organism

    NASA Astrophysics Data System (ADS)

    Horilova, J.; Cunderlikova, B.; Cagalinec, M.; Chorvat, D.; Marcek Chorvatova, A.

    2018-02-01

    Multimodal optical imaging of suspected tissues is proving to be a promising method for distinguishing cancerous tissues from healthy ones. In particular, the combination of steady-state spectroscopic methods with time-resolved fluorescence provides more precise insight into native metabolism when focused on tissue autofluorescence, as cancer is linked to specific metabolic remodeling detectable spectroscopically. In this work, we evaluate the possibilities and limitations of multimodal optical cancer detection in single cells, collagen-based 3D cell cultures, and living organisms (whole mice), representing gradually increasing complexity of model systems.

  12. Optimization of an integrated wavelength monitor device

    NASA Astrophysics Data System (ADS)

    Wang, Pengfei; Brambilla, Gilberto; Semenova, Yuliya; Wu, Qiang; Farrell, Gerald

    2011-05-01

    In this paper an edge filter based on multimode interference in an integrated waveguide is optimized for a wavelength monitoring application. This can also be used as a demodulation element in a fibre Bragg grating sensing system. A global optimization algorithm is presented for the optimum design of the multimode interference device, including a range of parameters of the multimode waveguide, such as length, width and position of the input and output waveguides. The designed structure demonstrates the desired spectral response for wavelength measurements. Fabrication tolerance is also analysed numerically for this structure.

  13. Multi-Modal Performance Measures in Oregon: Developing a Transportation Cost Index Based Upon Multi-Modal Network and Land Use Information

    DOT National Transportation Integrated Search

    2016-02-01

    Transportation Cost Index is a performance measure for transportation and land use systems originally proposed and piloted by Reiff and Gregor (2005). It fills important niches of existing similar measures in terms of policy areas covered and type of ...

  14. Comprehension Strategy Instruction for Multimodal Texts in Science

    ERIC Educational Resources Information Center

    Alvermann, Donna E.; Wilson, Amy Alexandra

    2011-01-01

    This article highlights examples from a middle-school science teacher's instruction using multimodal texts. Its importance lies in reconciling narrowed definitions of reading (and hence reading instruction) with the need to develop students' critical awareness as they engage with multiple sign systems, or semiotic resources, used for constructing…

  15. Semi-Immersive Virtual Turbine Engine Simulation System

    NASA Astrophysics Data System (ADS)

    Abidi, Mustufa H.; Al-Ahmari, Abdulrahman M.; Ahmad, Ali; Darmoul, Saber; Ameen, Wadea

    2018-05-01

    The design and verification of assembly operations are essential for planning product production operations. Recently, virtual prototyping has witnessed tremendous progress, reaching a stage where current environments enable rich, multi-modal interaction between designers and models through stereoscopic visuals, surround sound, and haptic feedback. This paper discusses the benefits of building and using Virtual Reality (VR) models in assembly process verification and presents the virtual assembly (VA) of an aircraft turbine engine. The assembly parts and sequences are explained using a virtual reality design system that enables stereoscopic visuals, surround sound, and rich, intuitive interaction with the developed models. A special software architecture is suggested to describe the assembly parts and assembly sequence in VR. A collision detection mechanism is employed that provides visual feedback to check the interference between components. The system is tested for virtual prototyping and assembly sequencing of a turbine engine. We show that the developed system is comprehensive in terms of VR feedback mechanisms, which include visual, auditory, and tactile as well as force feedback, and that it is effective and efficient for validating assembly design, part design, and operations planning.
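
    The abstract does not specify the collision detection mechanism; a minimal sketch of a common broad-phase interference test (axis-aligned bounding-box overlap, with invented part extents):

```python
from dataclasses import dataclass

@dataclass
class AABB:
    min_pt: tuple  # (x, y, z) lower corner
    max_pt: tuple  # (x, y, z) upper corner

def overlaps(a: AABB, b: AABB) -> bool:
    """True if the boxes intersect on every axis (a broad-phase interference test)."""
    return all(a.min_pt[i] <= b.max_pt[i] and b.min_pt[i] <= a.max_pt[i]
               for i in range(3))

# Hypothetical part extents (invented): the boxes share the region near x = 1.5..2
blade   = AABB((0, 0, 0), (2, 1, 1))
housing = AABB((1.5, 0.5, 0.5), (3, 2, 2))
print(overlaps(blade, housing))
```

    A real VA system would follow such a broad-phase hit with an exact mesh-level intersection test before flagging interference.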

  16. Changing the game: exploring infants' participation in early play routines

    PubMed Central

    Fantasia, Valentina; Fasulo, Alessandra; Costall, Alan; López, Beatriz

    2014-01-01

    Play has proved to have a central role in children's development, most notably in rule learning (Piaget, 1965; Sutton-Smith, 1979) and negotiation of roles and goals (Garvey, 1974; Bruner et al., 1976). Yet very little research has been done on early play. The present study focuses on early social games, i.e., vocal-kinetic play routines that mothers use to interact with infants from very early on. We explored 3-month-old infants and their mothers performing a routine game first in the usual way, then in two violated conditions: without gestures and without sound. The aim of the study is to investigate infants' participation and expectations in the game and whether this participation is affected by changes in the multimodal format of the game. Infants' facial expressions, gaze, and body movements were coded to measure levels of engagement and affective state across the three conditions. Results showed a significant decrease in Limbs Movements and expressions of Positive Affect, an increase in Gaze Away and in Stunned Expression when the game structure was violated. These results indicate that the violated game conditions were experienced as less engaging, either because of an unexpected break in the established joint routine, or simply because they were weaker versions of the same game. Overall, our results suggest that structured, multimodal play routines may constitute interactional contexts that only work as integrated units of auditory and motor resources, representing early communicative contexts which prepare the ground for later, more complex multimodal interactions, such as verbal exchanges. PMID:24936192

  17. Design and installation of a multimode microscopy system

    NASA Astrophysics Data System (ADS)

    Helm, Johannes P.; Haug, Finn-Mogens S.; Storm, Johan F.; Ottersen, Ole-Petter

    2001-04-01

    We describe the design and installation of a multi-mode microscopy core facility in an environment of varied research activity in the life sciences. Experimenters can select any combination of a) microscopes (upright, upright fixed-stage, inverted), b) microscopy modes (widefield, DIC, IRDIC, widefield epifluorescence, transmission LSM, reflection and fluorescence CLSM, MPLSM), c) imaging techniques (direct observation, video observation, photography, quantitative camera recording, flying-spot scanning), and d) auxiliary systems (equipment for live specimen imaging, electrophysiology, time-coordinated laser scanning and electrophysiology, patch clamp). The equipment is installed on one large vibration-isolating optical table (3 m × 1.5 m × 0.3 m). Electronics, auxiliary equipment, and a fiber-coupled, remotely controlled Ar+-Kr+ laser are mounted in a rack system fixed to the ceiling. The design of the shelves allows the head of the CLSM to be moved to any of the microscopes without increasing critical cable lengths, while preserving easy access to all the units. The beam of a titanium-sapphire laser, controlled by means of an EOM and a prism GVD, is coupled directly to the microscopes. Three mirrors mounted on a single precision translation table are integrated into the beam steering system so that the beam can easily be redirected to any of the microscopes. All the available instruments can be operated by the educated and trained user. The system is popular among researchers in neuroanatomy, embryology, cell biology, molecular biology - including the study of protein interactions, e.g. by means of FRET - and electrophysiology. Its colocalization with an EM facility promises to provide considerable synergy effects.

  18. Hand biometric recognition based on fused hand geometry and vascular patterns.

    PubMed

    Park, GiTae; Kim, Soowon

    2013-02-28

    A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used; for the vascular pattern, a direction-based vascular-pattern extraction method was used. Together, these constitute a new multimodal biometric approach. The proposed multimodal biometric system uses only one image to extract the feature points, so it can be configured for low-cost devices. Our method fuses hand geometry (the side view and back of the hand) and vascular-pattern recognition at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%.
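
    Score-level fusion combines per-matcher scores (e.g., a weighted sum of the geometry and vascular scores) before thresholding, and the equal error rate (EER) is the operating point where the false accept and false reject rates coincide. A minimal sketch with invented fused scores (the paper's actual matcher outputs are not public):

```python
# Invented fused match scores: higher means a stronger claimed-identity match
genuine  = [0.91, 0.88, 0.95, 0.80, 0.93, 0.87]   # same-hand comparisons
impostor = [0.30, 0.42, 0.15, 0.51, 0.22, 0.38]   # different-hand comparisons

def far_frr(threshold):
    far = sum(s >= threshold for s in impostor) / len(impostor)  # false accepts
    frr = sum(s < threshold for s in genuine) / len(genuine)     # false rejects
    return far, frr

# Sweep thresholds; the EER sits where FAR and FRR cross (gap closest to zero)
gap, eer_threshold = min(
    (abs(far_frr(t / 100)[0] - far_frr(t / 100)[1]), t / 100) for t in range(101)
)
far, frr = far_frr(eer_threshold)
print(f"threshold={eer_threshold:.2f}  FAR={far:.2f}  FRR={frr:.2f}")
```

    With these cleanly separated toy scores the EER is zero; the paper's 0.06% reflects the small residual overlap of its real score distributions.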

  20. Analysis of psychological factors for quality assessment of interactive multimodal service

    NASA Astrophysics Data System (ADS)

    Yamagishi, Kazuhisa; Hayashi, Takanori

    2005-03-01

    We proposed a subjective quality assessment model for interactive multimodal services. First, psychological factors of an audiovisual communication service were extracted by using the semantic differential (SD) technique and factor analysis. Forty subjects participated in subjective tests and performed point-to-point conversational tasks on a PC-based TV phone that exhibits various network qualities. The subjects assessed those qualities on the basis of 25 pairs of adjectives. Two psychological factors, i.e., an aesthetic feeling and a feeling of activity, were extracted from the results. Then, quality impairment factors affecting these two psychological factors were analyzed. We found that the aesthetic feeling is mainly affected by IP packet loss and video coding bit rate, and the feeling of activity depends on delay time and video frame rate. We then proposed an opinion model derived from the relationships among quality impairment factors, psychological factors, and overall quality. The results indicated that the estimation error of the proposed model is almost equivalent to the statistical reliability of the subjective score. Finally, using the proposed model, we discuss guidelines for quality design of interactive audiovisual communication services.
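
    Factor extraction of the kind applied to SD ratings can be sketched via principal components of the correlation matrix (a full factor analysis adds rotation). The ratings below are synthetic, with two planted latent factors standing in for "aesthetic feeling" and "feeling of activity":

```python
import numpy as np

rng = np.random.default_rng(0)
aesthetic = rng.normal(size=40)          # latent factor 1 (one value per subject)
activity  = rng.normal(size=40)          # latent factor 2
# Six adjective-pair ratings: three load on each latent factor, plus noise
ratings = np.column_stack(
    [aesthetic + 0.2 * rng.normal(size=40) for _ in range(3)]
    + [activity + 0.2 * rng.normal(size=40) for _ in range(3)]
)

corr = np.corrcoef(ratings, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]          # eigenvalues, descending
n_factors = int((eigvals > 1).sum())              # Kaiser criterion
print(f"eigenvalues: {np.round(eigvals, 2)} -> {n_factors} factors")
```

    The Kaiser criterion recovers the two planted factors, mirroring the two psychological factors the study extracted from 25 adjective pairs.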

  1. A study on the nature of interactions of mixed-mode ligands HEA and PPA HyperCel using phenylglyoxal modified lysozyme.

    PubMed

    Pezzini, J; Cabanne, C; Dupuy, J-W; Gantier, R; Santarelli, X

    2014-06-01

    Mixed-mode chromatography, or multimodal chromatography, involves the exploitation of combinations of several interactions in a controlled manner to facilitate the rapid capture of proteins. Mixed-mode ligands like HEA and PPA HyperCel™ facilitate different kinds of interactions (hydrophobic, ionic, etc.) under different conditions. To better characterize the nature of this multimodal interaction, we studied a protein, lysozyme, which is not retained by these mixed-mode resins under normal binding conditions. Lysozyme was modified specifically at arginine residues by the action of phenylglyoxal and was extensively studied in this work to characterize the mixed-mode interactions of the HEA HyperCel™ and PPA HyperCel™ chromatographic supports. We show here that the adsorption behaviour of lysozyme on HEA and PPA HyperCel™ mixed-mode sorbents varies depending on the degree of charge modification at the surface of the protein. Experiments using conventional cation exchange and hydrophobic interaction chromatography confirm that both charge and hydrophobicity modification occurs at the surface of the protein after lysozyme reaction with phenylglyoxal. The results of this work using HEA and PPA HyperCel sorbents strongly suggest that mixed-mode chromatography can efficiently separate closely related proteins with only minor surface charge and/or hydrophobicity differences. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. ADMultiImg: a novel missing modality transfer learning based CAD system for diagnosis of MCI due to AD using incomplete multi-modality imaging data

    NASA Astrophysics Data System (ADS)

    Liu, Xiaonan; Chen, Kewei; Wu, Teresa; Weidman, David; Lure, Fleming; Li, Jing

    2018-02-01

    Alzheimer's Disease (AD) is the most common cause of dementia and currently has no cure. Treatments targeting early stages of AD such as Mild Cognitive Impairment (MCI) may be most effective at decelerating AD, and are thus attracting increasing attention. However, MCI has substantial heterogeneity in that it can be caused by various underlying conditions, not only AD. To detect MCI due to AD, the NIA-AA published updated consensus criteria in 2011, in which the use of multi-modality images was highlighted as one of the most promising methods. It is of great interest to develop a CAD system based on automatic, quantitative analysis of multi-modality images and machine learning algorithms to help physicians more adequately diagnose MCI due to AD. The challenge, however, is that multi-modality images are not universally available for many patients due to cost, access, safety, and lack of consent. We developed a novel Missing Modality Transfer Learning (MMTL) algorithm capable of utilizing whatever imaging modalities are available for an MCI patient to diagnose the patient's likelihood of MCI due to AD. Furthermore, we integrated MMTL with radiomics steps including image processing, feature extraction, and feature screening, and a post-processing step for uncertainty quantification (UQ), and developed a CAD system called "ADMultiImg" to assist clinical diagnosis of MCI due to AD using multi-modality images together with patient demographic and genetic information. Tested on ADNI data, our system can generate a diagnosis with high accuracy even for patients with only partially available image modalities (AUC = 0.94), and therefore may have broad clinical utility.
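
    The reported AUC is a ranking statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. It can be computed without any library by counting concordant pairs; a minimal sketch with invented scores and labels (not ADNI data):

```python
# Invented classifier scores and ground-truth labels (1 = MCI due to AD)
scores = [0.95, 0.80, 0.75, 0.60, 0.55, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    0,    1,    0   ]

pos = [s for s, y in zip(scores, labels) if y == 1]
neg = [s for s, y in zip(scores, labels) if y == 0]

# Count positive/negative pairs ranked correctly (ties count half)
wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
auc = wins / (len(pos) * len(neg))
print(f"AUC = {auc:.2f}")
```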

  3. Multimodal imaging of ischemic wounds

    NASA Astrophysics Data System (ADS)

    Zhang, Shiwu; Gnyawali, Surya; Huang, Jiwei; Liu, Peng; Gordillo, Gayle; Sen, Chandan K.; Xu, Ronald

    2012-12-01

    The wound healing process involves the reparative phases of inflammation, proliferation, and remodeling. Interrupting any of these phases may result in chronically unhealed wounds, amputation, or even patient death. Quantitative assessment of wound tissue ischemia, perfusion, and inflammation provides critical information for appropriate detection, staging, and treatment of chronic wounds. However, no method is available for noninvasive, simultaneous, and quantitative imaging of these tissue parameters. We integrated hyperspectral, laser speckle, and thermographic imaging modalities into a single setup for multimodal assessment of tissue oxygenation, perfusion, and inflammation characteristics. Advanced algorithms were developed for accurate reconstruction of wound oxygenation and appropriate co-registration between the different imaging modalities. The multimodal wound imaging system was validated in an ongoing clinical trial approved by the OSU IRB, in which a wound 3 mm in diameter was introduced on a healthy subject's lower extremity and the healing process was serially monitored by the multimodal imaging setup. Our experiments demonstrated the clinical usability of multimodal wound imaging.

  4. Multi-Modality Phantom Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, Jennifer S.; Peng, Qiyu; Moses, William W.

    2009-03-20

    Multi-modality imaging has an increasing role in the diagnosis and treatment of a large number of diseases, particularly if both functional and anatomical information are acquired and accurately co-registered. Hence, there is a resulting need for multi-modality phantoms in order to validate image co-registration and calibrate the imaging systems. We present our PET-ultrasound phantom development, including PET and ultrasound images of a simple prostate phantom. We use agar and gelatin mixed with a radioactive solution. We also present our development of custom multi-modality phantoms that are compatible with PET, transrectal ultrasound (TRUS), MRI and CT imaging. We describe both our selection of tissue-mimicking materials and phantom construction procedures. These custom PET-TRUS-CT-MRI prostate phantoms use agar-gelatin radioactive mixtures with additional contrast agents and preservatives. We show multi-modality images of these custom prostate phantoms, as well as discuss phantom construction alternatives. Although we are currently focused on prostate imaging, this phantom development is applicable to many multi-modality imaging applications.

  5. Failure Analysis of a Complex Learning Framework Incorporating Multi-Modal and Semi-Supervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pullum, Laura L; Symons, Christopher T

    2011-01-01

    Machine learning is used in many applications, from machine vision to speech recognition to decision support systems, and is used to test applications. However, though much has been done to evaluate the performance of machine learning algorithms, little has been done to verify the algorithms or examine their failure modes. Moreover, complex learning frameworks often require stepping beyond black box evaluation to distinguish between errors based on natural limits on learning and errors that arise from mistakes in implementation. We present a conceptual architecture, failure model and taxonomy, and failure modes and effects analysis (FMEA) of a semi-supervised, multi-modal learning system, and provide specific examples from its use in a radiological analysis assistant system. The goal of the research described in this paper is to provide a foundation from which dependability analysis of systems using semi-supervised, multi-modal learning can be conducted. The methods presented provide a first step towards that overall goal.

  6. Theoretical analysis of the performance of code division multiple access communications over multimode optical fiber channels. Part 1: Transmission and detection

    NASA Astrophysics Data System (ADS)

    Walker, Ernest L.

    1994-05-01

    This paper presents results of a theoretical investigation to evaluate the performance of code division multiple access communications over multimode optical fiber channels in an asynchronous, multiuser communication network environment. The system is evaluated using Gold sequences for spectral spreading of the baseband signal from each user employing direct-sequence biphase shift keying and intensity modulation techniques. The transmission channel model employed is a lossless linear system approximation of the field transfer function for the α-profile multimode optical fiber. Due to channel model complexity, a correlation receiver model employing a suboptimal receive filter was used in calculating the peak output signal at the ith receiver. In Part 1, the performance measures for the system, i.e., signal-to-noise ratio and bit error probability for the ith receiver, are derived as functions of channel characteristics, spectral spreading, number of active users, and the bit energy to noise (white) spectral density ratio. In Part 2, the overall system performance is evaluated.
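
    The direct-sequence spreading and correlation reception described above can be sketched in a few lines; a short ±1 chip sequence stands in here for the paper's Gold sequences, and the channel is taken as ideal (no fiber dispersion or multiuser interference):

```python
code = [1, 1, 1, -1, -1, 1, -1]           # illustrative +/-1 chip sequence
bits = [1, -1, 1, 1, -1]                  # data bits in +/-1 form

# Spread: transmit one full code period per data bit
tx = [b * c for b in bits for c in code]

# Despread: correlate each code-length segment against the same code
L = len(code)
rx = [1 if sum(tx[i * L + j] * code[j] for j in range(L)) > 0 else -1
      for i in range(len(bits))]
print(rx == bits)
```

    In the multiuser setting each user gets a different code with low cross-correlation (the role of Gold sequences), so the correlator suppresses other users' spread signals.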

  7. Development of a brain monitoring system for multimodality investigation in awake rats.

    PubMed

    Limnuson, Kanokwan; Narayan, Raj K; Chiluwal, Amrit; Bouton, Chad; Wang, Ping; Li, Chunyan

    2016-08-01

    Multimodal brain monitoring is an important approach to gain insight into brain function, modulation, and pathology. We have developed a unique micromachined neural probe capable of real-time continuous monitoring of multiple physiological, biochemical and electrophysiological variables. However, to date, it has only been used in anesthetized animals due to a lack of an appropriate interface for awake animals. We have developed a versatile headstage for recording the small neural signal and bridging the sensors to the remote sensing units for multimodal brain monitoring in awake rats. The developed system has been successfully validated in awake rats by simultaneously measuring four cerebral variables: electrocorticography, oxygen tension, temperature and cerebral blood flow. Reliable signal recordings were obtained with minimal artifacts from movement and environmental noise. For the first time, multiple variables of cerebral function and metabolism were simultaneously recorded from awake rats using a single neural probe. The system is envisioned for studying the effects of pharmacologic treatments, mapping the development of central nervous system diseases, and better understanding normal cerebral physiology.

  8. Wind- and Rain-Induced Vibrations Impose Different Selection Pressures on Multimodal Signaling.

    PubMed

    Halfwerk, Wouter; Ryan, Michael J; Wilson, Preston S

    2016-09-01

    The world is a noisy place, and animals have evolved a myriad of strategies to communicate in it. Animal communication signals are, however, often multimodal; their components can be processed by multiple sensory systems, and noise can thus affect signal components across different modalities. We studied the effect of environmental noise on multimodal communication in the túngara frog (Physalaemus pustulosus). Males communicate with rivals using airborne sounds combined with call-induced water ripples. We tested males under control as well as noisy conditions in which we mimicked rain- and wind-induced vibrations on the water surface. Males responded more strongly to a multimodal playback in which sound and ripples were combined, compared to a unimodal sound-only playback, but only in the absence of rain and wind. Under windy conditions, males decreased their response to the multimodal playback, suggesting that wind noise interferes with the detection of rival ripples. Under rainy conditions, males increased their response, irrespective of signal playback, suggesting that different noise sources can have different impacts on communication. Our findings show that noise in an additional sensory channel can affect multimodal signal perception and thereby drive signal evolution, but not always in the expected direction.

  9. 4-wave dynamics in kinetic wave turbulence

    NASA Astrophysics Data System (ADS)

    Chibbaro, Sergio; Dematteis, Giovanni; Rondoni, Lamberto

    2018-01-01

    A general Hamiltonian wave system with quartic resonances is considered, in the standard kinetic limit of a continuum of weakly interacting dispersive waves with random phases. The evolution equation for the multimode characteristic function Z is obtained within an "interaction representation" and a perturbation expansion in the small nonlinearity parameter. A frequency renormalization is performed to remove linear terms that do not appear in the 3-wave case. Feynman-Wyld diagrams are used to average over phases, leading to a first-order differential evolution equation for Z. A hierarchy of equations, analogous to the Boltzmann hierarchy for low-density gases, is derived, which preserves in time the property of random phases and amplitudes. This amounts to a general formalism for both the N-mode and the 1-mode PDF equations for 4-wave turbulent systems, suitable for numerical simulations and for investigating intermittency. Some of the main results which are developed here in detail have been tested numerically in a recent work.

  10. Demonstration of analyzers for multimode photonic time-bin qubits

    NASA Astrophysics Data System (ADS)

    Jin, Jeongwan; Agne, Sascha; Bourgoin, Jean-Philippe; Zhang, Yanbao; Lütkenhaus, Norbert; Jennewein, Thomas

    2018-04-01

    We demonstrate two approaches for unbalanced interferometers as time-bin qubit analyzers for quantum communication, robust against mode distortions and polarization effects as expected from free-space quantum communication systems, including wavefront deformations, path fluctuations, pointing errors, and optical elements. Despite strong spatial and temporal distortions of the optical mode of a time-bin qubit, entangled with a separate polarization qubit, we verify entanglement using the Negative Partial Transpose criterion, with a measured visibility of up to 0.85 ± 0.01. The robustness of the analyzers is further demonstrated for angles of incidence up to 0.2°. The output of the interferometers is coupled into multimode fiber, yielding a high system throughput of 0.74. Therefore, these analyzers are suitable and efficient for quantum communication over multimode optical channels.
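
    The Negative Partial Transpose criterion can be checked on a simple noise model: a two-qubit Bell state mixed with white noise at visibility V (an illustrative stand-in, not the experiment's measured density matrix). A negative eigenvalue of the partially transposed density matrix certifies entanglement:

```python
import numpy as np

V = 0.85                                                 # visibility from the abstract
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)           # Bell state |00> + |11>
rho = V * np.outer(phi_plus, phi_plus) + (1 - V) * np.eye(4) / 4

# Partial transpose over the second qubit: transpose each 2x2 block
pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

min_eig = np.linalg.eigvalsh(pt).min()
print(f"min eigenvalue = {min_eig:.4f}, entangled: {min_eig < 0}")
```

    For this noise model the minimum eigenvalue is (1 - 3V)/4, so any visibility above 1/3 certifies entanglement; V = 0.85 gives a clearly negative value.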

  11. Development of ClearPEM-Sonic, a multimodal mammography system for PET and Ultrasound

    NASA Astrophysics Data System (ADS)

    Cucciati, G.; Auffray, E.; Bugalho, R.; Cao, L.; Di Vara, N.; Farina, F.; Felix, N.; Frisch, B.; Ghezzi, A.; Juhan, V.; Jun, D.; Lasaygues, P.; Lecoq, P.; Mensah, S.; Mundler, O.; Neves, J.; Paganoni, M.; Peter, J.; Pizzichemi, M.; Siles, P.; Silva, J. C.; Silva, R.; Tavernier, S.; Tessonnier, L.; Varela, J.

    2014-03-01

    ClearPEM-Sonic is an innovative imaging device specifically developed for breast cancer. Working in PEM-ultrasound multimodality yields both metabolic and morphological information, increasing the specificity of the exam. The ClearPEM detector is developed to maximize sensitivity and spatial resolution compared with whole-body PET scanners. It is coupled with a 3D ultrasound system, the SuperSonic Imagine Aixplorer, which improves the specificity of the exam by providing a tissue elasticity map. This work describes the ClearPEM-Sonic project, focusing on the technological developments it has required, its technical merits (and limits), and the first multimodal images acquired on a dedicated phantom. It finally presents selected clinical case studies that confirm the value of PEM information.

  12. Berlin Kompass: Multimodal Gameful Empowerment for Foreign Language Learning

    ERIC Educational Resources Information Center

    Kallioniemi, Pekka; Posti, Laura-Pihkala; Hakulinen, Jaakko; Turunen, Markku; Keskinen, Tuuli; Raisamo, Roope

    2015-01-01

    This article presents an innovative, gameful, multimodal, and authentic learning environment for training of oral communication in a foreign language--a virtual adventure called Berlin Kompass. After a brief presentation of the pedagogical and technological backgrounds, the system is described. Central results of a series of pilots in autumn 2013…

  13. Many Choices, One Destination: Multimodal University Brand Construction in an Urban Public Transportation System

    ERIC Educational Resources Information Center

    Blanco Ramírez, Gerardo

    2016-01-01

    Amidst global competition in higher education, colleges and universities adopt strategies that mimic and adapt business practices. Branding is now a widespread practice in higher education; multimodal advertisement is a manifestation of emerging branding strategies for universities. While the visibility of brands in higher education has grown…

  14. Student's Uncertainty Modeling through a Multimodal Sensor-Based Approach

    ERIC Educational Resources Information Center

    Jraidi, Imene; Frasson, Claude

    2013-01-01

Detecting the student's internal state during learning is a key concern in educational environments, and particularly in Intelligent Tutoring Systems (ITS). Students' uncertainty is of primary interest as it is deeply rooted in the process of knowledge construction. In this paper we propose a new sensor-based multimodal approach to model…

  15. TDRSS multimode transponder program S-band modification

    NASA Technical Reports Server (NTRS)

    Mackey, J. E.

    1975-01-01

The S-band TDRS multimode transponder and its associated ground support equipment are described. The transponder demonstrates candidate modulation techniques to provide the information required for the design of an eventual S-band transponder suitable for installation in a user satellite and capable of operating as part of a Tracking and Data Relay Satellite (TDRS) system.

  16. Automatic Rejection Of Multimode Laser Pulses

    NASA Technical Reports Server (NTRS)

    Tratt, David M.; Menzies, Robert T.; Esproles, Carlos

    1991-01-01

Characteristic modulation detected, enabling rejection of multimode signals. Monitoring circuit senses multiple-longitudinal-mode oscillation of transversely excited, atmospheric-pressure (TEA) CO2 laser. Facility developed for inclusion in coherent detection laser radar (LIDAR) system. However, circuit described is of use in any experiment where it is desirable to record data only when laser operates in a single longitudinal mode.

  17. A Standardization Evaluation Potential Study of the Common Multi-Mode Radar Program.

    DTIC Science & Technology

    1979-11-01

…Radar, the RX (RF-16 etc.), Enhanced Tactical Fighter (ETF), and A-7. Candidate radar systems applicable to the Common Multi-Mode Radar Program…

  18. Feedforward Equalizers for MDM-WDM in Multimode Fiber Interconnects

    NASA Astrophysics Data System (ADS)

    Masunda, Tendai; Amphawan, Angela

    2018-04-01

In this paper, we present new tap configurations of a feedforward equalizer to mitigate mode coupling in a 60-Gbps, 18-channel mode-wavelength division multiplexing system over a 2.5-km-long multimode fiber. The performance of the equalization is assessed through analyses of eye diagrams, power coupling coefficients, and bit-error rates.
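A feedforward equalizer of the kind described is, at its core, an FIR filter whose taps are chosen to undo the channel's dispersive response. A minimal sketch, with a toy one-echo channel and tap values chosen for illustration only (none of the numbers below are from the paper):

```python
# Toy sketch of a feedforward (FIR) equalizer: the received sequence is
# filtered with tap weights chosen to cancel a simple one-echo
# inter-symbol-interference channel. All values are illustrative.

def channel(symbols, h=(1.0, 0.5)):
    """Channel with one post-cursor echo: y[n] = x[n] + 0.5*x[n-1]."""
    out, prev = [], 0.0
    for x in symbols:
        out.append(h[0] * x + h[1] * prev)
        prev = x
    return out

def ffe(received, taps):
    """Feedforward equalizer: FIR filter over the received samples."""
    out, history = [], [0.0] * len(taps)
    for r in received:
        history = [r] + history[:-1]          # shift in the new sample
        out.append(sum(t * h for t, h in zip(taps, history)))
    return out

tx = [1, -1, 1, 1, -1]
rx = channel(tx)
# Inverse of (1 + 0.5 z^-1), truncated to 4 taps: 1, -0.5, 0.25, -0.125.
eq = ffe(rx, taps=[1.0, -0.5, 0.25, -0.125])
decided = [1 if v > 0 else -1 for v in eq]    # slicer
print(decided)  # recovers the transmitted symbols
```

The truncated-inverse taps leave only a small residual error, which the sign slicer removes; in the paper's MDM-WDM setting the taps would instead be adapted to the measured mode-coupling response.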

  19. A Multimodal Neural Network Recruited by Expertise with Musical Notation

    ERIC Educational Resources Information Center

    Wong, Yetta Kwailing; Gauthier, Isabel

    2010-01-01

    Prior neuroimaging work on visual perceptual expertise has focused on changes in the visual system, ignoring possible effects of acquiring expert visual skills in nonvisual areas. We investigated expertise for reading musical notation, a skill likely to be associated with multimodal abilities. We compared brain activity in music-reading experts…

  20. Concept for Classifying Facade Elements Based on Material, Geometry and Thermal Radiation Using Multimodal Uav Remote Sensing

    NASA Astrophysics Data System (ADS)

    Ilehag, R.; Schenk, A.; Hinz, S.

    2017-08-01

This paper presents a concept for classifying facade elements based on the material and geometry of the elements, in addition to the thermal radiation of the facade, using a multimodal Unmanned Aerial Vehicle (UAV) system. Once the concept is finalized and functional, the workflow can be used to estimate the energy demand of buildings by exploiting existing methods for estimating the heat transfer coefficient and the transmitted heat loss. The multimodal system consists of a thermal, a hyperspectral, and an optical sensor, which can be operated from a UAV. The challenges of working with sensors that operate in different spectral ranges and have different technical specifications, such as radiometric and geometric resolution, are presented. Addressed are the different approaches to data fusion, such as image registration, the generation of 3D models by image matching, and the means of classification based on either the geometry of the object or the pixel values. As a first step towards realizing the concept, the result of a geometric calibration with a purpose-designed multimodal calibration pattern is presented.

  1. Optically pre-amplified lidar-radar

    NASA Astrophysics Data System (ADS)

    Morvan, Loic; Dolfi, Daniel; Huignard, Jean-Pierre

    2001-09-01

We present the concept of an optically pre-amplified intensity-modulated lidar, where the modulation frequency is in the microwave domain (1-10 GHz). Such a system combines the directivity of laser beams with mature radar processing. As an intensity-modulated or dual-frequency laser beam is directed onto a target, the backscattered intensity is collected by an optical system, passes through an optical preamplifier, and is detected on a high-speed photodiode in a direct-detection scheme. Radar-type processing then extracts range, speed, and identification information. The association of a spatially multimode amplifier and direct detection allows low sensitivity to atmospheric turbulence and a large field of view. We demonstrated theoretically that optical pre-amplification can greatly enhance sensitivity, even in spatially multimode amplifiers such as free-space amplifiers or multimode doped fibers. Computed range estimates based on this concept are presented. Laboratory demonstrations using 1 to 3 GHz modulated laser sources and >20 dB gain in multimode amplifiers are detailed. Preliminary experimental results on range and speed measurements and possible use for large-amplitude vibrometry will be presented.

  2. Data Processing And Machine Learning Methods For Multi-Modal Operator State Classification Systems

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan A.

    2015-01-01

This document is intended as an introduction to a set of common signal processing and machine learning methods that may be used in the software portion of a functional crew state monitoring system. It includes overviews of the theory of the methods involved, as well as examples of implementation. Practical considerations are discussed for implementing modular, flexible, and scalable processing and classification software for a multi-modal, multi-channel monitoring system. Example source code is also given for all of the discussed processing and classification methods.
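The modular pipeline the report describes can be pictured as per-channel feature extraction feeding a pluggable classifier. A minimal sketch under assumed names (the extractor, states, and nearest-centroid rule below are illustrative, not the report's actual code):

```python
# Minimal sketch of a modular multi-channel classification pipeline:
# each channel passes through a feature extractor, features are
# concatenated, and a swappable classifier stage makes the final
# operator-state decision. All names and numbers are hypothetical.

from statistics import mean, pstdev

def channel_features(samples):
    """Toy per-channel extractor: mean level and spread of the signal."""
    return [mean(samples), pstdev(samples)]

def extract(channels):
    """Apply the extractor to every channel; concatenate the features."""
    feats = []
    for samples in channels:
        feats.extend(channel_features(samples))
    return feats

def nearest_centroid(feats, states):
    """Pluggable classifier stage: here, a nearest-centroid rule."""
    def dist(s):
        return sum((a - b) ** 2 for a, b in zip(feats, s["centroid"]))
    return min(states, key=dist)["label"]

# Two EEG-like channels and two hypothetical operator states.
channels = [[0.1, 0.2, 0.1, 0.3], [1.0, 1.1, 0.9, 1.2]]
states = [
    {"label": "nominal",  "centroid": [0.2, 0.1, 1.0, 0.1]},
    {"label": "fatigued", "centroid": [0.9, 0.4, 0.2, 0.3]},
]
print(nearest_centroid(extract(channels), states))
```

Because the extractor and classifier are separate functions, either stage can be replaced (e.g. spectral features, or a trained model) without touching the rest of the pipeline, which is the modularity the document emphasizes.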

  3. A new piezoelectric energy harvesting design concept: multimodal energy harvesting skin.

    PubMed

    Lee, Soobum; Youn, Byeng D

    2011-03-01

This paper presents an advanced design concept for piezoelectric energy harvesting (EH), referred to as multimodal EH skin. This EH design facilitates the use of multimodal vibration and enhances power harvesting efficiency. The multimodal EH skin is an extension of our previous work, EH skin, an innovative design paradigm for a piezoelectric energy harvester: a vibrating skin structure and an additional thin piezoelectric layer in one device. A computational (finite element) model of the multilayered assembly - the vibrating skin structure and piezoelectric layer - is constructed, and the optimal topology and/or shape of the piezoelectric layer is found for maximum power generation from multiple vibration modes. A two-step design rationale for the multimodal EH skin is proposed: designing the piezoelectric material distribution and the external resistors. In the material design step, the piezoelectric material is segmented by inflection lines from the multiple vibration modes of interest to minimize voltage cancellation; the inflection lines are detected using the voltage phase. In the external resistor design step, resistor values are found for each segment to maximize power output. The presented design concept, which can be applied to any engineering system with multimodal harmonically vibrating skins, was applied to two case studies: an aircraft skin and a power transformer panel. The multimodal EH skin showed excellent performance, generating more power than EH skin without segmentation or unimodal EH skin.

  4. New medical workstation for multimodality communication systems

    NASA Astrophysics Data System (ADS)

    Kotsopoulos, Stavros A.; Lymberopoulos, Dimitris C.

    1993-07-01

The introduction of special teleworking and advanced remote expert consultation procedures into modern multimodality medical communication systems has effectively changed the way physicians handle synchronous and asynchronous patient cases. The common denominator in developing these procedures is the use of specially designated Medical Workstations (MWS). The present paper deals with the implementation of an MWS that enables physicians to handle multimedia data efficiently in an ISDN communication environment.

  5. Simultaneous neural and movement recording in large-scale immersive virtual environments.

    PubMed

    Snider, Joseph; Plank, Markus; Lee, Dongpyo; Poizner, Howard

    2013-10-01

Virtual reality (VR) allows precise control and manipulation of rich, dynamic stimuli that, when coupled with on-line motion capture and neural monitoring, can provide a powerful means both of understanding brain-behavioral relations in the high-dimensional world and of assessing and treating a variety of neural disorders. Here we present a system that combines state-of-the-art, fully immersive, 3D, multi-modal VR with temporally aligned electroencephalographic (EEG) recordings. The VR system is dynamic and interactive across visual, auditory, and haptic interactions, providing sight, sound, touch, and force. Crucially, it does so with simultaneous EEG recordings while subjects actively move about a 20 × 20 ft² space. The overall end-to-end latency between real movement and its simulated movement in the VR is approximately 40 ms. Spatial precision of the various devices is on the order of millimeters. The temporal alignment with the neural recordings is accurate to within approximately 1 ms. This powerful combination of systems opens up a new window into brain-behavioral relations and a new means of assessment and rehabilitation of individuals with motor and other disorders.

  6. A Passive Learning Sensor Architecture for Multimodal Image Labeling: An Application for Social Robots.

    PubMed

    Gutiérrez, Marco A; Manso, Luis J; Pandya, Harit; Núñez, Pedro

    2017-02-11

    Object detection and classification have countless applications in human-robot interacting systems. It is a necessary skill for autonomous robots that perform tasks in household scenarios. Despite the great advances in deep learning and computer vision, social robots performing non-trivial tasks usually spend most of their time finding and modeling objects. Working in real scenarios means dealing with constant environment changes and relatively low-quality sensor data due to the distance at which objects are often found. Ambient intelligence systems equipped with different sensors can also benefit from the ability to find objects, enabling them to inform humans about their location. For these applications to succeed, systems need to detect the objects that may potentially contain other objects, working with relatively low-resolution sensor data. A passive learning architecture for sensors has been designed in order to take advantage of multimodal information, obtained using an RGB-D camera and trained semantic language models. The main contribution of the architecture lies in the improvement of the performance of the sensor under conditions of low resolution and high light variations using a combination of image labeling and word semantics. The tests performed on each of the stages of the architecture compare this solution with current research labeling techniques for the application of an autonomous social robot working in an apartment. The results obtained demonstrate that the proposed sensor architecture outperforms state-of-the-art approaches.

  7. Hand hygiene and healthcare system change within multi-modal promotion: a narrative review.

    PubMed

    Allegranzi, B; Sax, H; Pittet, D

    2013-02-01

    Many factors may influence the level of compliance with hand hygiene recommendations by healthcare workers. Lack of products and facilities as well as their inappropriate and non-ergonomic location represent important barriers. Targeted actions aimed at making hand hygiene practices feasible during healthcare delivery by ensuring that the necessary infrastructure is in place, defined as 'system change', are essential to improve hand hygiene in healthcare. In particular, access to alcohol-based hand rubs (AHRs) enables appropriate and timely hand hygiene performance at the point of care. The feasibility and impact of system change within multi-modal strategies have been demonstrated both at institutional level and on a large scale. The introduction of AHRs overcomes some important barriers to best hand hygiene practices and is associated with higher compliance, especially when integrated within multi-modal strategies. Several studies demonstrated the association between AHR consumption and reduction in healthcare-associated infection, in particular, meticillin-resistant Staphylococcus aureus bacteraemia. Recent reports demonstrate the feasibility and success of system change implementation on a large scale. The World Health Organization and other investigators have reported the challenges and encouraging results of implementing hand hygiene improvement strategies, including AHR introduction, in settings with limited resources. This review summarizes the available evidence demonstrating the need for system change and its importance within multi-modal hand hygiene improvement strategies. This topic is also discussed in a global perspective and highlights some controversial issues. Copyright © 2013 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  8. The Artistic Infant Directed Performance: A Microanalysis of the Adult's Movements and Sounds.

    PubMed

    Español, Silvia; Shifres, Favio

    2015-09-01

Intersubjectivity experiences established between adults and infants are partially determined by the particular ways in which adults are active in front of babies. A substantial body of research focuses on the "musicality" of infant-directed speech (defined melodic contours, tonal and rhythmic variations, etc.) and its role in linguistic enculturation. However, researchers have recently suggested that adults also offer infants a multimodal performance. Accordingly, some scholars find indicators of the genesis of the performing arts (mainly music and dance) in such multimodal stimulation. We analyze the adult performance using analytical categories and methodologies of analysis broadly validated in the fields of music performance and movement analysis in contemporary dance. We present microanalyses of an interaction scene between an adult and a 7-month-old infant that evidence structural aspects of infant-directed multimodal performance compatible with music and dance structures, and suggest functions of the adult performance similar or related to those of the performing arts.

  9. Revealing Spatial Variation and Correlation of Urban Travels from Big Trajectory Data

    NASA Astrophysics Data System (ADS)

    Li, X.; Tu, W.; Shen, S.; Yue, Y.; Luo, N.; Li, Q.

    2017-09-01

With the development of information and communication technology, spatial-temporal data containing rich human mobility information are growing rapidly. However, the consistency of multi-mode human travel behind multi-source spatial-temporal data is not clear. To this end, we utilized a week of taxi and bus GPS trajectory data and smart card data from Shenzhen, China to extract city-wide travel information for taxi, bus, and metro, and tested the correlation of multi-mode travel characteristics. Both the global and local correlations of typical travel indicators were examined. The results show that: (1) significant differences exist among urban multi-mode travels; the correlations between bus and taxi travels and between metro and taxi travels are globally low but locally high; (2) there are spatial differences in the correlation relationships among bus, metro, and taxi travels. These findings help us understand urban travel more deeply and therefore facilitate both transport policy making and human-space interaction research.
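The global correlation test described here reduces to computing a correlation coefficient between two travel-indicator series measured over the same spatial units. A minimal sketch with made-up per-zone trip counts (the numbers are illustrative, not the study's data):

```python
# Minimal sketch of a global correlation test between two travel
# indicators (e.g. taxi vs. bus trip counts per zone), using the
# Pearson correlation coefficient. All counts are made up.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

taxi_trips = [120, 80, 200, 150, 60]   # trips per zone (hypothetical)
bus_trips = [300, 250, 310, 280, 240]  # trips per zone (hypothetical)

r = pearson(taxi_trips, bus_trips)
print(round(r, 3))
```

The "local" analysis in the study differs only in scope: the same statistic is evaluated within spatial neighbourhoods rather than over the whole city, which is how a globally low but locally high correlation can arise.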

  10. Detection of elemental mercury by multimode diode laser correlation spectroscopy.

    PubMed

    Lou, Xiutao; Somesfalean, Gabriel; Svanberg, Sune; Zhang, Zhiguo; Wu, Shaohua

    2012-02-27

    We demonstrate a method for elemental mercury detection based on correlation spectroscopy employing UV laser radiation generated by sum-frequency mixing of two visible multimode diode lasers. Resonance matching of the multimode UV laser is achieved in a wide wavelength range and with good tolerance for various operating conditions. Large mode-hops provide an off-resonance baseline, eliminating interferences from other gas species with broadband absorption. A sensitivity of 1 μg/m3 is obtained for a 1-m path length and 30-s integration time. The performance of the system shows promise for mercury monitoring in industrial applications.

  11. The expert surgical assistant. An intelligent virtual environment with multimodal input.

    PubMed

    Billinghurst, M; Savage, J; Oppenheimer, P; Edmond, C

    1996-01-01

    Virtual Reality has made computer interfaces more intuitive but not more intelligent. This paper shows how an expert system can be coupled with multimodal input in a virtual environment to provide an intelligent simulation tool or surgical assistant. This is accomplished in three steps. First, voice and gestural input is interpreted and represented in a common semantic form. Second, a rule-based expert system is used to infer context and user actions from this semantic representation. Finally, the inferred user actions are matched against steps in a surgical procedure to monitor the user's progress and provide automatic feedback. In addition, the system can respond immediately to multimodal commands for navigational assistance and/or identification of critical anatomical structures. To show how these methods are used we present a prototype sinus surgery interface. The approach described here may easily be extended to a wide variety of medical and non-medical training applications by making simple changes to the expert system database and virtual environment models. Successful implementation of an expert system in both simulated and real surgery has enormous potential for the surgeon both in training and clinical practice.

  12. Multimodal digital color imaging system for facial skin lesion analysis

    NASA Astrophysics Data System (ADS)

    Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo

    2008-02-01

In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment effect on skin lesions. Cross-polarization color imaging has been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization imaging to evaluate skin texture information. In addition, UV-A-induced fluorescence imaging has been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. To maximize the efficacy of evaluating various skin lesions, it is necessary to integrate these imaging modalities into a single imaging system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A-induced fluorescence color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to evaluate various skin lesions comparably and simultaneously. In conclusion, the multimodal color imaging system can serve as an important assistive tool in dermatology.

  13. Integrable pair-transition-coupled nonlinear Schrödinger equations.

    PubMed

    Ling, Liming; Zhao, Li-Chen

    2015-08-01

    We study integrable coupled nonlinear Schrödinger equations with pair particle transition between components. Based on exact solutions of the coupled model with attractive or repulsive interaction, we predict that some new dynamics of nonlinear excitations can exist, such as the striking transition dynamics of breathers, new excitation patterns for rogue waves, topological kink excitations, and other new stable excitation structures. In particular, we find that nonlinear wave solutions of this coupled system can be written as a linear superposition of solutions for the simplest scalar nonlinear Schrödinger equation. Possibilities to observe them are discussed in a cigar-shaped Bose-Einstein condensate with two hyperfine states. The results would enrich our knowledge on nonlinear excitations in many coupled nonlinear systems with transition coupling effects, such as multimode nonlinear fibers, coupled waveguides, and a multicomponent Bose-Einstein condensate system.

  14. A study of multibiometric traits of identical twins

    NASA Astrophysics Data System (ADS)

    Sun, Zhenan; Paulino, Alessandra A.; Feng, Jianjiang; Chai, Zhenhua; Tan, Tieniu; Jain, Anil K.

    2010-04-01

The increase in twin births has created a requirement for biometric systems that can accurately determine the identity of a person who has an identical twin. The discriminability of some identical-twin biometric traits, such as fingerprints, iris, and palmprints, is supported by the anatomy and formation process of the biometric characteristic, which suggest that these traits differ even in identical twins due to a number of random factors during the gestation period. For the first time, we collected multiple biometric traits (fingerprint, face, and iris) from 66 families of twins, and we performed unimodal and multimodal matching experiments to assess the ability of biometric systems to distinguish identical twins. Our experiments show that unimodal fingerprint biometric systems can distinguish two different persons who are not identical twins better than they can distinguish identical twins; this difference is much larger in the face biometric system and is not significant in the iris biometric system. Multimodal biometric systems that combine different units of the same biometric modality (e.g., multiple fingerprints, or left and right irises) show the best performance among all the unimodal and multimodal biometric systems, achieving an almost perfect separation between genuine and impostor distributions.

  15. Joint sparse representation for robust multimodal biometrics recognition.

    PubMed

    Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama

    2014-01-01

    Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing the identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weigh each modality as it gets fused. Furthermore, we also kernelize the algorithm to handle nonlinearity in data. The optimization problem is solved using an efficient alternative direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.
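The core idea of sparse-representation classification can be sketched very simply: approximate a test sample as a sparse combination of training samples and assign the class whose samples explain it best. The toy below (not the authors' code) stands in a single greedy, 1-sparse matching step for the full l1-minimisation, and the feature vectors are hypothetical:

```python
# Illustrative sketch of sparse-representation classification: pick the
# training sample whose (normalised) direction best matches the test
# sample, and return its class label. A single greedy step stands in
# for the full sparse optimisation; all data are hypothetical.

from math import sqrt

def normalise(v):
    n = sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def correlate(u, v):
    return abs(sum(a * b for a, b in zip(u, v)))

def classify(test, training):
    """training: list of (class_label, feature_vector) pairs."""
    t = normalise(test)
    best = max(training, key=lambda cv: correlate(t, normalise(cv[1])))
    return best[0]

# Hypothetical 3-D feature vectors for two enrolled subjects.
gallery = [
    ("subject_A", [1.0, 0.1, 0.0]),
    ("subject_A", [0.9, 0.2, 0.1]),
    ("subject_B", [0.0, 1.0, 0.9]),
]
print(classify([0.95, 0.15, 0.05], gallery))
```

The paper's contribution goes further: the sparse codes of the different modalities are constrained to share a common support (joint sparsity), and a quality measure weights each modality at fusion time, neither of which this single-modality toy attempts.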

  16. Multimodality Image Fusion-Guided Procedures: Technique, Accuracy, and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abi-Jaoudeh, Nadine, E-mail: naj@mail.nih.gov; Kruecker, Jochen, E-mail: jochen.kruecker@philips.com; Kadoury, Samuel, E-mail: samuel.kadoury@polymtl.ca

    2012-10-15

Personalized therapies play an increasingly critical role in cancer care: Image guidance with multimodality image fusion facilitates the targeting of specific tissue for tissue characterization and plays a role in drug discovery and optimization of tailored therapies. Positron-emission tomography (PET), magnetic resonance imaging (MRI), and contrast-enhanced computed tomography (CT) may offer additional information not otherwise available to the operator during minimally invasive image-guided procedures, such as biopsy and ablation. With use of multimodality image fusion for image-guided interventions, navigation with advanced modalities does not require the physical presence of the PET, MRI, or CT imaging system. Several commercially available methods of image fusion and device navigation are reviewed, along with an explanation of common tracking hardware and software. An overview of current clinical applications for multimodality navigation is provided.

  17. Sustaining Multimodal Language Learner Interactions Online

    ERIC Educational Resources Information Center

    Satar, H. Müge

    2015-01-01

    Social presence is considered an important quality in computer-mediated communication as it promotes willingness in learners to take risks through participation in interpersonal exchanges (Kehrwald, 2008) and makes communication more natural (Lowenthal, 2010). While social presence has mostly been investigated through questionnaire data and…

  18. Towards a Computational Model of Sketching

    DTIC Science & Technology

    2000-01-01

…interaction that sketching provides in human-to-human communication, multimodal research will rely heavily upon, and even drive, AI research… Dimensions of sketching: The power of sketching in human communication arises from the high bandwidth it provides [21]. There is high perceptual…

  19. Interactive visualization and analysis of multimodal datasets for surgical applications.

    PubMed

    Kirmizibayrak, Can; Yim, Yeny; Wakid, Mike; Hahn, James

    2012-12-01

    Surgeons use information from multiple sources when making surgical decisions. These include volumetric datasets (such as CT, PET, MRI, and their variants), 2D datasets (such as endoscopic videos), and vector-valued datasets (such as computer simulations). Presenting all the information to the user in an effective manner is a challenging problem. In this paper, we present a visualization approach that displays the information from various sources in a single coherent view. The system allows the user to explore and manipulate volumetric datasets, display analysis of dataset values in local regions, combine 2D and 3D imaging modalities and display results of vector-based computer simulations. Several interaction methods are discussed: in addition to traditional interfaces including mouse and trackers, gesture-based natural interaction methods are shown to control these visualizations with real-time performance. An example of a medical application (medialization laryngoplasty) is presented to demonstrate how the combination of different modalities can be used in a surgical setting with our approach.

  20. On the origin and removal of interference patterns in coated multimode fibres

    NASA Astrophysics Data System (ADS)

    Padilla Michel, Yazmin; Pulwer, Silvio; Saffari, Pouneh; Ksianzou, Viachaslau; Schrader, Sigurd

    2016-07-01

In this study, we present experimental investigations of interference patterns such as those already reported for the VIMOS-IFU, for which no appropriate explanation has so far been presented. These interference patterns are produced in multimode fibres coated with acrylate or polyimide, which is the preferred coating material for the fibres used in IFUs. Our experiments show that, under specific conditions, cladding modes interact with the coating and produce interference. Our results show that the conditions under which the fibre is held during data acquisition have an impact on the output spectrum. Altering the positioning conditions of the fibre changes the interference pattern; fibres should therefore be carefully manipulated in order to minimise this potential problem and improve the performance of these instruments. Finally, we present a simple way of predicting and modelling this interference from the visible to the near-infrared spectrum. This model can be included in the data reduction pipeline in order to remove the interference patterns. These results should be of interest for the optimisation of the data reduction pipelines of instruments using optical fibres, and will benefit innovations and developments of high-performance fibre systems.

  1. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

    PubMed Central

    Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir

    2017-01-01

    Orthopaedic surgeons are still following the decades old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surface and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659

  2. Logistics, Multimodal, and Shipper Partner 2.0.15 Tools: Guide to Importing Carrier Data Using the Outside Data Import Function 2015 Data Year - United States Version

    EPA Pesticide Factsheets

This document provides guidance for Logistics, Multi-modal, and Shipper partners on how to use outside data collection systems to populate the carrier data and activity sections of the SmartWay tools using an automated method. (EPA publication # EPA-420-B-16-057a)

  3. Synchronization of oscillations in coupled multimode optoelectronic oscillators: bifurcation analysis

    NASA Astrophysics Data System (ADS)

    Balakin, M.; Gulyaev, A.; Kazaryan, A.; Yarovoy, O.

    2018-04-01

We study the influence of time delay in the coupling on the dynamics of two coupled multimode optoelectronic oscillators. We reveal the structure of the main synchronization region on the parameter plane and the main bifurcations leading to synchronization and multistability formation. The dynamics of the system is studied over a wide range of control-parameter values.

  4. Counting statistics of many-particle quantum walks

    NASA Astrophysics Data System (ADS)

    Mayer, Klaus; Tichy, Malte C.; Mintert, Florian; Konrad, Thomas; Buchleitner, Andreas

    2011-06-01

    We study quantum walks of many noninteracting particles on a beam splitter array as a paradigmatic testing ground for the competition of single- and many-particle interference in a multimode system. We derive a general expression for multimode particle-number correlation functions, valid for bosons and fermions, and infer pronounced signatures of many-particle interferences in the counting statistics.
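The boson/fermion contrast in such counting statistics can be illustrated in the simplest setting, two particles meeting at a 50:50 beam splitter, where coincidence amplitudes reduce to the permanent (bosons) or determinant (fermions) of the mode-transformation matrix. This toy calculation is an illustration of that dichotomy, not the general correlation-function expression derived in the paper:

```python
import numpy as np
from itertools import permutations

def permanent(M):
    """Permanent by direct sum over permutations (fine for tiny matrices)."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

# 50:50 beam splitter acting on two modes
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# One particle per input port; "coincidence" = one particle per output port.
p_boson = abs(permanent(U)) ** 2        # bosons bunch: Hong-Ou-Mandel dip
p_fermion = abs(np.linalg.det(U)) ** 2  # fermions anti-bunch (Pauli exclusion)
print(p_boson, p_fermion)  # ≈ 0 and ≈ 1
```

Bosonic amplitudes interfere destructively (coincidences vanish), while fermionic ones are forced apart, which is the single- versus many-particle interference competition the abstract refers to.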

  5. 75 FR 51327 - Bureau of Political-Military Affairs: Directorate of Defense Trade Controls; Notifications to the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-19

    ..., and defense services for the Hughes Air Defense Radar and Air Defense System (HADAR) in Taiwan for the... technical data, and defense services, for the GD-53 Multimode Radar on Taiwan's Indigenous Defensive Fighter... agreement associated with this notification consists of components of the GD-53 Multimode Radar. The end...

  6. Real English: A Translator to Enable Natural Language Man-Machine Conversation.

    ERIC Educational Resources Information Center

    Gautin, Harvey

    This dissertation presents a pragmatic interpreter/translator called Real English to serve as a natural language man-machine communication interface in a multi-mode on-line information retrieval system. This multi-mode feature affords the user a library-like searching tool by giving him access to a dictionary, lexicon, thesaurus, synonym table,…

  7. High performance and highly reliable Raman-based distributed temperature sensors based on correlation-coded OTDR and multimode graded-index fibers

    NASA Astrophysics Data System (ADS)

    Soto, M. A.; Sahu, P. K.; Faralli, S.; Sacchi, G.; Bolognini, G.; Di Pasquale, F.; Nebendahl, B.; Rueck, C.

    2007-07-01

The performance of distributed temperature sensor (DTS) systems based on spontaneous Raman scattering and coded OTDR is investigated. The evaluated DTS system, which is based on correlation coding, uses graded-index multimode fibers, operates over short-to-medium distances (up to 8 km) with high spatial and temperature resolutions (better than 1 m and 0.3 K at 4 km distance with 10 min measuring time), and exhibits high repeatability throughout a wide temperature range.
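For context on the kind of gain such coding exploits: a commonly quoted figure of merit for Simplex/correlation-coded OTDR is an SNR improvement of (L+1)/(2√L) over single-pulse operation for code length L. The numbers below are illustrative, not taken from the paper:

```python
import math

def simplex_coding_gain(L):
    """SNR gain of Simplex-coded over single-pulse OTDR (linear factor), L = code length."""
    return (L + 1) / (2 * math.sqrt(L))

for L in (7, 63, 255):
    g = simplex_coding_gain(L)
    print(f"L = {L:3d}: {g:.2f}x  ({10 * math.log10(g):.1f} dB)")
```

Longer codes buy sensitivity (and hence range or resolution) at the cost of longer measurement sequences and more involved decoding.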

  8. Design and fabrication of multimode interference couplers based on digital micro-mirror system

    NASA Astrophysics Data System (ADS)

    Wu, Sumei; He, Xingdao; Shen, Chenbo

    2008-03-01

Multimode interference (MMI) couplers, based on the self-imaging effect (SIE), are widely used in integrated optics. Given the importance of MMI devices, in this paper we present a novel method to design and fabricate MMI couplers: a maskless lithography technique built on a smart digital micro-mirror device (DMD) system. A 1×4 MMI device is designed as an example, showing that the present method is efficient and cost-effective.
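The self-imaging effect behind MMI couplers admits a simple design estimate: in the standard treatment, the two lowest modes beat over a length L_π ≈ 4·n_r·W_e²/(3λ₀), and a symmetrically excited 1×N splitter forms its N-fold images at L = 3L_π/(4N). A sketch with hypothetical waveguide parameters (not values from the paper):

```python
def beat_length(n_r, W_e, lam0):
    """Beat length of the two lowest modes: L_pi ≈ 4 n_r W_e^2 / (3 λ0)."""
    return 4 * n_r * W_e**2 / (3 * lam0)

def mmi_1xN_length(n_r, W_e, lam0, N):
    """Shortest symmetric-excitation 1xN splitter length: L = 3 L_pi / (4 N)."""
    return 3 * beat_length(n_r, W_e, lam0) / (4 * N)

# hypothetical waveguide: n_r = 1.45, effective width 12 um, λ0 = 1.55 um
L_1x4 = mmi_1xN_length(1.45, 12e-6, 1.55e-6, 4)
print(f"1x4 MMI length ≈ {L_1x4 * 1e6:.1f} um")
```

For these assumed parameters the 1×4 section comes out at a few tens of micrometres, which is why MMI splitters are so compact on-chip.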

  9. Rough-Cut Capacity Planning in Multimodal Freight Transportation Networks

    DTIC Science & Technology

    2012-09-30

transportation system to losses in established routes or assets? That is, what is the nature and length of system capability degradation due to these...Multimodal Rough-Cut Capacity Planning is modeled using the Resource Constrained Shortest Path Problem. We demonstrate how this approach supports...of non-zero elements, and the 0 entries depict appropriately dimensioned blocks of 0 entries.
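The Resource Constrained Shortest Path Problem named in this record can be sketched as a label-setting search that tracks accumulated cost and resource use, pruning any path that exceeds the resource budget. The graph and budget below are illustrative inventions, not the report's model:

```python
import heapq

def rcsp(graph, source, target, budget):
    """Cheapest source->target path whose total resource use stays within budget.
    graph: {node: [(next_node, cost, resource), ...]}; returns min cost or None."""
    heap = [(0, source, 0)]          # (cost so far, node, resource used)
    seen = {}                        # node -> stored (cost, resource) labels
    while heap:
        cost, node, used = heapq.heappop(heap)
        if node == target:
            return cost              # first pop of target is the cheapest feasible path
        if any(c <= cost and r <= used for c, r in seen.get(node, [])):
            continue                 # a stored label dominates this one
        seen.setdefault(node, []).append((cost, used))
        for nxt, c, r in graph.get(node, []):
            if used + r <= budget:
                heapq.heappush(heap, (cost + c, nxt, used + r))
    return None

# hypothetical multimodal network: edge = (destination, monetary cost, transit time)
net = {"A": [("B", 1, 5), ("C", 4, 1)],
       "B": [("D", 1, 5)],
       "C": [("D", 4, 1)]}
print(rcsp(net, "A", "D", budget=10))  # 2: cheap route A-B-D fits the time budget
print(rcsp(net, "A", "D", budget=6))   # 8: the time cap forces the dearer A-C-D route
```

Tightening the budget models the "loss of established routes or assets" question: feasible paths disappear and the cheapest surviving routing gets more expensive.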

  10. Multimodal sensorimotor system in unicellular zoospores of a fungus.

    PubMed

    Swafford, Andrew J M; Oakley, Todd H

    2018-01-19

Complex sensory systems often underlie critical behaviors, including avoiding predators and locating prey, mates and shelter. Multisensory systems that control motor behavior even appear in unicellular eukaryotes, such as Chlamydomonas, which are important laboratory models for sensory biology. However, we know of no unicellular opisthokonts that control motor behavior using a multimodal sensory system. Therefore, existing single-celled models for multimodal sensorimotor integration are very distantly related to animals. Here, we describe a multisensory system that controls the motor function of unicellular fungal zoospores. We found that zoospores of Allomyces arbusculus exhibit both phototaxis and chemotaxis. Furthermore, we report that closely related Allomyces species respond to either the chemical or the light stimuli presented in this study, not both, and likely do not share this multisensory system. This diversity of sensory systems within Allomyces provides a rare example of a comparative framework that can be used to examine the evolution of sensory systems following the gain/loss of available sensory modalities. The tractability of Allomyces and related fungi as laboratory organisms will facilitate detailed mechanistic investigations into the genetic underpinnings of novel photosensory systems, and how multisensory systems may have functioned in early opisthokonts before multicellularity allowed for the evolution of specialized cell types. © 2018. Published by The Company of Biologists Ltd.

  11. Multimodal corridor and capacity analysis manual. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-12-31

This report presents the results of research carried out under NCHRP Project 8-31, Long-Term Availability of Multimodal Corridor Capacity. The report is presented as a manual on multimodal corridor and capacity analysis. Because transportation-system and corridor capacity for freight and passengers is critical for meeting current and future transportation demand, this manual will provide much needed assistance to a wide range of practitioners, particularly those engaged in performance analysis, capacity management, needs studies, systems planning, and corridor development planning--including major investment studies. It provides information regarding capacity analysis approaches for highways, rail, pipelines, and waterways and presents available options for enhancing corridor capacity and performance through various strategies such as new capacity development, freeing up unused capacity, or control of travel demand. Evaluation methods for these options are included.

  12. Multimode optomechanical system in the quantum regime.

    PubMed

    Nielsen, William Hvidtfelt Padkær; Tsaturyan, Yeghishe; Møller, Christoffer Bo; Polzik, Eugene S; Schliesser, Albert

    2017-01-03

We realize a simple and robust optomechanical system with a multitude of long-lived (Q > 10^7) mechanical modes in a phononic-bandgap shielded membrane resonator. An optical mode of a compact Fabry-Perot resonator detects these modes' motion with a measurement rate (96 kHz) that exceeds the mechanical decoherence rates already at moderate cryogenic temperatures (10 K). Reaching this quantum regime entails, inter alia, quantum measurement backaction exceeding thermal forces and thus strong optomechanical quantum correlations. In particular, we observe ponderomotive squeezing of the output light mediated by a multitude of mechanical resonator modes, with quantum noise suppression up to -2.4 dB (-3.6 dB if corrected for detection losses) and bandwidths ≲90 kHz. The multimode nature of the membrane and Fabry-Perot resonators will allow multimode entanglement involving electromagnetic, mechanical, and spin degrees of freedom.

  13. Two-photon quantum walk in a multimode fiber

    PubMed Central

    Defienne, Hugo; Barbieri, Marco; Walmsley, Ian A.; Smith, Brian J.; Gigan, Sylvain

    2016-01-01

    Multiphoton propagation in connected structures—a quantum walk—offers the potential of simulating complex physical systems and provides a route to universal quantum computation. Increasing the complexity of quantum photonic networks where the walk occurs is essential for many applications. We implement a quantum walk of indistinguishable photon pairs in a multimode fiber supporting 380 modes. Using wavefront shaping, we control the propagation of the two-photon state through the fiber in which all modes are coupled. Excitation of arbitrary output modes of the system is realized by controlling classical and quantum interferences. This report demonstrates a highly multimode platform for multiphoton interference experiments and provides a powerful method to program a general high-dimensional multiport optical circuit. This work paves the way for the next generation of photonic devices for quantum simulation, computing, and communication. PMID:27152325

  14. Multifocus confocal Raman microspectroscopy for fast multimode vibrational imaging of living cells.

    PubMed

    Okuno, Masanari; Hamaguchi, Hiro-o

    2010-12-15

    We have developed a multifocus confocal Raman microspectroscopic system for the fast multimode vibrational imaging of living cells. It consists of an inverted microscope equipped with a microlens array, a pinhole array, a fiber bundle, and a multichannel Raman spectrometer. Forty-eight Raman spectra from 48 foci under the microscope are simultaneously obtained by using multifocus excitation and image-compression techniques. The multifocus confocal configuration suppresses the background generated from the cover glass and the cell culturing medium so that high-contrast images are obtainable with a short accumulation time. The system enables us to obtain multimode (10 different vibrational modes) vibrational images of living cells in tens of seconds with only 1 mW laser power at one focal point. This image acquisition time is more than 10 times faster than that in conventional single-focus Raman microspectroscopy.

  15. Multimode optomechanical system in the quantum regime

    NASA Astrophysics Data System (ADS)

    Hvidtfelt Padkær Nielsen, William; Tsaturyan, Yeghishe; Møller, Christoffer Bo; Polzik, Eugene S.; Schliesser, Albert

    2017-01-01

We realize a simple and robust optomechanical system with a multitude of long-lived (Q > 10^7) mechanical modes in a phononic-bandgap shielded membrane resonator. An optical mode of a compact Fabry-Perot resonator detects these modes' motion with a measurement rate (96 kHz) that exceeds the mechanical decoherence rates already at moderate cryogenic temperatures (10 K). Reaching this quantum regime entails, inter alia, quantum measurement backaction exceeding thermal forces and thus strong optomechanical quantum correlations. In particular, we observe ponderomotive squeezing of the output light mediated by a multitude of mechanical resonator modes, with quantum noise suppression up to -2.4 dB (-3.6 dB if corrected for detection losses) and bandwidths ≲90 kHz. The multimode nature of the membrane and Fabry-Perot resonators will allow multimode entanglement involving electromagnetic, mechanical, and spin degrees of freedom.

  16. Android Based Behavioral Biometric Authentication via Multi-Modal Fusion

    DTIC Science & Technology

    2014-06-12

    such as the way he or she uses the mouse, or interacts with the Graphical User Interface (GUI) [9]. Described simply, standard biometrics is determined...as a login screen on a standard computer. Active authentication is authentication that occurs dynamically throughout interaction with the device. A...because they are higher level constructs in themselves. The Android framework was specifically used for capturing the multitouch gestures: pinch and zoom

  17. Microgravity vestibular investigations (10-IML-1)

    NASA Technical Reports Server (NTRS)

    Reschke, Millard F.

    1992-01-01

    Our perception of how we are oriented in space is dependent on the interaction of virtually every sensory system. For example, to move about in our environment we integrate inputs in our brain from visual, haptic (kinesthetic, proprioceptive, and cutaneous), auditory systems, and labyrinths. In addition to this multimodal system for orientation, our expectations about the direction and speed of our chosen movement are also important. Changes in our environment and the way we interact with the new stimuli will result in a different interpretation by the nervous system of the incoming sensory information. We will adapt to the change in appropriate ways. Because our orientation system is adaptable and complex, it is often difficult to trace a response or change in behavior to any one source of information in this synergistic orientation system. However, with a carefully designed investigation, it is possible to measure signals at the appropriate level of response (both electrophysiological and perceptual) and determine the effect that stimulus rearrangement has on our sense of orientation. The environment of orbital flight represents the stimulus arrangement that is our immediate concern. The Microgravity Vestibular Investigations (MVI) represent a group of experiments designed to investigate the effects of orbital flight and a return to Earth on our orientation system.

  18. Cortical inter-hemispheric circuits for multimodal vocal learning in songbirds.

    PubMed

    Paterson, Amy K; Bottjer, Sarah W

    2017-10-15

Vocal learning in songbirds and humans is strongly influenced by social interactions based on sensory inputs from several modalities. Songbird vocal learning is mediated by cortico-basal ganglia circuits that include the SHELL region of the lateral magnocellular nucleus of the anterior nidopallium (LMAN), but little is known concerning neural pathways that could integrate multimodal sensory information with SHELL circuitry. In addition, cortical pathways that mediate the precise coordination between hemispheres required for song production have been little studied. In order to identify candidate mechanisms for multimodal sensory integration and bilateral coordination for vocal learning in zebra finches, we investigated the anatomical organization of two regions that receive input from SHELL: the dorsal caudolateral nidopallium (dNCL-SHELL) and a region within the ventral arcopallium (Av). Anterograde and retrograde tracing experiments revealed a topographically organized inter-hemispheric circuit: SHELL and dNCL-SHELL, as well as adjacent nidopallial areas, send axonal projections to ipsilateral Av; Av in turn projects to contralateral SHELL, dNCL-SHELL, and regions of nidopallium adjacent to each. Av on each side also projects directly to contralateral Av. dNCL-SHELL and Av each integrate inputs from ipsilateral SHELL with inputs from sensory regions in surrounding nidopallium, suggesting that they function to integrate multimodal sensory information with song-related responses within LMAN-SHELL during vocal learning. Av projections share this integrated information from the ipsilateral hemisphere with contralateral sensory and song-learning regions. Our results suggest that the inter-hemispheric pathway through Av may function to integrate multimodal sensory feedback with vocal-learning circuitry and coordinate bilateral vocal behavior. © 2017 Wiley Periodicals, Inc.

  19. A framework for biomedical figure segmentation towards image-based document retrieval

    PubMed Central

    2013-01-01

The figures included in many of the biomedical publications play an important role in understanding the biological experiments and facts described within. Recent studies have shown that it is possible to integrate the information that is extracted from figures in classical document classification and retrieval tasks in order to improve their accuracy. One important observation about the figures included in biomedical publications is that they are often composed of multiple subfigures or panels, each describing different methodologies or results. The use of these multimodal figures is a common practice in bioscience, as experimental results are graphically validated via multiple methodologies or procedures. Thus, for a better use of multimodal figures in document classification or retrieval tasks, as well as for providing the evidence source for derived assertions, it is important to automatically segment multimodal figures into subfigures and panels. This is a challenging task, however, as different panels can contain similar objects (i.e., barcharts and linecharts) with multiple layouts. Also, certain types of biomedical figures are text-heavy (e.g., DNA sequences and protein sequences images) and they differ from traditional images. As a result, classical image segmentation techniques based on low-level image features, such as edges or color, are not directly applicable to robustly partition multimodal figures into single modal panels. In this paper, we describe a robust solution for automatically identifying and segmenting unimodal panels from a multimodal figure. Our framework starts by robustly harvesting figure-caption pairs from biomedical articles. We base our approach on the observation that the document layout can be used to identify encoded figures and figure boundaries within PDF files. Taking into consideration the document layout allows us to correctly extract figures from the PDF document and associate their corresponding caption. We combine pixel-level representations of the extracted images with information gathered from their corresponding captions to estimate the number of panels in the figure. Thus, our approach simultaneously identifies the number of panels and the layout of figures. In order to evaluate the approach described here, we applied our system on documents containing protein-protein interactions (PPIs) and compared the results against a gold standard that was annotated by biologists. Experimental results showed that our automatic figure segmentation approach surpasses pure caption-based and image-based approaches, achieving a 96.64% accuracy. To allow for efficient retrieval of information, as well as to provide the basis for integration into document classification and retrieval systems among others, we further developed a web-based interface that lets users easily retrieve panels containing the terms specified in the user queries. PMID:24565394

  20. Cross-cultural evidence for multimodal motherese: Asian Indian mothers' adaptive use of synchronous words and gestures.

    PubMed

    Gogate, Lakshmi; Maganti, Madhavilatha; Bahrick, Lorraine E

    2015-01-01

    In a quasi-experimental study, 24 Asian Indian mothers were asked to teach novel (target) names for two objects and two actions to their children of three different levels of lexical mapping development: prelexical (5-8 months), early lexical (9-17 months), and advanced lexical (20-43 months). Target naming (n=1482) and non-target naming (other, n=2411) were coded for synchronous spoken words and object motion (multimodal motherese) and other naming styles. Indian mothers abundantly used multimodal motherese with target words to highlight novel word-referent relations, paralleling earlier findings from American mothers. They used it with target words more often for prelexical infants than for advanced lexical children and to name target actions later in children's development. Unlike American mothers, Indian mothers also abundantly used multimodal motherese to name target objects later in children's development. Finally, monolingual mothers who spoke a verb-dominant Indian language used multimodal motherese more often than bilingual mothers who also spoke noun-dominant English to their children. The findings suggest that within a dynamic and reciprocal mother-infant communication system, multimodal motherese adapts to unify novel words and referents across cultures. It adapts to children's level of lexical development and to ambient language-specific lexical dominance hierarchies. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Multimodal pain stimulation of the gastrointestinal tract

    PubMed Central

    Drewes, Asbjørn Mohr; Gregersen, Hans

    2006-01-01

Understanding and characterization of pain and other sensory symptoms are among the most important issues in the diagnosis and assessment of patients with gastrointestinal disorders. Methods to evoke and assess experimental pain have recently developed into a new field, with the possibility of multimodal stimulation (e.g., electrical, mechanical, thermal and chemical stimulation) of different nerves and pain pathways in the human gut. Such methods mimic to a high degree the pain experienced in the clinic. Multimodal pain methods have increased our basic understanding of different peripheral receptors in the gut in health and disease. Together with advanced muscle analysis, the methods have increased our understanding of receptors sensitive to mechanical, chemical and temperature stimuli in diseases such as systemic sclerosis and diabetes. The methods can also be used to unravel central pain mechanisms, such as those involved in allodynia, hyperalgesia and referred pain. Abnormalities in central pain mechanisms are often seen in patients with chronic gut pain, and hence methods relying on multimodal pain stimulation may help to explain the symptoms in these patients. Sex differences have been observed in several diseases of the gut, and differences in central pain processing between males and females have been hypothesized using multimodal pain stimulations. Finally, multimodal methods have recently been used to gain more insight into the effect of drugs against pain in the GI tract. Hence, multimodal methods undoubtedly represent a major step forward in the future characterization and treatment of patients with various diseases of the gut. PMID:16688791

  2. Force Sensitive Handles and Capacitive Touch Sensor for Driving a Flexible Haptic-Based Immersive System

    PubMed Central

    Covarrubias, Mario; Bordegoni, Monica; Cugini, Umberto

    2013-01-01

In this article, we present an approach that uses two force-sensitive handles (FSH) and a flexible capacitive touch sensor (FCTS) to drive a haptic-based immersive system. The immersive system has been developed as part of a multimodal interface for product design. The haptic interface consists of a strip that can be used by product designers to evaluate the quality of a 3D virtual shape by using touch, vision and hearing and, also, to interactively change the shape of the virtual object. Specifically, the user interacts with the FSH to move the virtual object and to appropriately position the haptic interface for retrieving the six degrees of freedom required for both manipulation and modification modalities. The FCTS allows the system to track the movement and position of the user's fingers on the strip, which is used for rendering visual and sound feedback. Two evaluation experiments are described, which involve both the evaluation and the modification of a 3D shape. Results show that the use of the haptic strip for the evaluation of aesthetic shapes is effective and supports product designers in the appreciation of the aesthetic qualities of the shape. PMID:24113680

  3. Force sensitive handles and capacitive touch sensor for driving a flexible haptic-based immersive system.

    PubMed

    Covarrubias, Mario; Bordegoni, Monica; Cugini, Umberto

    2013-10-09

In this article, we present an approach that uses two force-sensitive handles (FSH) and a flexible capacitive touch sensor (FCTS) to drive a haptic-based immersive system. The immersive system has been developed as part of a multimodal interface for product design. The haptic interface consists of a strip that can be used by product designers to evaluate the quality of a 3D virtual shape by using touch, vision and hearing and, also, to interactively change the shape of the virtual object. Specifically, the user interacts with the FSH to move the virtual object and to appropriately position the haptic interface for retrieving the six degrees of freedom required for both manipulation and modification modalities. The FCTS allows the system to track the movement and position of the user's fingers on the strip, which is used for rendering visual and sound feedback. Two evaluation experiments are described, which involve both the evaluation and the modification of a 3D shape. Results show that the use of the haptic strip for the evaluation of aesthetic shapes is effective and supports product designers in the appreciation of the aesthetic qualities of the shape.

  4. Collaborative interactive visualization: exploratory concept

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Lavigne, Valérie; Drolet, Frédéric

    2015-05-01

Dealing with an ever-increasing amount of data is a challenge that military intelligence analysts, or teams of analysts, face day to day. Increased individual and collective comprehension comes through collaboration between people: the better the collaboration, the better the comprehension. Nowadays, various technologies support and enhance collaboration by allowing people to connect and collaborate in settings as varied as mobile devices, networked computers, display walls, and tabletop surfaces, to name just a few. A powerful collaboration system includes traditional and multimodal visualization features to achieve effective human communication. Interactive visualization strengthens collaboration because this approach is conducive to incrementally building a mental assessment of the data's meaning. The purpose of this paper is to present an overview of the envisioned collaboration architecture and the interactive visualization concepts underlying the Sensemaking Support System prototype developed to support analysts in the context of the Joint Intelligence Collection and Analysis Capability project at DRDC Valcartier. It presents the current version of the architecture, discusses future capabilities to help analysts in the accomplishment of their tasks, and finally recommends collaboration and visualization technologies that allow going a step further, both as individuals and as a team.

  5. Developing a multimodal biometric authentication system using soft computing methods.

    PubMed

    Malcangi, Mario

    2015-01-01

Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision.
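The chapter's actual pipeline combines DSP feature matching with an ANN and a fuzzy logic engine; as a much simpler stand-in, score-level fusion can be sketched as a weighted combination of the two normalized match scores. The weights and threshold below are arbitrary illustrative values, not parameters from the chapter:

```python
def fuse_scores(voice, fingerprint, w_voice=0.4, w_finger=0.6):
    """Weighted score-level fusion of two normalized match scores in [0, 1]."""
    return w_voice * voice + w_finger * fingerprint

def authenticate(voice, fingerprint, threshold=0.7):
    """Accept only if the fused evidence clears the decision threshold."""
    return fuse_scores(voice, fingerprint) >= threshold

print(authenticate(0.9, 0.8))  # True: both modalities agree strongly
print(authenticate(0.9, 0.3))  # False: a weak fingerprint drags the fused score down
```

The point of fusion, whether by fixed weights, an ANN, or fuzzy rules, is that a weak reading in one modality can be compensated or exposed by the other.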

  6. Multimodal Literacies: Imagining Lives through Korean Dramas

    ERIC Educational Resources Information Center

    Kim, Grace MyHyun; Omerbašic, Delila

    2017-01-01

    Global networks of information and interactions have created new conditions for access to myriad literacies, languages, and communities. Engagements with transnational texts and communities can support the imagination of lives different from one's local context. This article presents data from two qualitative studies of adolescent literacy…

  7. Emerging Affordances in Telecollaborative Multimodal Interactions

    ERIC Educational Resources Information Center

    Dey-Plissonneau, Aparajita; Blin, Françoise

    2016-01-01

    Drawing on Gibson's (1977) theory of affordances, Computer-Assisted Language Learning (CALL) affordances are a combination of technological, social, educational, and linguistic affordances (Blin, 2016). This paper reports on a preliminary study that sought to identify the emergence of affordances during an online video conferencing session between…

  8. Interculturalities: Reframing Identities in Intercultural Communication

    ERIC Educational Resources Information Center

    Nair-Venugopal, Shanta

    2009-01-01

    This paper attempts to reframe identities as "interculturalities" in the multimodal ways in which language is used for identity construction, specifically as responses to questionnaires, articulations within limited narratives, on-line interactions and in community ways of speaking a localised variety of English. Relying on a framework…

  9. Nucleophosmin integrates within the nucleolus via multi-modal interactions with proteins displaying R-rich linear motifs and rRNA.

    PubMed

    Mitrea, Diana M; Cika, Jaclyn A; Guy, Clifford S; Ban, David; Banerjee, Priya R; Stanley, Christopher B; Nourse, Amanda; Deniz, Ashok A; Kriwacki, Richard W

    2016-02-02

    The nucleolus is a membrane-less organelle formed through liquid-liquid phase separation of its components from the surrounding nucleoplasm. Here, we show that nucleophosmin (NPM1) integrates within the nucleolus via a multi-modal mechanism involving multivalent interactions with proteins containing arginine-rich linear motifs (R-motifs) and ribosomal RNA (rRNA). Importantly, these R-motifs are found in canonical nucleolar localization signals. Based on a novel combination of biophysical approaches, we propose a model for the molecular organization within liquid-like droplets formed by the N-terminal domain of NPM1 and R-motif peptides, thus providing insights into the structural organization of the nucleolus. We identify multivalency of acidic tracts and folded nucleic acid binding domains, mediated by N-terminal domain oligomerization, as structural features required for phase separation of NPM1 with other nucleolar components in vitro and for localization within mammalian nucleoli. We propose that one mechanism of nucleolar localization involves phase separation of proteins within the nucleolus.

  10. A multi-mode manipulator display system for controlling remote robotic systems

    NASA Technical Reports Server (NTRS)

    Massimino, Michael J.; Meschler, Michael F.; Rodriguez, Alberto A.

    1994-01-01

The objective and contribution of the research presented in this paper are to provide a Multi-Mode Manipulator Display System (MMDS) to assist a human operator with the control of remote manipulator systems. Such systems include space-based manipulators such as the space shuttle remote manipulator system (SRMS) and future ground-controlled teleoperated and telescience space systems. The MMDS contains a number of display modes and submodes which display position-control cues and position data in graphical formats, based primarily on manipulator position and joint angle data. The MMDS is therefore not dependent on visual information for input and can assist the operator especially when visual feedback is inadequate. This paper provides descriptions of the new modes and experiment results to date.

  11. Multimodal swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography at 400 kHz

    NASA Astrophysics Data System (ADS)

    El-Haddad, Mohamed T.; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2017-02-01

    Multimodal imaging systems that combine scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) have demonstrated the utility of concurrent en face and volumetric imaging for aiming, eye tracking, bulk motion compensation, mosaicking, and contrast enhancement. However, this additional functionality trades off with increased system complexity and cost because both SLO and OCT generally require dedicated light sources, galvanometer scanners, relay and imaging optics, detectors, and control and digitization electronics. We previously demonstrated multimodal ophthalmic imaging using swept-source spectrally encoded SLO and OCT (SS-SESLO-OCT). Here, we present system enhancements and a new optical design that increase our SS-SESLO-OCT data throughput by >7x and field-of-view (FOV) by >4x. A 200 kHz 1060 nm Axsun swept-source was optically buffered to 400 kHz sweep-rate, and SESLO and OCT were simultaneously digitized on dual input channels of a 4 GS/s digitizer at 1.2 GS/s per channel using a custom k-clock. We show in vivo human imaging of the anterior segment out to the limbus and retinal fundus over a >40° FOV. In addition, nine overlapping volumetric SS-SESLO-OCT volumes were acquired under video-rate SESLO preview and guidance. In post-processing, all nine SESLO images and en face projections of the corresponding OCT volumes were mosaicked to show widefield multimodal fundus imaging with a >80° FOV. Concurrent multimodal SS-SESLO-OCT may have applications in clinical diagnostic imaging by enabling aiming, image registration, and multi-field mosaicking and benefit intraoperative imaging by allowing for real-time surgical feedback, instrument tracking, and overlays of computationally extracted image-based surrogate biomarkers of disease.

  12. Holographic storage of biphoton entanglement.

    PubMed

    Dai, Han-Ning; Zhang, Han; Yang, Sheng-Jun; Zhao, Tian-Ming; Rui, Jun; Deng, You-Jin; Li, Li; Liu, Nai-Le; Chen, Shuai; Bao, Xiao-Hui; Jin, Xian-Min; Zhao, Bo; Pan, Jian-Wei

    2012-05-25

    Coherent and reversible storage of multiphoton entanglement with a multimode quantum memory is essential for scalable all-optical quantum information processing. Although a single photon has been successfully stored in different quantum systems, storage of multiphoton entanglement remains challenging because of the critical requirement for coherent control of the photonic entanglement source, multimode quantum memory, and quantum interface between them. Here we demonstrate a coherent and reversible storage of biphoton Bell-type entanglement with a holographic multimode atomic-ensemble-based quantum memory. The retrieved biphoton entanglement violates the Bell inequality for 1 μs storage time and a memory-process fidelity of 98% is demonstrated by quantum state tomography.

  13. A multimodal imaging platform with integrated simultaneous photoacoustic microscopy, optical coherence tomography, optical Doppler tomography and fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Dadkhah, Arash; Zhou, Jun; Yeasmin, Nusrat; Jiao, Shuliang

    2018-02-01

    Various optical imaging modalities with different optical contrast mechanisms have been developed over the past years. Although most of these imaging techniques are used in many biomedical applications and research, integrating them allows researchers to reach the full potential of these technologies. Nevertheless, combining different imaging techniques is always challenging because of differences in the optical and hardware requirements of the individual systems. Here, we developed a multimodal optical imaging system capable of providing comprehensive structural, functional, and molecular information on living tissue at the micrometer scale. The system integrates photoacoustic microscopy (PAM), optical coherence tomography (OCT), optical Doppler tomography (ODT), and fluorescence microscopy in one platform. Optical-resolution PAM (OR-PAM) provides absorption-based imaging of biological tissues. Spectral-domain OCT provides structural information based on the scattering properties of the biological sample without the need for exogenous contrast agents. In addition, ODT, a functional extension of OCT, measures and visualizes blood flow based on the Doppler effect. Fluorescence microscopy reveals molecular information of biological tissue using autofluorescence or exogenous fluorophores. In vivo as well as ex vivo imaging studies demonstrated the capability of our multimodal imaging system to provide comprehensive microscopic information on biological tissues. Integrating all the aforementioned imaging modalities for simultaneous multimodal imaging has promising potential for preclinical research and clinical practice in the near future.

  14. Novel multifunctional theranostic liposome drug delivery system: construction, characterization, and multimodality MR, near-infrared fluorescent, and nuclear imaging.

    PubMed

    Li, Shihong; Goins, Beth; Zhang, Lujun; Bao, Ande

    2012-06-20

    Liposomes are effective lipid nanoparticle drug delivery systems, which can also be functionalized with noninvasive multimodality imaging agents, with each modality providing distinct information and having synergistic advantages in diagnosis, monitoring of disease treatment, and evaluation of liposomal drug pharmacokinetics. We designed and constructed a multifunctional theranostic liposomal drug delivery system, which integrated multimodality magnetic resonance (MR), near-infrared (NIR) fluorescent, and nuclear imaging of liposomal drug delivery, and therapy monitoring and prediction. The premanufactured liposomes were composed of DSPC/cholesterol/Gd-DOTA-DSPE/DOTA-DSPE at a molar ratio of 39:35:25:1, with an ammonium sulfate/pH gradient. A lipidized NIR fluorescent tracer, IRDye-DSPE, was effectively postinserted into the premanufactured liposomes. Doxorubicin could be effectively postloaded into the multifunctional liposomes. The multifunctional doxorubicin-liposomes could also be stably radiolabeled with (99m)Tc or (64)Cu for single-photon emission computed tomography (SPECT) or positron emission tomography (PET) imaging, respectively. MR images displayed the high-resolution intratumoral microdistribution of the liposomes in squamous cell carcinoma of head and neck (SCCHN) tumor xenografts in nude rats after intratumoral injection. NIR fluorescent, SPECT, and PET images also clearly showed the high intratumoral retention and distribution of the multifunctional liposomes. This multifunctional drug-carrying liposome system is promising for disease theranostics, allowing noninvasive multimodality NIR fluorescent, MR, SPECT, and PET imaging of its in vivo behavior and capitalizing on the inherent advantages of each modality.

  15. Object recognition through a multi-mode fiber

    NASA Astrophysics Data System (ADS)

    Takagi, Ryosuke; Horisaki, Ryoichi; Tanida, Jun

    2017-04-01

    We present a method of recognizing an object through a multi-mode fiber. A number of speckle patterns transmitted through a multi-mode fiber are provided to a classifier based on machine learning. We experimentally demonstrated binary classification of face and non-face targets based on the method. The measurement process of the experimental setup was random and nonlinear, because a multi-mode fiber is a typical strongly scattering medium and no reference light was used in our setup. Comparisons between three supervised learning methods, support vector machine, adaptive boosting, and neural network, are also provided. All of these learning methods achieved high accuracy rates of about 90% for the classification. The approach presented here can realize a compact and smart optical sensor, and is practically useful for medical applications such as endoscopy. Our study also indicates that artificial intelligence, which has progressed rapidly, is promising for reducing optical and computational costs in optical sensing systems.
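    The pipeline this record describes (speckle patterns in, class label out) can be sketched with a linear support vector machine, one of the three learners the authors compare. The speckle data below are synthetic stand-ins with a deliberately different intensity scale per class, not the paper's measurements; all sizes and parameters are illustrative assumptions.

    ```python
    # Hedged sketch: binary classification of flattened speckle intensity
    # patterns with a linear SVM. Synthetic data, illustrative parameters.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_per_class, n_pixels = 200, 32 * 32   # 32x32 speckle images, flattened

    # Speckle intensity is roughly exponentially distributed; here the two
    # object classes are given slightly different mean intensities as a
    # crude proxy for the fiber's class-dependent transmission.
    class0 = rng.exponential(scale=1.0, size=(n_per_class, n_pixels))
    class1 = rng.exponential(scale=1.2, size=(n_per_class, n_pixels))

    X = np.vstack([class0, class1])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    clf.fit(X_train, y_train)
    print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
    ```

    Swapping `SVC` for `AdaBoostClassifier` or `MLPClassifier` reproduces the three-way comparison the abstract mentions, under the same interface.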

  16. Preventing the development of chronic pain after orthopaedic surgery with preventive multimodal analgesic techniques.

    PubMed

    Reuben, Scott S; Buvanendran, Asokumar

    2007-06-01

    The prevalences of complex regional pain syndrome, phantom limb pain, chronic donor-site pain, and persistent pain following total joint arthroplasty are alarmingly high. Central nervous system plasticity that occurs in response to tissue injury may contribute to the development of persistent postoperative pain. Many researchers have focused on methods to prevent central neuroplastic changes from occurring through the utilization of preemptive or preventive multimodal analgesic techniques. Multimodal analgesia allows a reduction in the doses of individual drugs for postoperative pain and thus a lower prevalence of opioid-related adverse events. The rationale for this strategy is the achievement of sufficient analgesia due to the additive effects of, or the synergistic effects between, different analgesics. Effective multimodal analgesic techniques include the use of nonsteroidal anti-inflammatory drugs, local anesthetics, alpha-2 agonists, ketamine, alpha(2)-delta ligands, and opioids.

  17. Modelling multimodal expression of emotion in a virtual agent.

    PubMed

    Pelachaud, Catherine

    2009-12-12

    Over the past few years we have been developing an expressive embodied conversational agent system. In particular, we have developed a model of multimodal behaviours that includes dynamism and complex facial expressions. The first feature refers to the qualitative execution of behaviours. Our model is based on perceptual studies and encompasses several parameters that modulate multimodal behaviours. The second feature, the model of complex expressions, follows a componential approach where a new expression is obtained by combining facial areas of other expressions. Lately we have been working on adding temporal dynamism to expressions. So far they have been designed statically, typically at their apex. Only full-blown expressions could be modelled. To overcome this limitation, we have defined a representation scheme that describes the temporal evolution of the expression of an emotion. It is no longer represented by a static definition but by a temporally ordered sequence of multimodal signals.

  18. Reflection Effects in Multimode Fiber Systems Utilizing Laser Transmitters

    NASA Technical Reports Server (NTRS)

    Bates, Harry E.

    1991-01-01

    A number of optical communication lines are now in use at NASA-Kennedy for the transmission of voice, computer data, and video signals. Currently, all of these channels use a single carrier wavelength centered near 1300 or 1550 nm. Engineering tests in the past have given indications of the growth of systematic and random noise in the RF spectrum of a fiber network as the number of connector pairs is increased. This noise seems to occur when a laser transmitter is used instead of an LED. It has been suggested that the noise is caused by back reflections created at connector-fiber interfaces. Experiments were performed to explore the effect of reflection on the transmitting laser under conditions of reflective feedback. This effort included computer integration of some of the instrumentation in the fiber optic lab using the LabVIEW software recently acquired by the lab group. The main goal was to interface the Anritsu optical and RF spectrum analyzers to the Macintosh II computer so that laser spectra and network RF spectra could be simultaneously and rapidly acquired in a form convenient for analysis. Both single-mode and multimode fiber are installed at Kennedy. Since most of it is multimode, this effort concentrated on multimode systems.

  19. Reflection effects in multimode fiber systems utilizing laser transmitters

    NASA Astrophysics Data System (ADS)

    Bates, Harry E.

    1991-11-01

    A number of optical communication lines are now in use at NASA-Kennedy for the transmission of voice, computer data, and video signals. Currently, all of these channels use a single carrier wavelength centered near 1300 or 1550 nm. Engineering tests in the past have given indications of the growth of systematic and random noise in the RF spectrum of a fiber network as the number of connector pairs is increased. This noise seems to occur when a laser transmitter is used instead of an LED. It has been suggested that the noise is caused by back reflections created at connector-fiber interfaces. Experiments were performed to explore the effect of reflection on the transmitting laser under conditions of reflective feedback. This effort included computer integration of some of the instrumentation in the fiber optic lab using the LabVIEW software recently acquired by the lab group. The main goal was to interface the Anritsu optical and RF spectrum analyzers to the Macintosh II computer so that laser spectra and network RF spectra could be simultaneously and rapidly acquired in a form convenient for analysis. Both single-mode and multimode fiber are installed at Kennedy. Since most of it is multimode, this effort concentrated on multimode systems.

  20. Chemical supply chain modeling for analysis of homeland security events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ehlen, Mark A.; Sun, Amy C.; Pepple, Mark A.

    The potential impacts of man-made and natural disasters on chemical plants, complexes, and supply chains are of great importance to homeland security. To be able to estimate these impacts, we developed an agent-based chemical supply chain model that includes chemical plants with enterprise operations such as purchasing, production scheduling, and inventories; merchant chemical markets; and multi-modal chemical shipments. Large-scale simulations of chemical-plant activities and supply chain interactions, running on desktop computers, are used to estimate the scope and duration of disruptive-event impacts, and overall system resilience, based on the extent to which individual chemical plants can adjust their internal operations (e.g., production mixes and levels) versus their external interactions (market sales and purchases, and transportation routes and modes). Finally, to illustrate how the model estimates the impacts of a hurricane disruption, a simple example model centered on 1,4-butanediol is presented.

  1. Atoms and molecules in cavities, from weak to strong coupling in quantum-electrodynamics (QED) chemistry

    PubMed Central

    Flick, Johannes; Ruggenthaler, Michael; Appel, Heiko

    2017-01-01

    In this work, we provide an overview of how well-established concepts in the fields of quantum chemistry and material sciences have to be adapted when the quantum nature of light becomes important in correlated matter–photon problems. We analyze model systems in optical cavities, where the matter–photon interaction is considered from the weak- to the strong-coupling limit and for individual photon modes as well as for the multimode case. We identify fundamental changes in Born–Oppenheimer surfaces, spectroscopic quantities, conical intersections, and efficiency for quantum control. We conclude by applying our recently developed quantum-electrodynamical density-functional theory to spontaneous emission and show how a straightforward approximation accurately describes the correlated electron–photon dynamics. This work paves the way to describe matter–photon interactions from first principles and addresses the emergence of new states of matter in chemistry and material science. PMID:28275094

  2. Chemical supply chain modeling for analysis of homeland security events

    DOE PAGES

    Ehlen, Mark A.; Sun, Amy C.; Pepple, Mark A.; ...

    2013-09-06

    The potential impacts of man-made and natural disasters on chemical plants, complexes, and supply chains are of great importance to homeland security. To be able to estimate these impacts, we developed an agent-based chemical supply chain model that includes chemical plants with enterprise operations such as purchasing, production scheduling, and inventories; merchant chemical markets; and multi-modal chemical shipments. Large-scale simulations of chemical-plant activities and supply chain interactions, running on desktop computers, are used to estimate the scope and duration of disruptive-event impacts, and overall system resilience, based on the extent to which individual chemical plants can adjust their internal operations (e.g., production mixes and levels) versus their external interactions (market sales and purchases, and transportation routes and modes). Finally, to illustrate how the model estimates the impacts of a hurricane disruption, a simple example model centered on 1,4-butanediol is presented.
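    The core mechanism this record describes, plants trading off internal production against external market purchases during a disruption, can be sketched as a toy agent-based step model. The plant name, capacities, and disruption window below are invented for illustration and are not parameters of the actual model.

    ```python
    # Minimal sketch of the agent-based idea: a plant holds inventory,
    # produces each day, and covers shortfalls on a merchant market.
    # All values are illustrative assumptions, not model parameters.
    from dataclasses import dataclass

    @dataclass
    class Plant:
        name: str
        capacity: float              # units producible per day
        demand: float                # units consumed downstream per day
        inventory: float = 10.0
        market_purchases: float = 0.0

        def step(self, capacity_factor: float = 1.0) -> None:
            self.inventory += self.capacity * capacity_factor - self.demand
            if self.inventory < 0:   # shortfall -> buy on the merchant market
                self.market_purchases += -self.inventory
                self.inventory = 0.0

    def simulate(days: int, disruption: range) -> Plant:
        plant = Plant(name="BDO-1", capacity=5.0, demand=5.0)
        for day in range(days):
            # e.g. a hurricane halves capacity during the disruption window
            factor = 0.5 if day in disruption else 1.0
            plant.step(factor)
        return plant

    baseline = simulate(30, disruption=range(0))
    disrupted = simulate(30, disruption=range(10, 20))
    print(f"extra market purchases due to disruption: "
          f"{disrupted.market_purchases - baseline.market_purchases:.1f} units")
    ```

    The inventory buffer absorbs the first days of the outage before market purchases begin, which is the kind of internal-versus-external adjustment the abstract's resilience estimate is based on.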

  3. Nanoparticles as multimodal photon transducers of ionizing radiation

    NASA Astrophysics Data System (ADS)

    Pratt, Edwin C.; Shaffer, Travis M.; Zhang, Qize; Drain, Charles Michael; Grimm, Jan

    2018-05-01

    In biomedical imaging, nanoparticles combined with radionuclides that generate Cerenkov luminescence are used in diagnostic imaging, photon-induced therapies and as activatable probes. In these applications, the nanoparticle is often viewed as a carrier inert to ionizing radiation from the radionuclide. However, certain phenomena such as enhanced nanoparticle luminescence and generation of reactive oxygen species cannot be completely explained by Cerenkov luminescence interactions with nanoparticles. Herein, we report methods to examine the mechanisms of nanoparticle excitation by radionuclides, including interactions with Cerenkov luminescence, β particles and γ radiation. We demonstrate that β-scintillation contributes appreciably to excitation and reactivity in certain nanoparticle systems, and that excitation by radionuclides of nanoparticles composed of large atomic number atoms generates X-rays, enabling multiplexed imaging through single photon emission computed tomography. These findings demonstrate practical optical imaging and therapy using radionuclides with emission energies below the Cerenkov threshold, thereby expanding the list of applicable radionuclides.

  4. Optical Dark Rogue Wave

    NASA Astrophysics Data System (ADS)

    Frisquet, Benoit; Kibler, Bertrand; Morin, Philippe; Baronio, Fabio; Conforti, Matteo; Millot, Guy; Wabnitz, Stefan

    2016-02-01

    Photonics makes it possible to develop simple lab experiments that mimic water rogue wave generation phenomena, as well as relativistic gravitational effects such as event horizons, gravitational lensing and Hawking radiation. The basis for analog gravity experiments is light propagation through an effective moving medium obtained via the nonlinear response of the material. So far, analogue gravity kinematics has been reproduced in scalar optical wave propagation test models. Multimode and spatiotemporal nonlinear interactions exhibit a rich spectrum of excitations, which may substantially expand the range of rogue wave phenomena and lead to novel space-time analogies, for example with multi-particle interactions. By injecting two colliding and modulated pumps with orthogonal states of polarization into a randomly birefringent telecommunication optical fiber, we provide the first experimental demonstration of an optical dark rogue wave. We also introduce the concept of multi-component analog gravity, whereby localized spatiotemporal horizons are associated with the dark rogue wave solution of the two-component nonlinear Schrödinger system.
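    The two-component nonlinear Schrödinger system invoked in this record is, in a standard (Manakov-type) normalization, the pair of coupled equations below. Signs and scalings vary by convention, so this is a generic textbook form rather than the specific normalization used in the paper.

    ```latex
    i\,\frac{\partial u_1}{\partial z}
      + \frac{1}{2}\frac{\partial^2 u_1}{\partial t^2}
      + \left(|u_1|^2 + |u_2|^2\right) u_1 = 0,
    \qquad
    i\,\frac{\partial u_2}{\partial z}
      + \frac{1}{2}\frac{\partial^2 u_2}{\partial t^2}
      + \left(|u_1|^2 + |u_2|^2\right) u_2 = 0,
    ```

    where $u_1$ and $u_2$ are the envelopes of the two orthogonally polarized pumps, $z$ is propagation distance, and $t$ is retarded time; the dark rogue wave is a localized solution of this coupled system.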

  5. Optical Dark Rogue Wave.

    PubMed

    Frisquet, Benoit; Kibler, Bertrand; Morin, Philippe; Baronio, Fabio; Conforti, Matteo; Millot, Guy; Wabnitz, Stefan

    2016-02-11

    Photonics makes it possible to develop simple lab experiments that mimic water rogue wave generation phenomena, as well as relativistic gravitational effects such as event horizons, gravitational lensing and Hawking radiation. The basis for analog gravity experiments is light propagation through an effective moving medium obtained via the nonlinear response of the material. So far, analogue gravity kinematics has been reproduced in scalar optical wave propagation test models. Multimode and spatiotemporal nonlinear interactions exhibit a rich spectrum of excitations, which may substantially expand the range of rogue wave phenomena and lead to novel space-time analogies, for example with multi-particle interactions. By injecting two colliding and modulated pumps with orthogonal states of polarization into a randomly birefringent telecommunication optical fiber, we provide the first experimental demonstration of an optical dark rogue wave. We also introduce the concept of multi-component analog gravity, whereby localized spatiotemporal horizons are associated with the dark rogue wave solution of the two-component nonlinear Schrödinger system.

  6. Optical Dark Rogue Wave

    PubMed Central

    Frisquet, Benoit; Kibler, Bertrand; Morin, Philippe; Baronio, Fabio; Conforti, Matteo; Millot, Guy; Wabnitz, Stefan

    2016-01-01

    Photonics makes it possible to develop simple lab experiments that mimic water rogue wave generation phenomena, as well as relativistic gravitational effects such as event horizons, gravitational lensing and Hawking radiation. The basis for analog gravity experiments is light propagation through an effective moving medium obtained via the nonlinear response of the material. So far, analogue gravity kinematics has been reproduced in scalar optical wave propagation test models. Multimode and spatiotemporal nonlinear interactions exhibit a rich spectrum of excitations, which may substantially expand the range of rogue wave phenomena and lead to novel space-time analogies, for example with multi-particle interactions. By injecting two colliding and modulated pumps with orthogonal states of polarization into a randomly birefringent telecommunication optical fiber, we provide the first experimental demonstration of an optical dark rogue wave. We also introduce the concept of multi-component analog gravity, whereby localized spatiotemporal horizons are associated with the dark rogue wave solution of the two-component nonlinear Schrödinger system. PMID:26864099

  7. A multimodal interface device for online board games designed for sight-impaired people.

    PubMed

    Caporusso, Nicholas; Mkrtchyan, Lusine; Badia, Leonardo

    2010-03-01

    Online games between remote opponents playing over computer networks are becoming a common activity of everyday life. However, computer interfaces for board games are usually based on the visual channel. For example, they require players to check their moves on a video display and to interact using pointing devices such as a mouse. Hence, they are not suitable for visually impaired people. The present paper discusses a multipurpose system that allows blind and deafblind people in particular to play chess or other board games over a network, thereby reducing their disability barrier. We describe and benchmark a prototype of a special interactive haptic device for online gaming that provides dual tactile feedback. The novel interface of the proposed device is able to provide not only a better game experience for everyone but also an improved quality of life for sight-impaired people.

  8. Multimodal flexible cystoscopy for creating co-registered panoramas of the bladder urothelium

    NASA Astrophysics Data System (ADS)

    Seibel, Eric J.; Soper, Timothy D.; Burkhardt, Matthew R.; Porter, Michael P.; Yoon, W. Jong

    2012-02-01

    Bladder cancer is the most expensive cancer to treat because of its high rate of recurrence. Though white-light cystoscopy is the gold standard for bladder cancer surveillance, the advent of fluorescence biomarkers provides an opportunity to improve sensitivity for early detection and to reduce recurrence through more accurate excision. Ideally, fluorescence information could be combined with standard reflectance images to provide multimodal views of the bladder wall. The scanning fiber endoscope (SFE), 1.2 mm in diameter, is able to acquire wide-field multimodal video from a bladder phantom with fluorescent cancer "hot-spots". The SFE generates images by scanning red, green, and blue (RGB) laser light and detects the backscatter signal for reflectance video of 500-line resolution at 30 frames per second. We imaged a bladder phantom with painted vessels and mimicked fluorescent lesions by applying green fluorescent microspheres to the surface. By eliminating the green laser illumination, simultaneous reflectance and fluorescence images can be acquired at the same field of view, resolution, and frame rate. Moreover, the multimodal SFE is combined with a robotic steering mechanism and image-stitching software as part of a fully automated bladder surveillance system. Using this system, the SFE can be reliably articulated over the entire 360° bladder surface. Acquired images can then be stitched into a multimodal 3D panorama of the bladder using software developed in our laboratory. In each panorama, the fluorescence images are exactly co-registered with the RGB reflectance.

  9. Using multimedia information and communication technology (ICT) to provide added value to reminiscence therapy for people with dementia: Lessons learned from three field studies.

    PubMed

    Bejan, Alexander; Gündogdu, Ramazan; Butz, Katherina; Müller, Nadine; Kunze, Christophe; König, Peter

    2018-01-01

    In the care of people with dementia (PwD), occupational therapies and activities aimed at maintaining the quality of life of PwD, such as reminiscence therapy (RT), are taking on an increasingly important role. Information and communication technology (ICT) has the potential to improve and facilitate RT by easing access to and selection of biographical information and related content, or by providing novel multimodal interaction forms to trigger memories; however, interactive multimedia technology is barely used in practice. This article presents three exploratory field studies that evaluated different aspects of RT technology use for PwD in care homes, including the utilization of online movie databases, interactive surface touch computers, and natural user interfaces allowing gestural and haptic interaction. In these studies, the usage of prototype systems was observed in occupational sessions with 5, 12, and 16 PwD. The results indicate positive effects of technology use, e.g., in the form of verbally elicited reminiscence statements, expressed joy, and playful interaction. Lessons learned for the design of technology-based RT interventions are presented and discussed.

  10. SETI meets a social intelligence: Dolphins as a model for real-time interaction and communication with a sentient species

    NASA Astrophysics Data System (ADS)

    Herzing, Denise L.

    2010-12-01

    In the past SETI has focused on the reception and deciphering of radio signals from potential remote civilizations. It is conceivable that real-time contact and interaction with a social intelligence may occur in the future. A serious look at the development of relationship, and deciphering of communication signals within and between a non-terrestrial, non-primate sentient species is relevant. Since 1985 a resident community of free-ranging Atlantic spotted dolphins has been observed regularly in the Bahamas. Life history, relationships, regular interspecific interactions with bottlenose dolphins, and multi-modal underwater communication signals have been documented. Dolphins display social communication signals modified for water, their body types, and sensory systems. Like anthropologists, human researchers engage in benign observation in the water and interact with these dolphins to develop rapport and trust. Many individual dolphins have been known for over 20 years. Learning the culturally appropriate etiquette has been important in the relationship with this alien society. To engage humans in interaction the dolphins often initiate spontaneous displays, mimicry, imitation, and synchrony. These elements may be emergent/universal features of one intelligent species contacting another for the intention of initiating interaction. This should be a consideration for real-time contact and interaction for future SETI work.

  11. Development of a Multi-modal Tissue Diagnostic System Combining High Frequency Ultrasound and Photoacoustic Imaging with Lifetime Fluorescence Spectroscopy

    PubMed Central

    Sun, Yang; Stephens, Douglas N.; Park, Jesung; Sun, Yinghua; Marcu, Laura; Cannata, Jonathan M.; Shung, K. Kirk

    2010-01-01

    We report the development and validation of a multi-modal tissue diagnostic technology that combines three complementary techniques into one system: ultrasound backscatter microscopy (UBM), photoacoustic imaging (PAI), and time-resolved laser-induced fluorescence spectroscopy (TR-LIFS). UBM enables the reconstruction of the tissue microanatomy. PAI maps the optical absorption heterogeneity of the tissue associated with structural information and has the potential to provide functional imaging of the tissue. Examination of the UBM and PAI images allows localization of regions of interest for TR-LIFS evaluation of the tissue composition. The hybrid probe consists of a single-element ring transducer with concentric fiber optics for multi-modal data acquisition. Validation and characterization of the multi-modal system and coregistration of the ultrasonic, photoacoustic, and spectroscopic data were conducted in a physical phantom with ultrasound-scattering, optical-absorption, and fluorescence properties. The UBM system with the 41 MHz ring transducer reaches axial and lateral resolutions of 30 and 65 μm, respectively. The PAI system, with 532 nm excitation light from a Nd:YAG laser, shows strong contrast for the distribution of optical absorbers. The TR-LIFS system records the fluorescence decay with a time resolution of ~300 ps and high sensitivity in the nM concentration range. A biological phantom constructed with different types of tissue (tendon and fat) was used to demonstrate the complementary information provided by the three modalities. Fluorescence spectra and lifetimes were compared to differentiate the chemical composition of tissues at the regions of interest determined by the coregistered high-resolution UBM and PAI images. Current results demonstrate that the fusion of these techniques enables sequential detection of functional, morphological, and compositional features of biological tissue, suggesting potential applications in the diagnosis of tumors and atherosclerotic plaques. PMID:21894259
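    The TR-LIFS analysis step this record relies on, extracting a lifetime from a recorded fluorescence decay, amounts to exponential curve fitting. Below is a minimal sketch with synthetic data; the 4 ns lifetime, noise level, and 25 ns window are invented for the demo, not values from the study.

    ```python
    # Hedged sketch: single-exponential lifetime fit of a synthetic
    # fluorescence decay trace, the basic analysis behind TR-LIFS.
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t_ns, amplitude, tau_ns):
        """Single-exponential fluorescence decay model."""
        return amplitude * np.exp(-t_ns / tau_ns)

    t_ns = np.linspace(0.0, 25.0, 256)   # assumed 25 ns acquisition window
    true_tau_ns = 4.0                    # assumed lifetime, for the demo
    rng = np.random.default_rng(1)
    trace = decay(t_ns, 1.0, true_tau_ns) + 0.01 * rng.standard_normal(t_ns.size)

    # Fit amplitude and lifetime from an intentionally poor initial guess.
    (amp_fit, tau_fit), _ = curve_fit(decay, t_ns, trace, p0=(0.5, 1.0))
    print(f"fitted lifetime: {tau_fit:.2f} ns")
    ```

    Real TR-LIFS data would first be deconvolved with the instrument response function (the ~300 ps time resolution quoted above); that step is omitted here for brevity.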

  12. Development of a Multi-modal Tissue Diagnostic System Combining High Frequency Ultrasound and Photoacoustic Imaging with Lifetime Fluorescence Spectroscopy.

    PubMed

    Sun, Yang; Stephens, Douglas N; Park, Jesung; Sun, Yinghua; Marcu, Laura; Cannata, Jonathan M; Shung, K Kirk

    2008-01-01

    We report the development and validation of a multi-modal tissue diagnostic technology that combines three complementary techniques into one system: ultrasound backscatter microscopy (UBM), photoacoustic imaging (PAI), and time-resolved laser-induced fluorescence spectroscopy (TR-LIFS). UBM enables the reconstruction of the tissue microanatomy. PAI maps the optical absorption heterogeneity of the tissue associated with structural information and has the potential to provide functional imaging of the tissue. Examination of the UBM and PAI images allows localization of regions of interest for TR-LIFS evaluation of the tissue composition. The hybrid probe consists of a single-element ring transducer with concentric fiber optics for multi-modal data acquisition. Validation and characterization of the multi-modal system and coregistration of the ultrasonic, photoacoustic, and spectroscopic data were conducted in a physical phantom with ultrasound-scattering, optical-absorption, and fluorescence properties. The UBM system with the 41 MHz ring transducer reaches axial and lateral resolutions of 30 and 65 μm, respectively. The PAI system, with 532 nm excitation light from a Nd:YAG laser, shows strong contrast for the distribution of optical absorbers. The TR-LIFS system records the fluorescence decay with a time resolution of ~300 ps and high sensitivity in the nM concentration range. A biological phantom constructed with different types of tissue (tendon and fat) was used to demonstrate the complementary information provided by the three modalities. Fluorescence spectra and lifetimes were compared to differentiate the chemical composition of tissues at the regions of interest determined by the coregistered high-resolution UBM and PAI images. Current results demonstrate that the fusion of these techniques enables sequential detection of functional, morphological, and compositional features of biological tissue, suggesting potential applications in the diagnosis of tumors and atherosclerotic plaques.

  13. Combined multimodal photoacoustic tomography, optical coherence tomography (OCT) and OCT-based angiography system for in vivo imaging of multiple skin disorders in humans (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Liu, Mengyang; Chen, Zhe; Sinz, Christoph; Rank, Elisabet; Zabihian, Behrooz; Zhang, Edward Z.; Beard, Paul C.; Kittler, Harald; Drexler, Wolfgang

    2017-02-01

    All-optical photoacoustic tomography (PAT) using a planar Fabry-Perot interferometer polymer film sensor has been demonstrated for in vivo human palm imaging with an imaging penetration depth of 5 mm. The relatively larger vessels in the superficial plexus and the vessels in the dermal plexus are visible in PAT. However, due to both resolution and sensitivity limits, all-optical PAT cannot reveal smaller vessels such as capillary loops and venules. Melanin absorption also sometimes makes it difficult for PAT to resolve vessels. Optical coherence tomography (OCT) based angiography, on the other hand, has proven suitable for visualizing the microvasculature within the first couple of millimeters of human skin. In our work, we combine an all-optical PAT system with an OCT system featuring a phase-stable akinetic swept source. This multimodal PAT/OCT/OCT-angiography system provides co-registered information on the human skin vasculature as well as the cutaneous structure. The scanning units of the sub-systems are assembled into one probe, which is then mounted onto a portable rack. The probe and rack design gives six degrees of freedom, allowing the multimodal optical imaging probe to access nearly all regions of the human body. Utilizing this probe, we perform imaging on patients with various skin disorders as well as on healthy controls. The fused PAT/OCT-angiography volume shows the complete blood vessel network in human skin, embedded in the morphology provided by OCT. A comparison between results from the disordered regions and the normal regions demonstrates the clinical translational value of this multimodal optical imaging system in dermatology.

  14. Using the Web as Input and Discourse Interactions for the Construction of Meaning and the Acquisition of Lexical Units in University Level English as a Foreign Language

    ERIC Educational Resources Information Center

    Mora Piedra, Marco Antonio

    2013-01-01

    The purpose of this mixed-methods study was to examine the impact of Web multimodality plus dialogical interactions in the acquisition and retention of novel lexical items among EFL students under a social constructionist framework. The lexical acquisition of 107 1st-year English majors at the University of Costa Rica was analyzed through…

  15. Who Says What's Correct and How Do You Say It? Multimodal Management of Oral Peer-Assessment in a Grammar Boardgame in a Foreign Language Classroom

    ERIC Educational Resources Information Center

    Konzett, Carmen

    2015-01-01

    This paper describes how a small group of students in a foreign language classroom manage the interactional task of orally assessing the correctness of verb forms while playing a board game aimed at revising verb conjugation. In their interaction, the students orient to the institutional context of this activity as a language learning exercise by…

  16. Criteria for Evaluating a Game-Based CALL Platform

    ERIC Educational Resources Information Center

    Ní Chiaráin, Neasa; Ní Chasaide, Ailbhe

    2017-01-01

    Game-based Computer-Assisted Language Learning (CALL) is an area that currently warrants attention, as task-based, interactive, multimodal games increasingly show promise for language learning. This area is inherently multidisciplinary--theories from second language acquisition, games, and psychology must be explored and relevant concepts from…

  17. Walking and Talking with Living Texts: Breathing Life against Static Standardisation

    ERIC Educational Resources Information Center

    Phillips, Louise Gwenneth; Willis, Linda-Dianne

    2014-01-01

    Current educational reform, policy and public discourse emphasise standardisation of testing, curricula and professional practice, yet the landscape of literacy practices today is fluid, interactive, multimodal, ever-changing, adaptive and collaborative. How then can English and literacy educators negotiate these conflicting terrains? The nature…

  18. Multimode optomechanical system in the quantum regime

    PubMed Central

    Nielsen, William Hvidtfelt Padkær; Tsaturyan, Yeghishe; Møller, Christoffer Bo; Polzik, Eugene S.; Schliesser, Albert

    2017-01-01

    We realize a simple and robust optomechanical system with a multitude of long-lived (Q > 10^7) mechanical modes in a phononic-bandgap shielded membrane resonator. An optical mode of a compact Fabry–Perot resonator detects these modes’ motion with a measurement rate (96 kHz) that exceeds the mechanical decoherence rates already at moderate cryogenic temperatures (10 K). Reaching this quantum regime entails, inter alia, quantum measurement backaction exceeding thermal forces and thus strong optomechanical quantum correlations. In particular, we observe ponderomotive squeezing of the output light mediated by a multitude of mechanical resonator modes, with quantum noise suppression up to −2.4 dB (−3.6 dB if corrected for detection losses) and bandwidths ≲90 kHz. The multimode nature of the membrane and Fabry–Perot resonators will allow multimode entanglement involving electromagnetic, mechanical, and spin degrees of freedom. PMID:27999182

  19. [Acute inpatient conservative multimodal treatment of complex and multifactorial orthopedic diseases in the ANOA concept].

    PubMed

    Psczolla, M

    2013-10-01

    In Germany there is a clear deficit in the non-operative treatment of chronic and complex diseases and pain disorders in acute care hospitals. Only about 20% of such treatments are carried out in orthopedic hospitals. Hospitals specialized in manual medicine have therefore formed a working group on non-operative orthopedic manual medicine acute care clinics (ANOA). The ANOA has developed a multimodal assessment procedure, OPS 8-977, which describes the structure and process quality of multimodal and interdisciplinary diagnosis and treatment of the musculoskeletal system. Patients are treated according to clinical pathways oriented on the clinical findings. The increased duration of treatment in the German diagnosis-related groups (DRG) system is compensated for with supplemental remuneration. Thus, complex and multifactorial orthopedic diseases and pain disorders can be treated conservatively and appropriately in inpatient departments of acute care hospitals.

  20. Integrated photoacoustic microscopy, optical coherence tomography, and fluorescence microscopy for multimodal chorioretinal imaging

    NASA Astrophysics Data System (ADS)

    Tian, Chao; Zhang, Wei; Nguyen, Van Phuc; Huang, Ziyi; Wang, Xueding; Paulus, Yannis M.

    2018-02-01

    Currently available clinical retinal imaging techniques have limitations, including limited depth of penetration or the requirement for invasive injection of exogenous contrast agents. Here, we developed a novel multimodal imaging system for high-speed, high-resolution retinal imaging of larger animals, such as rabbits. The system integrates three state-of-the-art imaging modalities: photoacoustic microscopy (PAM), optical coherence tomography (OCT), and fluorescence microscopy (FM). In vivo experimental results on rabbit eyes show that the PAM is able to visualize laser-induced retinal burns and distinguish individual eye blood vessels using a laser exposure dose of 80 nJ, which is well below the American National Standards Institute (ANSI) safety limit of 160 nJ. The OCT can discern different retinal layers and visualize laser burns and choroidal detachments. The novel multimodal imaging platform holds great promise in ophthalmic imaging.

  1. Effects of Multimodal Mandala Yoga on Social and Emotional Skills for Youth with Autism Spectrum Disorder: An Exploratory Study

    PubMed Central

    Litchke, Lyn Gorbett; Liu, Ting; Castro, Stephanie

    2018-01-01

    Context: Youth with autism spectrum disorder (ASD) demonstrate impairment in the ability to socially and emotionally relate to others that can limit participation in groups, interaction with peers, and building successful life relationships. Aims: The aim of this exploratory study was to examine the effects of a novel multimodal Mandala yoga program on social and emotional skills for youth with ASD. Subjects and Methods: Five males with ASD attended 1 h yoga sessions, twice a week for 4 weeks. Multimodal Mandala yoga comprised 26 circular partner/group poses, color and tracing sheets, rhythmic chanting, yoga cards, and games. Treatment and Research Institute for ASD Social Skills Assessment (TSSA) scores were collected before and after the eight yoga sessions. The Modified Facial Mood Scale (MFMS) was used to observe mood changes before and after each yoga class. Paired sample t-tests were conducted on TSSA and MFMS scores to compare social and emotional differences after the 4-week camp. Narrative field notes were documented after each of the eight yoga sessions. Results: A significant improvement from pre- to post-test was found in overall TSSA (t(4) = −5.744, P = 0.005) and on the respondent to initiation (t(4) = −3.726, P = 0.020), initiating interaction (t(4) = −8.5, P = 0.039), and affective understanding and perspective taking subscales (t(4) = −5.171, P = 0.007). Youths' MFMS scores increased from 80% to 100% at the end of the eight yoga sessions, demonstrating a pleasant or positive mood. Thematic analysis of the narrative notes identified three key factors associated with the yoga experience: (a) enhanced mood and emotional expression, (b) increased empathy toward others, and (c) improved teamwork skills. Conclusion: This multimodal Mandala yoga training has implications for developing positive social and emotional skills for youth with ASD. PMID:29343932
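
    The paired-sample design reported above can be sketched with a small standard-library computation. The scores below are hypothetical stand-ins, not the study's data (only summary statistics were published); the sign of t depends on the order of subtraction.

    ```python
    # Paired t-test sketch: compare pre vs. post scores for the same subjects.
    # The numbers are illustrative assumptions, not the study's raw data.
    import math
    import statistics

    pre = [55, 60, 52, 58, 61]    # hypothetical pre-test totals (n = 5)
    post = [68, 71, 66, 70, 73]   # hypothetical post-test totals

    diffs = [b - a for a, b in zip(pre, post)]            # per-subject change
    n = len(diffs)
    # t = mean difference divided by its standard error, df = n - 1
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    print(f"t({n - 1}) = {t:.2f}")
    ```

    Looking the resulting t up against a t-distribution with n − 1 degrees of freedom then yields the P value reported in such abstracts.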

  2. Tactile feedback for relief of deafferentation pain using virtual reality system: a pilot study.

    PubMed

    Sano, Yuko; Wake, Naoki; Ichinose, Akimichi; Osumi, Michihiro; Oya, Reishi; Sumitani, Masahiko; Kumagaya, Shin-Ichiro; Kuniyoshi, Yasuo

    2016-06-28

    Previous studies have tried to relieve deafferentation pain (DP) by using virtual reality rehabilitation systems. However, the effectiveness of multimodal sensory feedback was not validated. The objective of this study is to relieve DP by neurorehabilitation using a virtual reality system with multimodal sensory feedback and to validate the efficacy of tactile feedback on immediate pain reduction. We have developed a virtual reality rehabilitation system with multimodal sensory feedback and applied it to seven patients with DP caused by brachial plexus avulsion or arm amputation. The patients executed a reaching task using the virtual phantom limb manipulated by their real intact limb. The reaching task was conducted under two conditions: one with tactile feedback on the intact hand and one without. The pain intensity was evaluated through a questionnaire. We found that the task with the tactile feedback reduced DP more (41.8 ± 19.8 %) than the task without the tactile feedback (28.2 ± 29.5 %), which was supported by a Wilcoxon signed-rank test result (p < 0.05). Overall, our findings indicate that the tactile feedback improves the immediate pain intensity through rehabilitation using our virtual reality system.

  3. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    NASA Astrophysics Data System (ADS)

    Morishima, Shigeo; Nakamura, Satoshi

    2004-12-01

    We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  4. Multimodal biometric approach for cancelable face template generation

    NASA Astrophysics Data System (ADS)

    Paul, Padma Polash; Gavrilova, Marina

    2012-06-01

    Due to the rapid growth of biometric technology, template protection becomes crucial to secure the integrity of the biometric security system and prevent unauthorized access. Cancelable biometrics is emerging as one of the best solutions to secure biometric identification and verification systems. We present a novel, robust cancelable template generation algorithm that takes advantage of multimodal biometrics using feature-level fusion. Feature-level fusion of different facial features is applied to generate the cancelable template. A proposed algorithm based on multi-fold random projection and a fuzzy commitment scheme is used for this purpose. In cancelable template generation, one of the main difficulties is preserving the interclass variance of the features. We have found that interclass variations of the features that are lost during multi-fold random projection can be recovered using fusion of different feature subsets and projection into a new feature domain. By applying the multimodal technique at the feature level, we enhance the interclass variability and hence improve the performance of the system. We have tested the system with classifier fusion for different feature subsets and different cancelable template fusions. Experiments have shown that the cancelable template improves the performance of the biometric system compared with the original template.
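
    The core cancelable-template idea above (a key-seeded random projection that can be revoked by re-keying) can be sketched as follows. The dimensions, key handling, and single projection used here are illustrative assumptions, not the authors' multi-fold algorithm or their fusion step.

    ```python
    # Sketch of cancelable template generation by key-seeded random projection.
    # All names and sizes here are hypothetical, for illustration only.
    import numpy as np

    def cancelable_template(feature: np.ndarray, user_key: int, out_dim: int) -> np.ndarray:
        """Project a feature vector through a random matrix seeded by a user key.

        The same key always reproduces the same template; issuing a new key
        "cancels" the old template without exposing the raw biometric.
        """
        rng = np.random.default_rng(user_key)            # key-seeded, repeatable
        proj = rng.standard_normal((out_dim, feature.size))
        return proj @ feature

    face_features = np.random.rand(128)                  # stand-in facial feature vector
    t1 = cancelable_template(face_features, user_key=42, out_dim=64)
    t2 = cancelable_template(face_features, user_key=42, out_dim=64)
    t3 = cancelable_template(face_features, user_key=99, out_dim=64)
    print(np.allclose(t1, t2), np.allclose(t1, t3))      # True False
    ```

    Random projection approximately preserves distances between feature vectors, which is why matching can still be performed in the projected domain.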

  5. Application of fuzzy neural network technologies in management of transport and logistics processes in Arctic

    NASA Astrophysics Data System (ADS)

    Levchenko, N. G.; Glushkov, S. V.; Sobolevskaya, E. Yu; Orlov, A. P.

    2018-05-01

    The method of modeling transport and logistics processes using fuzzy neural network technologies has been considered. Analysis of the implemented fuzzy neural network model of the information management system for transnational multimodal transportation showed the expediency of applying this method to the management of transport and logistics processes in Arctic and Subarctic conditions. The modular architecture of this model can be expanded by incorporating additional modules, since working conditions in the Arctic and Subarctic will present increasingly demanding tasks. The architecture allows the information management system to be extended without affecting the system or the method itself. The model has a wide range of possible applications, including: analysis of the situation and behavior of interacting elements; dynamic monitoring and diagnostics of management processes; simulation of real events and processes; and prediction and prevention of critical situations.

  6. Obstacle traversal and self-righting of bio-inspired robots reveal the physics of multi-modal locomotion

    NASA Astrophysics Data System (ADS)

    Li, Chen; Fearing, Ronald; Full, Robert

    Most animals move in nature in a variety of locomotor modes. For example, to traverse obstacles like dense vegetation, cockroaches can climb over, push across, reorient their bodies to maneuver through slits, or even transition among these modes forming diverse locomotor pathways; if flipped over, they can also self-right using wings or legs to generate body pitch or roll. By contrast, most locomotion studies have focused on a single mode such as running, walking, or jumping, and robots are still far from capable of life-like, robust, multi-modal locomotion in the real world. Here, we present two recent studies using bio-inspired robots, together with new locomotion energy landscapes derived from locomotor-environment interaction physics, to begin to understand the physics of multi-modal locomotion. (1) Our experiment of a cockroach-inspired legged robot traversing grass-like beam obstacles reveals that, with a terradynamically ``streamlined'' rounded body like that of the insect, robot traversal becomes more probable by accessing locomotor pathways that overcome lower potential energy barriers. (2) Our experiment of a cockroach-inspired self-righting robot further suggests that body vibrations are crucial for exploring locomotion energy landscapes and reaching lower barrier pathways. Finally, we posit that our new framework of locomotion energy landscapes holds promise to better understand and predict multi-modal biological and robotic movement.

  7. Multimodal human communication--targeting facial expressions, speech content and prosody.

    PubMed

    Regenbogen, Christina; Schneider, Daniel A; Gur, Raquel E; Schneider, Frank; Habel, Ute; Kellermann, Thilo

    2012-05-01

    Human communication is based on a dynamic information exchange across the communication channels of facial expressions, prosody, and speech content. This fMRI study elucidated the impact of multimodal emotion processing and the specific contribution of each channel to behavioral empathy and its prerequisites. Ninety-six video clips displaying actors who told self-related stories were presented to 27 healthy participants. In two conditions, all channels uniformly transported only emotional or neutral information. Three conditions selectively presented two emotional channels and one neutral channel. Subjects indicated the actors' emotional valence and their own while fMRI was recorded. Activation patterns of tri-channel emotional communication reflected multimodal processing and facilitative effects for empathy. Accordingly, subjects' behavioral empathy rates significantly deteriorated once one source was neutral. However, emotionality expressed via two of three channels yielded activation in a network associated with theory-of-mind processes. This suggested participants' effort to infer the mental states of their counterparts and was accompanied by a decline in behavioral empathy, driven by the participants' emotional responses. Channel-specific emotional contributions were present in modality-specific areas. The identification of different network nodes associated with human interactions constitutes a prerequisite for understanding the dynamics that underlie multimodal integration and explains the observed decline in empathy rates. This task might also shed light on behavioral deficits and neural changes that accompany psychiatric diseases. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. A multimodal high-value curriculum affects drivers of utilization and performance on the high-value care component of the internal medicine in-training exam.

    PubMed

    Chau, Tom; Loertscher, Laura

    2018-01-01

    Background: Teaching the practice of high-value care (HVC) is an increasingly important function of graduate medical education, but best practices and long-term outcomes remain unknown. Objective: To determine whether a multimodal curriculum designed to address specific drivers of low-value care would affect resident attitudes, skills, and performance of HVC as tested by the Internal Medicine In-Training Exam (ITE). Methods: In 2012, we performed a baseline needs assessment among internal medicine residents at a community program regarding drivers of healthcare utilization. We then created a multimodal curriculum with online interactive worksheets, lectures, and faculty buy-in to target specific skills, knowledge, and culture deficiencies. Perceived drivers of care and performance on the Internal Medicine ITE were assessed yearly through 2016. Results: Fourteen of 27 (52%) residents completed the initial needs assessment, while the curriculum was eventually seen by at least 24 of 27 (89%). The ITE was taken by every resident every year. Long-term, 3-year follow-up demonstrated persistent improvement in many drivers of utilization (patient requests, reliance on subspecialists, defensive medicine, and academic curiosity) and improvement with sustained high performance on the high-value component of the ITE. Conclusion: A multimodal curriculum targeting specific drivers of low-value care can change culture and lead to sustained improvement in the practice of HVC.

  9. Analyzing multimodality tomographic images and associated regions of interest with MIDAS

    NASA Astrophysics Data System (ADS)

    Tsui, Wai-Hon; Rusinek, Henry; Van Gelder, Peter; Lebedev, Sergey

    2001-07-01

    This paper outlines the design and features incorporated in a software package for analyzing multi-modality tomographic images. The package MIDAS has been evolving for the past 15 years and is in wide use by researchers at New York University School of Medicine and a number of collaborating research sites. It was written in the C language and runs on Sun workstations and Intel PCs under the Solaris operating system. A unique strength of the MIDAS package lies in its ability to generate, manipulate and analyze a practically unlimited number of regions of interest (ROIs). These regions are automatically saved in an efficient data structure and linked to associated images. A wide selection of set theoretical (e.g. union, xor, difference), geometrical (e.g. move, rotate) and morphological (grow, peel) operators can be applied to an arbitrary selection of ROIs. ROIs are constructed as a result of image segmentation algorithms incorporated in MIDAS; they also can be drawn interactively. These ROI editing operations can be applied in either 2D or 3D mode. ROI statistics generated by MIDAS include means, standard deviations, centroids and histograms. Other image manipulation tools incorporated in MIDAS are multimodality and within modality coregistration methods (including landmark matching, surface fitting and Woods' correlation methods) and image reformatting methods (using nearest-neighbor, tri-linear or sinc interpolation). Applications of MIDAS include: (1) neuroanatomy research: marking anatomical structures in one orientation, reformatting marks to another orientation; (2) tissue volume measurements: brain structures (PET, MRI, CT), lung nodules (low dose CT), breast density (MRI); (3) analysis of functional (SPECT, PET) experiments by overlaying corresponding structural scans; (4) longitudinal studies: regional measurement of atrophy.
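
    The set-theoretical ROI operations and statistics described above map naturally onto boolean masks over an image array. The sketch below is illustrative, using NumPy; it is not MIDAS's actual C data structures.

    ```python
    # Illustrative sketch: ROIs as boolean masks over a 2-D "tomographic slice",
    # supporting union/xor/difference and per-ROI statistics, in the spirit of
    # the MIDAS operations described above (not its real implementation).
    import numpy as np

    image = np.arange(100.0).reshape(10, 10)   # toy image with known values

    roi_a = np.zeros((10, 10), dtype=bool)
    roi_b = np.zeros((10, 10), dtype=bool)
    roi_a[2:6, 2:6] = True                     # two overlapping square ROIs
    roi_b[4:8, 4:8] = True

    union = roi_a | roi_b                      # set-theoretical operators
    xor = roi_a ^ roi_b
    diff = roi_a & ~roi_b                      # pixels in A but not in B

    # Per-ROI statistics: mean, standard deviation, centroid
    vals = image[union]
    cy, cx = np.argwhere(union).mean(axis=0)   # centroid in (row, col) coordinates
    print(f"mean={vals.mean():.2f} std={vals.std():.2f} centroid=({cy:.2f}, {cx:.2f})")
    ```

    Morphological operators like "grow" and "peel" would correspond to binary dilation and erosion of the same masks.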

  10. Multimodality Inferring of Human Cognitive States Based on Integration of Neuro-Fuzzy Network and Information Fusion Techniques

    NASA Astrophysics Data System (ADS)

    Yang, G.; Lin, Y.; Bhattacharya, P.

    2007-12-01

    To achieve effective and safe operation of machine systems in which the human and the machine interact, the machine needs to understand the human state, especially the cognitive state, when the human's operation task demands intensive cognitive activity. Because human cognitive states and behaviors, as well as expressions or cues, are well known to be highly uncertain, the recent trend in inferring the human state is to consider multimodal features of the human operator. In this paper, we present a method for multimodal inference of human cognitive states by integrating neuro-fuzzy network and information fusion techniques. To demonstrate the effectiveness of this method, we take driver fatigue detection as an example. The proposed method has, in particular, the following new features. First, human expressions are classified into four categories: (i) casual or contextual features, (ii) contact features, (iii) contactless features, and (iv) performance features. Second, the fuzzy neural network technique, in particular the Takagi-Sugeno-Kang (TSK) model, is employed to cope with uncertain behaviors. Third, the sensor fusion technique, in particular ordered weighted aggregation (OWA), is integrated with the TSK model in such a way that cues are taken as inputs to the TSK model, and the outputs of the TSK model are then fused by the OWA operator, which gives outputs corresponding to the particular cognitive states of interest (e.g., fatigue). We call this method TSK-OWA. Validation of the TSK-OWA, performed in the Northeastern University vehicle drive simulator, has shown the proposed method to be promising as a general tool for human cognitive state inference and as a special tool for driver fatigue detection.
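
    The OWA fusion step named above can be sketched in a few lines. The cue scores and rank weights here are hypothetical placeholders for the TSK model's outputs, not values from the paper.

    ```python
    # Ordered weighted aggregation (OWA) sketch. Unlike a plain weighted mean,
    # OWA sorts the inputs first, so weights attach to *ranks* of evidence,
    # not to particular sensors or cues.
    def owa(scores, weights):
        assert abs(sum(weights) - 1.0) < 1e-9   # weights must sum to 1
        ranked = sorted(scores, reverse=True)   # strongest evidence first
        return sum(w * s for w, s in zip(weights, ranked))

    # Hypothetical fatigue evidence from the four cue categories (contextual,
    # contact, contactless, performance), each scored in [0, 1] upstream:
    cues = [0.9, 0.4, 0.7, 0.6]
    weights = [0.4, 0.3, 0.2, 0.1]              # emphasize the strongest evidence
    print(owa(cues, weights))                   # ≈ 0.73
    ```

    Choosing front-loaded weights makes the aggregation behave more like "at least one cue strongly indicates fatigue", while uniform weights recover an ordinary mean.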

  11. [Implementation of interdisciplinary multimodal pain therapy according to OPS 8‑918 : Recommendations of the ad hoc commission for interdisciplinary multimodal pain therapy of the German Pain Association].

    PubMed

    Arnold, B; Böger, A; Brinkschmidt, T; Casser, H-R; Irnich, D; Kaiser, U; Klimczyk, K; Lutz, J; Pfingsten, M; Sabatowski, R; Schiltenwolf, M; Söllner, W

    2018-02-01

    With the implementation of the German diagnosis-related groups (DRG) reimbursement system in hospitals, interdisciplinary multimodal pain therapy was incorporated into the associated catalogue of procedures (OPS 8‑918). Yet, the criteria describing the procedure of interdisciplinary multimodal pain therapy are neither precise nor unambiguous. This has led to discrepancies in interpretation regarding the handling of the procedure, making it difficult for the medical services of health insurance companies to evaluate the accordance between the delivered therapy and the required criteria. Since the number of pain units has increased in recent years, the number of examinations by the medical services of health insurance companies has also increased. This article, published by the ad hoc commission for interdisciplinary multimodal pain therapy of the German Pain Association, provides specific recommendations for the correct implementation of interdisciplinary multimodal pain therapy in routine care. The aim is to achieve a maximum level of accordance between health care providers and the requirements of the medical examiners from health insurance companies. More extensive criteria regarding interdisciplinary multimodal pain treatment in an in-patient setting, especially for patients with chronic and complex pain, are clearly needed. Thus, the authors further discuss specific aspects of the further development of the OPS code. However, the application of the OPS code still leaves room for interpretation regarding treatment intensity and process quality. Therefore, the delivery of pain management in sufficient quantity and quality remains the responsibility of each health care provider.

  12. DBSAR's First Multimode Flight Campaign

    NASA Technical Reports Server (NTRS)

    Rincon, Rafael F.; Vega, Manuel; Buenfil, Manuel; Geist, Alessandro; Hilliard, Lawrence; Racette, Paul

    2010-01-01

    The Digital Beamforming SAR (DBSAR) is an airborne imaging radar system that combines phased array technology, reconfigurable on-board processing and waveform generation, and advances in signal processing to enable techniques not possible with conventional SARs. The system exploits the versatility inherent in phased-array technology together with a state-of-the-art data acquisition and real-time processor in order to implement multi-mode measurement techniques in a single radar system. Operational modes include scatterometry over multiple antenna beams, Synthetic Aperture Radar (SAR) over several antenna beams, and altimetry. The radar was flight tested in October 2008 on board the NASA P3 aircraft over the Delmarva Peninsula, MD. Results on the DBSAR system performance are presented.

  13. Cross-Cultural Evidence for Multimodal Motherese: Asian-Indian Mothers’ Adaptive Use of Synchronous Words and Gestures

    PubMed Central

    Maganti, Madhavilatha; Bahrick, Lorraine E.

    2014-01-01

    In a quasi-experimental study, twenty-four Asian-Indian mothers were asked to teach novel (target) names for two objects and two actions to their children of three different levels of lexical-mapping development, pre-lexical (5–8 months), early-lexical (9–17 months), and advanced-lexical (20–43 months). Target (N = 1482) and non-target (other, N = 2411) naming was coded for synchronous spoken words and object motion (multimodal motherese) and other naming styles. Indian mothers abundantly used multimodal motherese with target words to highlight novel word-referent relations, paralleling earlier findings from American mothers (Gogate, Bahrick, & Watson, 2000). They used it with target words more often for pre-lexical infants than advanced-lexical children, and to name target actions later into children’s development. Unlike American mothers, Indian mothers also abundantly used multimodal motherese to name target objects later into children’s development. Finally, monolingual mothers who spoke a verb-dominant Indian language used multimodal motherese more often than bilingual mothers who also spoke noun-dominant English to their child. The findings suggest that within a dynamic and reciprocal mother-infant communication system, multimodal motherese adapts to unify novel words and referents across cultures. It adapts to children’s level of lexical development and to ambient language-specific lexical-dominance hierarchies. PMID:25285369

  14. Multimodal microscopy and the stepwise multi-photon activation fluorescence of melanin

    NASA Astrophysics Data System (ADS)

    Lai, Zhenhua

    The author's work is divided into three aspects: multimodal microscopy, stepwise multi-photon activation fluorescence (SMPAF) of melanin, and customized-profile lenses (CPL) for on-axis laser scanners, each of which is introduced in turn. A multimodal microscope provides the ability to image samples with multiple modalities on the same stage, which combines the benefits of all modalities. The multimodal microscopes developed in this dissertation are the Keck 3D fusion multimodal microscope 2.0 (3DFM 2.0), upgraded from the old 3DFM with improved performance and flexibility, and the multimodal microscope for targeting small particles (the "Target" system). The control systems developed for both microscopes are low-cost and easy to build, with all components off-the-shelf. These control systems have not only significantly decreased the complexity and size of the microscopes, but also increased pixel resolution and flexibility. The SMPAF of melanin, activated by a continuous-wave (CW) mode near-infrared (NIR) laser, has potential as a low-cost and reliable method of detecting melanin. The photophysics of melanin SMPAF has been studied by theoretical analysis of the excitation process and investigation of the spectra, activation threshold, and photon number absorption of melanin SMPAF. SMPAF images of melanin in mouse hair and skin, mouse melanoma, and human black and white hairs are compared with images taken by conventional multi-photon fluorescence microscopy (MPFM) and confocal reflectance microscopy (CRM). SMPAF images significantly increase specificity and demonstrate the potential to increase sensitivity for melanin detection compared to MPFM and CRM images. Employing melanin SMPAF imaging to detect melanin inside human skin in vivo has been demonstrated, which proves the effectiveness of melanin detection using SMPAF for medical purposes.
    Selective melanin ablation with micrometer resolution has also been demonstrated using the Target system. Compared to traditional selective photothermolysis, this method offers higher precision, higher specificity, and deeper penetration. Therefore, SMPAF-guided selective ablation of melanin is a promising tool for removing melanin for both medical and cosmetic purposes. Three CPLs have been designed: for low-cost linear-motion scanners, for low-cost fast-spinning scanners, and for high-precision fast-spinning scanners. Each design has been tailored to industrial manufacturing capabilities and market demands.

  15. Bodies in Composition: Teaching Writing through Kinesthetic Performance

    ERIC Educational Resources Information Center

    Butler, Janine

    2017-01-01

    This article calls on composition instructors to reflect consciously on how we can use our bodies kinesthetically to perform multimodal writing processes through gestural, visual, and spatial modes. Teaching writing through kinesthetic performance can show students that our bodies are being constructed via interaction with audiences, akin to the…

  16. Patterns in Teachers' Instructional Design When Integrating Apps in Middle School Content-Area Teaching

    ERIC Educational Resources Information Center

    Karchmer-Klein, Rachel; Mouza, Chrystalla; Harlow Shinas, Valerie; Park, Sohee

    2017-01-01

    The purpose of this study was to examine patterns evident in the ways middle school teachers, who value technology integration, design instruction that leverages educational applications (app) affordances. Using the pedagogy of multiliteracies (Cope & Kalantzis, 2015) and app affordances of multimodality, collaboration, and interactivity as…

  17. Multimodal Brain Imaging in Autism Spectrum Disorder and the Promise of Twin Research

    ERIC Educational Resources Information Center

    Mevel, Katell; Fransson, Peter; Bölte, Sven

    2015-01-01

    Current evidence suggests the phenotype of autism spectrum disorder to be driven by a complex interaction of genetic and environmental factors impacting brain maturation, synaptic function, and cortical networks. However, findings are heterogeneous, and the exact neurobiological pathways of autism spectrum disorder still remain poorly…

  18. The Joint Organization of Interaction within a Multimodal CSCL Medium

    ERIC Educational Resources Information Center

    Cakir, Murat Perit; Zemel, Alan; Stahl, Gerry

    2009-01-01

    In order to collaborate effectively in group discourse on a topic like mathematical patterns, group participants must organize their activities in ways that share the significance of their utterances, inscriptions, and behaviors. Here, we report the results of an ethnomethodological case study of collaborative math problem-solving activities…

  19. Multiliteracies, Social Futures, and Writing Centers

    ERIC Educational Resources Information Center

    Trimbur, John

    2010-01-01

    In this article, the author mentions his experience at Worcester Polytechnic Institute (WPI) because he thinks it is fairly indicative of recent trends in writing center theory and practice to see literacy as a multimodal activity in which oral, written, and visual communication intertwine and interact. He says this notion of multiliteracies has…

  20. Adapting Books: Ready, Set, Read!: EAT: Equipment, Adaptations, and Technology

    ERIC Educational Resources Information Center

    Schoonover, Judith; Norton-Darr, Sally

    2016-01-01

    Developing multimodal materials to introduce or extend literacy experiences sets the stage for literacy success. Alternative ways to organize, display, arrange, interact with, and respond to information produce greater understanding of concepts. Adaptations include making books easier to use (turning pages or holding) and text easier to read…

  1. Brief Report: Group Social-Multimodal Intervention for HFASD

    ERIC Educational Resources Information Center

    Bauminger, Nirit

    2007-01-01

    The current study is the second part of a 2-year cognitive-behavioral-ecological (CB-E) intervention for high-functioning (HF) children with autism spectrum disorder (ASD). We examined the utility of a group-centered intervention on children's ability to interact cooperatively with peers during structured and non-structured social situations. Direct…

  2. Multimodal optical imaging system for in vivo investigation of cerebral oxygen delivery and energy metabolism

    PubMed Central

    Yaseen, Mohammad A.; Srinivasan, Vivek J.; Gorczynska, Iwona; Fujimoto, James G.; Boas, David A.; Sakadžić, Sava

    2015-01-01

    Improving our understanding of brain function requires novel tools to observe multiple physiological parameters with high resolution in vivo. We have developed a multimodal imaging system for investigating multiple facets of cerebral blood flow and metabolism in small animals. The system was custom designed and features multiple optical imaging capabilities, including 2-photon and confocal lifetime microscopy, optical coherence tomography, laser speckle imaging, and optical intrinsic signal imaging. Here, we provide details of the system’s design and present in vivo observations of multiple metrics of cerebral oxygen delivery and energy metabolism, including oxygen partial pressure, microvascular blood flow, and NADH autofluorescence. PMID:26713212

  3. Multi-State Vibronic Interactions in Fluorinated Benzene Radical Cations.

    NASA Astrophysics Data System (ADS)

    Faraji, S.; Köppel, H.

    2009-06-01

    Conical intersections of potential energy surfaces have emerged as paradigms for signalling strong nonadiabatic coupling effects. An important class of systems where some of these effects have been analyzed are the benzene and benzenoid cations, whose electronic structure, spectroscopy, and dynamics have received great attention in the literature. In the present work a brief overview is given of our theoretical treatments of multi-mode and multi-state vibronic interactions in the benzene radical cation and some of its fluorinated derivatives. The fluorobenzene derivatives are of systematic interest for at least two reasons. (1) The reduction of symmetry by incomplete fluorination leads to a disappearance of the Jahn-Teller effect present in the parent cation. (2) A specific, more chemical effect of fluorination is the energetic increase of the lowest σ-type electronic states of the radical cations. The multi-mode, multi-state vibronic interactions between the five lowest electronic states of the fluorobenzene radical cations are investigated theoretically, based on ab initio electronic structure data and employing the well-established linear vibronic coupling model, augmented by quadratic coupling terms for the totally symmetric vibrational modes. Low-energy conical intersections and strong vibronic couplings are found to prevail within the sets of X̃–Ã and B̃–C̃–D̃ cationic states, while the interactions between these two sets of states are weaker and depend on the particular isomer. This is attributed to the different locations of the minima of the various conical intersections occurring in these systems. Wave-packet dynamical simulations for these coupled potential energy surfaces are performed, utilizing the powerful multi-configuration time-dependent Hartree method.
Ultrafast internal conversion processes and the analysis of the MATI and photoelectron spectra shed new light on the spectroscopy and fluorescence dynamics of these species. References: W. Domcke, D. R. Yarkony, and H. Köppel (eds.), Advanced Series in Physical Chemistry (World Scientific, Singapore, 2004); M. H. Beck, A. Jäckle, G. A. Worth, and H.-D. Meyer, Phys. Rep. 324, 1 (2000); S. Faraji and H. Köppel (Part I), and S. Faraji, H. Köppel, and H.-D. Meyer (Part II), J. Chem. Phys. 129, 074310 (2008).
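For orientation, the linear vibronic coupling (LVC) model invoked in this record has a standard generic form (sketched here from the textbook construction, not from this paper's specific parametrization):

```latex
% Diabatic LVC Hamiltonian for coupled electronic states n, m and
% dimensionless normal coordinates Q_i with frequencies omega_i:
\mathcal{H} \;=\; \sum_i \frac{\omega_i}{2}
   \Bigl(-\frac{\partial^2}{\partial Q_i^2} + Q_i^2\Bigr)\,\mathbf{1}
   \;+\; \mathbf{W}(Q),
\qquad
W_{nn}(Q) = E_n + \sum_i \kappa_i^{(n)} Q_i
          + \tfrac{1}{2}\sum_i \gamma_i^{(n)} Q_i^2,
\qquad
W_{nm}(Q) = \sum_i \lambda_i^{(nm)} Q_i \quad (n \neq m).
```

The intrastate gradients κ (for the totally symmetric "tuning" modes) and the interstate couplings λ determine where the diabatic surfaces intersect; the quadratic terms γ for the totally symmetric modes correspond to the augmentation mentioned in the abstract.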

  4. Perceived Synchrony of Frog Multimodal Signal Components Is Influenced by Content and Order.

    PubMed

    Taylor, Ryan C; Page, Rachel A; Klein, Barrett A; Ryan, Michael J; Hunter, Kimberly L

    2017-10-01

    Multimodal signaling is common in communication systems. Depending on the species, individual signal components may be produced synchronously as a result of physiological constraint (fixed), or each component may be produced independently (fluid) in time. For animals that rely on fixed signals, a basic prediction is that asynchrony between the components should degrade the perception of signal salience, reducing receiver response. Male túngara frogs, Physalaemus pustulosus, produce a fixed multisensory courtship signal by vocalizing with two call components (whines and chucks) and inflating a vocal sac (visual component). Using a robotic frog, we tested female responses to variation in the temporal arrangement between acoustic and visual components. When the visual component lagged a complex call (whine + chuck), females largely rejected this asynchronous multisensory signal in favor of the complex call absent the visual cue. When the chuck component was removed from one call, but the robofrog inflation lagged the complex call, females responded strongly to the asynchronous multimodal signal. When the chuck component was removed from both calls, females reversed preference and responded positively to the asynchronous multisensory signal. When the visual component preceded the call, females responded as often to the multimodal signal as to the call alone. These data show that asynchrony of a normally fixed signal does reduce receiver responsiveness. The magnitude and overall response, however, depend on specific temporal interactions between the acoustic and visual components. The sensitivity of túngara frogs to lagging visual cues, but not leading ones, and the influence of acoustic signal content on the perception of visual asynchrony are similar to effects reported in the human psychophysics literature. Virtually all acoustically communicating animals must conduct auditory scene analyses and identify the sources of signals.
Our data suggest that some basic audiovisual neural integration processes may be at work in the vertebrate brain. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology 2017. This work is written by US Government employees and is in the public domain in the US.

  5. Multimodal fiber source for nonlinear microscopy based on a dissipative soliton laser

    PubMed Central

    Lamb, Erin S.; Wise, Frank W.

    2015-01-01

    Recent developments in high-energy femtosecond fiber lasers have enabled robust and lower-cost sources for multiphoton-fluorescence and harmonic-generation imaging. However, picosecond pulses are better suited for Raman scattering microscopy, so the ideal multimodal source for nonlinear microscopy needs to provide both durations. Here we present spectral compression of a high-power femtosecond fiber laser as a route to producing transform-limited picosecond pulses. These pulses pump a fiber optical parametric oscillator to yield a robust fiber source capable of providing the synchronized picosecond pulse trains needed for Raman scattering microscopy. Thus, this system can be used as a multimodal platform for nonlinear microscopy techniques. PMID:26417497

  6. Natural Language Based Multimodal Interface for UAV Mission Planning

    NASA Technical Reports Server (NTRS)

    Chandarana, Meghan; Meszaros, Erica L.; Trujillo, Anna; Allen, B. Danette

    2017-01-01

    As the number of viable applications for unmanned aerial vehicle (UAV) systems increases at an exponential rate, interfaces that reduce the reliance on highly skilled engineers and pilots must be developed. Recent work aims to make use of common human communication modalities such as speech and gesture. This paper explores a multimodal natural language interface that uses a combination of speech and gesture input modalities to build complex UAV flight paths by defining trajectory segment primitives. Gesture inputs are used to define the general shape of a segment while speech inputs provide additional geometric information needed to fully characterize a trajectory segment. A user study is conducted in order to evaluate the efficacy of the multimodal interface.

  7. Multimode optical dermoscopy (SkinSpect) analysis for skin with melanocytic nevus

    NASA Astrophysics Data System (ADS)

    Vasefi, Fartash; MacKinnon, Nicholas; Saager, Rolf; Kelly, Kristen M.; Maly, Tyler; Chave, Robert; Booth, Nicholas; Durkin, Anthony J.; Farkas, Daniel L.

    2016-04-01

    We have developed a multimode dermoscope (SkinSpect™) capable of illuminating human skin samples in vivo with spectrally programmable linearly polarized light at 33 wavelengths between 468 nm and 857 nm. Diffusely reflected photons are separated into collinear and cross-polarized image paths, and images are captured for each illumination wavelength. In vivo human skin nevi (N = 20) were evaluated with the multimode dermoscope, and melanin and hemoglobin concentrations were compared with Spatially Modulated Quantitative Spectroscopy (SMoQS) measurements. Both systems show low correlation between their melanin and hemoglobin concentrations, demonstrating the ability of the SkinSpect™ to separate these molecular signatures and thus act as a biologically plausible device capable of early melanoma detection.

  8. Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware.

    PubMed

    Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr

    2005-09-01

    We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.

  9. Xi-CAM v1.2.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PANDOLFI, RONALD; KUMAR, DINESH; VENKATAKRISHNAN, SINGANALLUR

    Xi-CAM aims to provide a community-driven platform for multimodal analysis in synchrotron science. The platform core provides a robust plugin infrastructure for extensibility, allowing continuing development to simply add further functionality. Current modules include tools for characterization with (GI)SAXS, tomography, and XAS. This will continue to serve as a development base as algorithms for multimodal analysis develop. Seamless remote data access, visualization, and analysis are key elements of Xi-CAM and will become critical to synchrotron data infrastructure as expectations for future data volume and acquisition rates rise with continuously increasing throughputs. The highly interactive design elements of Xi-CAM will similarly support a generation of users who depend on immediate data quality feedback during high-throughput or burst acquisition modes.

  10. Augmented Robotics Dialog System for Enhancing Human–Robot Interaction

    PubMed Central

    Alonso-Martín, Fernando; Castro-González, Álvaro; de Gorostiza Luengo, Francisco Javier Fernandez; Salichs, Miguel Ángel

    2015-01-01

    Augmented reality, augmented television and second screen are cutting edge technologies that provide end users extra and enhanced information related to certain events in real time. This enriched information helps users better understand such events, at the same time providing a more satisfactory experience. In the present paper, we apply this main idea to human–robot interaction (HRI), to how users and robots interchange information. The ultimate goal of this paper is to improve the quality of HRI, developing a new dialog manager system that incorporates enriched information from the semantic web. This work presents the augmented robotic dialog system (ARDS), which uses natural language understanding mechanisms to provide two features: (i) a non-grammar multimodal input (verbal and/or written) text; and (ii) a contextualization of the information conveyed in the interaction. This contextualization is achieved by information enrichment techniques that link the extracted information from the dialog with extra information about the world available in semantic knowledge bases. This enriched or contextualized information (information enrichment, semantic enhancement or contextualized information are used interchangeably in the rest of this paper) offers many possibilities in terms of HRI. For instance, it can enhance the robot's pro-activeness during a human–robot dialog (the enriched information can be used to propose new topics during the dialog, while ensuring a coherent interaction). Another possibility is to display additional multimedia content related to the enriched information on a visual device. This paper describes the ARDS and shows a proof of concept of its applications. PMID:26151202

  11. Augmented Robotics Dialog System for Enhancing Human-Robot Interaction.

    PubMed

    Alonso-Martín, Fernando; Castro-González, Álvaro; Luengo, Francisco Javier Fernandez de Gorostiza; Salichs, Miguel Ángel

    2015-07-03

    Augmented reality, augmented television and second screen are cutting edge technologies that provide end users extra and enhanced information related to certain events in real time. This enriched information helps users better understand such events, at the same time providing a more satisfactory experience. In the present paper, we apply this main idea to human-robot interaction (HRI), to how users and robots interchange information. The ultimate goal of this paper is to improve the quality of HRI, developing a new dialog manager system that incorporates enriched information from the semantic web. This work presents the augmented robotic dialog system (ARDS), which uses natural language understanding mechanisms to provide two features: (i) a non-grammar multimodal input (verbal and/or written) text; and (ii) a contextualization of the information conveyed in the interaction. This contextualization is achieved by information enrichment techniques that link the extracted information from the dialog with extra information about the world available in semantic knowledge bases. This enriched or contextualized information (information enrichment, semantic enhancement or contextualized information are used interchangeably in the rest of this paper) offers many possibilities in terms of HRI. For instance, it can enhance the robot's pro-activeness during a human-robot dialog (the enriched information can be used to propose new topics during the dialog, while ensuring a coherent interaction). Another possibility is to display additional multimedia content related to the enriched information on a visual device. This paper describes the ARDS and shows a proof of concept of its applications.

  12. A Novel Multifunctional Theranostic Liposome Drug Delivery System: Construction, Characterization, and Multimodality MR, Near-infrared Fluorescent and Nuclear Imaging

    PubMed Central

    Li, Shihong; Goins, Beth; Zhang, Lujun; Bao, Ande

    2012-01-01

    Liposomes are effective lipid nanoparticle drug delivery systems, which can also be functionalized with non-invasive multimodality imaging agents, with each modality providing distinct information and offering synergistic advantages in diagnosis, monitoring of disease treatment, and evaluation of liposomal drug pharmacokinetics. We designed and constructed a multifunctional theranostic liposomal drug delivery system, which integrated multimodality magnetic resonance (MR), near-infrared (NIR) fluorescent, and nuclear imaging of liposomal drug delivery with therapy monitoring and prediction. The pre-manufactured liposomes were composed of DSPC/cholesterol/Gd-DOTA-DSPE/DOTA-DSPE at a molar ratio of 39:35:25:1 and had an ammonium sulfate/pH gradient. A lipidized NIR fluorescent tracer, IRDye-DSPE, was effectively post-inserted into the pre-manufactured liposomes. Doxorubicin could be effectively post-loaded into the multifunctional liposomes. The multifunctional doxorubicin liposomes could also be stably radiolabeled with 99mTc or 64Cu for single photon emission computed tomography (SPECT) or positron emission tomography (PET) imaging, respectively. MR images displayed the high-resolution micro-intratumoral distribution of the liposomes in squamous cell carcinoma of the head and neck (SCCHN) tumor xenografts in nude rats after intratumoral injection. NIR fluorescent, SPECT, and PET images also clearly showed either the high intratumoral retention or the distribution of the multifunctional liposomes. This multifunctional drug-carrying liposome system is promising for disease theranostics, allowing non-invasive multimodality NIR fluorescent, MR, SPECT, and PET imaging of its in vivo behavior while capitalizing on the inherent advantages of each modality. PMID:22577859

  13. Optical characterisation and analysis of multi-mode pixels for use in future far infrared telescopes

    NASA Astrophysics Data System (ADS)

    McCarthy, Darragh; Trappe, Neil; Murphy, J. Anthony; Doherty, Stephen; Gradziel, Marcin; O'Sullivan, Créidhe; Audley, Michael D.; de Lange, Gert; van der Vorst, Maarten

    2016-07-01

    In this paper we present the development and verification of feed horn simulation code based on the mode-matching technique to simulate the electromagnetic performance of waveguide-based structures of rectangular cross-section. This code is required to model multi-mode pyramidal horns, which may be required for future far infrared (far IR) space missions where wavelengths in the range of 30 to 200 µm will be analysed. Multi-mode pyramidal horns can be used effectively to couple radiation to sensitive superconducting devices such as Kinetic Inductance Detectors (KIDs) or Transition Edge Sensor (TES) detectors. These detectors could be placed in integrating cavities (to further increase the efficiency) with an absorbing layer used to couple to the radiation. The developed code is capable of modelling each of these elements, and so will allow full optical characterisation of such pixels and allow an optical efficiency to be calculated effectively. As the signals being measured at these short wavelengths are at an extremely low level, the throughput of the system must be maximised, and so multi-mode systems are proposed. To this end, the focal planes of future far IR missions may consist of an array of multi-mode rectangular feed horns feeding an array of, for example, TES devices contained in individual integrating cavities. Such TES arrays have been fabricated by SRON Groningen and are currently undergoing comprehensive optical, electrical and thermal verification. In order to fully understand and validate the optical performance of the receiver system, it is necessary to develop comprehensive and robust optical models in parallel. We outline the development and verification of this optical modelling software by applying it to a representative multi-mode system operating at 150 GHz, chosen so that execution times are short enough to test the code comprehensively.
SAFARI (SPICA FAR infrared Instrument) is a far infrared imaging grating spectrometer, to be proposed as an ESA M5 mission. It is planned for this mission to be launched on board the proposed SPICA (SPace Infrared telescope for Cosmology and Astrophysics) mission, in collaboration with JAXA. SAFARI is planned to operate in the 1.5-10 THz band, focussing on the formation and evolution of galaxies, stars and planetary systems. The pixel that drove the development of the techniques presented in this paper is typical of one option that could be implemented in the SAFARI focal plane, and so the ability to accurately understand and characterise such pixels is critical in the design phase of the next generation of far IR telescopes.

  14. Multimodal Virtual Environments: MAGIC Toolkit and Visual-Haptic Interaction Paradigms

    DTIC Science & Technology

    1998-01-01

    2.7.3 Load/Save Options; 2.7.4 Information Display; 2.8 Library Files; 2.9 Evaluation; 3 Visual-Haptic Interactions; 3.1… Northwestern University [Colgate, 1994]. It is possible for a user to touch one side of a thin object and be propelled out the opposite side, because… when there is a high correlation in motion and force between the visual and haptic realms. Chapter 7 concludes with an evaluation of the application

  15. Quantum trajectory analysis of multimode subsystem-bath dynamics.

    PubMed

    Wyatt, Robert E; Na, Kyungsun

    2002-01-01

    The dynamics of a swarm of quantum trajectories is investigated for systems involving the interaction of an active mode (the subsystem) with an M-mode harmonic reservoir (the bath). Equations of motion for the position, velocity, and action function for elements of the probability fluid are integrated in the Lagrangian (moving with the fluid) picture of quantum hydrodynamics. These fluid elements are coupled through the Bohm quantum potential and as a result evolve as a correlated ensemble. Wave function synthesis along the trajectories permits an exact description of the quantum dynamics for the evolving probability fluid. The approach is fully quantum mechanical and does not involve classical or semiclassical approximations. Computational results are presented for three systems involving the interaction of an active mode with M = 1, 10, and 15 bath modes. These results include configuration-space trajectory evolution, flux analysis of the evolving ensemble, wave function synthesis along trajectories, and energy partitioning along specific trajectories. They demonstrate the feasibility of using a small number of quantum trajectories to obtain accurate quantum results on some types of open quantum systems that are not amenable to standard quantum approaches involving basis set expansions or Eulerian space-fixed grids.
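The Lagrangian quantum-trajectory picture above can be illustrated with a toy case that has a closed-form answer: the free 1D Gaussian wavepacket, whose Bohmian trajectories simply dilate with the packet width. The sketch below is a hypothetical illustration, not the authors' code or their subsystem-bath model (no inter-mode quantum-potential coupling is included); it integrates the hydrodynamic velocity field for a small swarm of fluid elements:

```python
import math

# Free 1D Gaussian packet in units where hbar = m = sigma0 = 1.
# Its width spreads as s(t) = sqrt(1 + (t/2)^2), and the Bohmian
# (hydrodynamic) velocity field is v(x, t) = x * d(ln s)/dt, so each
# trajectory satisfies x(t) = x(0) * s(t) exactly.

def width(t):
    """Spreading width s(t) of the free Gaussian packet."""
    return math.sqrt(1.0 + (t / 2.0) ** 2)

def velocity(x, t):
    """Bohmian velocity field v(x, t) = x * d(ln s)/dt."""
    return x * (t / 4.0) / (1.0 + (t / 2.0) ** 2)

def propagate(x0, t_final, dt=1e-4):
    """Integrate dx/dt = v(x, t) with the explicit Euler method."""
    x, t = x0, 0.0
    while t < t_final:
        x += velocity(x, t) * dt
        t += dt
    return x

# A small "swarm" of fluid elements: points farther from the packet
# centre move outward faster, so the swarm dilates uniformly.
swarm = [propagate(x0, 4.0) for x0 in (0.5, 1.0, 2.0)]
```

Because the velocity field is linear in x for this packet, the Euler integration can be checked directly against the analytic scaling x(t) = x(0) s(t).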

  16. The Intersection of Multimodality and Critical Perspective: Multimodality as Subversion

    ERIC Educational Resources Information Center

    Huang, Shin-ying

    2015-01-01

    This study explores the relevance of multimodality to critical media literacy. It is based on the understanding that communication is intrinsically multimodal and multimodal communication is inherently social and ideological. By analysing two English-language learners' multimodal ensembles, the study reports on how multimodality contributes to a…

  17. Image-guided thoracic surgery in the hybrid operation room.

    PubMed

    Ujiie, Hideki; Effat, Andrew; Yasufuku, Kazuhiro

    2017-01-01

    There has been an increase in the use of image-guided technology to facilitate minimally invasive therapy. The next generation of minimally invasive therapy is focused on the advancement and translation of novel image-guided technologies in therapeutic interventions, including surgery, interventional pulmonology, radiation therapy, and interventional laser therapy. To establish the efficacy of different minimally invasive therapies, we have developed a hybrid operating room, known as the guided therapeutics operating room (GTx OR), at the Toronto General Hospital. The GTx OR is equipped with multi-modality image-guidance systems, featuring a dual-source/dual-energy computed tomography (CT) scanner, robotic cone-beam CT (CBCT)/fluoroscopy, a high-performance endobronchial ultrasound system, an endoscopic surgery system, a near-infrared (NIR) fluorescence imaging system, and navigation tracking systems. These novel multimodality image-guidance systems allow physicians to quickly and accurately image patients while they are on the operating table. This yields improved outcomes, since physicians are able to use image guidance during their procedures and carry out innovative multi-modality therapeutics. Multiple preclinical translational studies pertaining to innovative minimally invasive technology are being developed in our guided therapeutics laboratory (GTx Lab). The GTx Lab is equipped with technology and multimodality image-guidance systems similar to those of the GTx OR and acts as an appropriate platform for translating research into human clinical trials. Through the GTx Lab, we are able to perform basic research, such as the development of image-guided technologies, preclinical model testing, and preclinical imaging, and then translate that research into the GTx OR.
This OR allows for the utilization of new technologies in cancer therapy, including molecular imaging and other innovative imaging modalities, and therefore enables a better quality of life for patients both during and after the procedure. In this article, we describe the capabilities of the GTx systems and discuss the first-in-human technologies used and evaluated in the GTx OR.

  18. Watch-and-Comment as an Approach to Collaboratively Annotate Points of Interest in Video and Interactive-TV Programs

    NASA Astrophysics Data System (ADS)

    Pimentel, Maria Da Graça C.; Cattelan, Renan G.; Melo, Erick L.; Freitas, Giliard B.; Teixeira, Cesar A.

    In earlier work we proposed the Watch-and-Comment (WaC) paradigm as the seamless capture of multimodal comments made by one or more users while watching a video, resulting in the automatic generation of multimedia documents specifying annotated interactive videos. The aim is to allow services to be offered by applying document engineering techniques to the multimedia document generated automatically. The WaC paradigm was demonstrated with a WaCTool prototype application which supports multimodal annotation over video frames and segments, producing a corresponding interactive video. In this chapter, we extend the WaC paradigm to consider contexts in which several viewers may use their own mobile devices while watching and commenting on an interactive-TV program. We first review our previous work. Next, we discuss scenarios in which mobile users can collaborate via the WaC paradigm. We then present a new prototype application which allows users to employ their mobile devices to collaboratively annotate points of interest in video and interactive-TV programs. We also detail the current software infrastructure which supports our new prototype; the infrastructure extends the Ginga middleware for the Brazilian Digital TV with an implementation of the UPnP protocol - the aim is to provide the seamless integration of the users' mobile devices into the TV environment. As a result, the work reported in this chapter defines the WaC paradigm for the mobile-user as an approach to allow the collaborative annotation of the points of interest in video and interactive-TV programs.

  19. Effective spin physics in two-dimensional cavity QED arrays

    NASA Astrophysics Data System (ADS)

    Minář, Jiří; Güneş Söyler, Şebnem; Rotondo, Pietro; Lesanovsky, Igor

    2017-06-01

    We investigate a strongly correlated system of light and matter in two-dimensional cavity arrays. We formulate a multimode Tavis-Cummings (TC) Hamiltonian for two-level atoms coupled to cavity modes and driven by an external laser field, which reduces to an effective spin Hamiltonian in the dispersive regime. In one dimension we provide an exact analytical solution. In two dimensions, we perform a mean-field study and large-scale quantum Monte Carlo simulations of both the TC and the effective spin models. We discuss the phase diagram and the parameter regime which gives rise to frustrated interactions between the spins. We provide a quantitative description of the phase transitions and correlation properties featured by the system, and we discuss graph-theoretical properties of the ground states in terms of graph colourings using Pólya's enumeration theorem.
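For reference, a driven multimode TC Hamiltonian of the kind described above is conventionally written as follows (this is the generic form from the literature, not necessarily this paper's exact parametrization; g_{ik}, Ω, and ω_L denote mode-atom couplings, drive strength, and laser frequency):

```latex
% Multimode Tavis-Cummings Hamiltonian with a coherent drive (hbar = 1):
H = \sum_k \omega_k\, a_k^{\dagger} a_k
  + \frac{\omega_0}{2} \sum_i \sigma_i^{z}
  + \sum_{i,k} g_{ik}\,\bigl(a_k^{\dagger}\sigma_i^{-} + a_k\,\sigma_i^{+}\bigr)
  + \Omega \sum_i \bigl(\sigma_i^{+} e^{-i\omega_L t} + \sigma_i^{-} e^{\,i\omega_L t}\bigr).
```

In the dispersive regime, where the detunings satisfy |Δ_k| = |ω_k − ω_0| ≫ |g_{ik}|, the photons can be adiabatically eliminated, leaving an effective spin-exchange Hamiltonian H_eff ≈ Σ_{ij} J_{ij} σ_i⁺σ_j⁻ with J_{ij} ∝ Σ_k g_{ik} g_{jk}/Δ_k; this is the standard route from the TC model to the effective spin model mentioned in the abstract.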

  20. A Multimodal System with Synergistic Effects of Magneto-Mechanical, Photothermal, Photodynamic and Chemo Therapies of Cancer in Graphene-Quantum Dot-Coated Hollow Magnetic Nanospheres

    PubMed Central

    Wo, Fangjie; Xu, Rujiao; Shao, Yuxiang; Zhang, Zheyu; Chu, Maoquan; Shi, Donglu; Liu, Shupeng

    2016-01-01

    In this study, a multimodal therapeutic system was shown to be much more lethal in cancer-cell killing than a single means of nano therapy, be it photothermal or photodynamic. Hollow magnetic nanospheres (HMNSs) were designed and synthesized for the synergistic effects of both magneto-mechanical and photothermal cancer therapy. Under these combined stimuli, the cancer cells were structurally and physically destroyed, with morphological characteristics distinctly different from those produced by other therapeutics. HMNSs were also coated with silica shells and conjugated with carboxylated graphene quantum dots (GQDs) as a core-shell composite: HMNS/SiO2/GQDs. The composite was further loaded with the anticancer drug doxorubicin (DOX) and stabilized with liposomes. The multimodal system was able to kill cancer cells through four different therapeutic mechanisms in a synergetic and multilateral fashion, namely magnetic-field-mediated mechanical stimulation, photothermal damage, photodynamic toxicity, and chemotherapy. These unique nanocomposites with combined mechanical, chemo, and physical effects provide an alternative strategy for greatly improved cancer therapy efficiency. PMID:26941842
