Liu, Sheng; Angelaki, Dora E.
2009-01-01
Visual and vestibular signals converge onto the dorsal medial superior temporal area (MSTd) of the macaque extrastriate visual cortex, which is thought to be involved in multisensory heading perception for spatial navigation. Peripheral otolith information, however, is ambiguous and cannot distinguish linear accelerations experienced during self-motion from those due to changes in spatial orientation relative to gravity. Here we show that, unlike peripheral vestibular sensors but similar to lobules 9 and 10 of the cerebellar vermis (nodulus and uvula), MSTd neurons respond selectively to heading and not to changes in orientation relative to gravity. In support of a role in heading perception, MSTd vestibular responses are also dominated by velocity-like temporal dynamics, which might optimize sensory integration with visual motion information. Unlike the cerebellar vermis, however, MSTd neurons also carry a spatial orientation-independent rotation signal from the semicircular canals, which could be useful in compensating for the effects of head rotation on the processing of optic flow. These findings show that vestibular signals in MSTd are appropriately processed to support a functional role in multisensory heading perception. PMID:19605631
Perception of Upright: Multisensory Convergence and the Role of Temporo-Parietal Cortex
Kheradmand, Amir; Winnick, Ariel
2017-01-01
We inherently maintain a stable perception of the world despite frequent changes in the head, eye, and body positions. Such “orientation constancy” is a prerequisite for coherent spatial perception and sensorimotor planning. As a multimodal sensory reference, perception of upright represents neural processes that subserve orientation constancy through integration of sensory information encoding the eye, head, and body positions. Although perception of upright is distinct from perception of body orientation, they share similar neural substrates within the cerebral cortical networks involved in perception of spatial orientation. These cortical networks, mainly within the temporo-parietal junction, are crucial for multisensory processing and integration that generate sensory reference frames for coherent perception of self-position and extrapersonal space transformations. In this review, we focus on these neural mechanisms and discuss (i) neurobehavioral aspects of orientation constancy, (ii) sensory models that address the neurophysiology underlying perception of upright, and (iii) the current evidence for the role of cerebral cortex in perception of upright and orientation constancy, including findings from the neurological disorders that affect cortical function. PMID:29118736
Effect of eye position during human visual-vestibular integration of heading perception.
Crane, Benjamin T
2017-09-01
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability. Copyright © 2017 the American Physiological Society.
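The reliability-weighted combination rule against which such results are typically compared can be sketched in a few lines. In the sketch below, the single-cue PSE shifts echo the values reported above (+4.6° for the inertial cue, -10.2° for the visual cue), but the cue noise values (sigmas) and their dependence on coherence are invented for illustration; this is a generic Bayesian-weighting sketch, not a reconstruction of the study's analysis.

```python
import numpy as np

def predicted_combined_pse(pse_vis, sigma_vis, pse_inertial, sigma_inertial):
    """Reliability-weighted prediction for the combined point of subjective
    equality (PSE). Weights are inverse variances (1/sigma^2), normalized to 1."""
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_inertial**2)
    w_inertial = 1.0 - w_vis
    return w_vis * pse_vis + w_inertial * pse_inertial, w_vis

# Hypothetical single-cue measurements at an eccentric gaze position:
# the inertial PSE is shifted one way, the visual PSE the other way.
pse_inertial, sigma_inertial = 4.6, 6.0       # deg; sigma is an invented value
pse_vis_by_coherence = {0.35: (-10.2, 9.0),   # low coherence: unreliable visual cue
                        1.00: (-10.2, 3.0)}   # high coherence: reliable visual cue

for coherence, (pse_vis, sigma_vis) in pse_vis_by_coherence.items():
    pse_comb, w_vis = predicted_combined_pse(pse_vis, sigma_vis,
                                             pse_inertial, sigma_inertial)
    print(f"coherence {coherence:.2f}: visual weight {w_vis:.2f}, "
          f"predicted combined PSE {pse_comb:+.1f} deg")
```

At low coherence the predicted combined PSE sits near the inertial value, and at high coherence it moves toward the visual value, which is the qualitative pattern described in the abstract above.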
Yang, Yun; Liu, Sheng; Chowdhury, Syed A.; DeAngelis, Gregory C.; Angelaki, Dora E.
2012-01-01
Many neurons in the dorsal medial superior temporal (MSTd) and ventral intraparietal (VIP) areas of the macaque brain are multisensory, responding to both optic flow and vestibular cues to self-motion. The heading tuning of visual and vestibular responses can be either congruent or opposite, but only congruent cells have been implicated in cue integration for heading perception. Because of the geometric properties of motion parallax, however, both congruent and opposite cells could be involved in coding self-motion when observers fixate a world-fixed target during translation, if congruent cells prefer near disparities and opposite cells prefer far disparities. We characterized the binocular disparity selectivity and heading tuning of MSTd and VIP cells using random-dot stimuli. Most (70%) MSTd neurons were disparity-selective with monotonic tuning, and there was no consistent relationship between depth preference and congruency of visual and vestibular heading tuning. One-third of disparity-selective MSTd cells reversed their depth preference for opposite directions of motion (direction-dependent disparity tuning, DDD), but most of these cells were unisensory with no tuning for vestibular stimuli. Inconsistent with previous reports, the direction preferences of most DDD neurons do not reverse with disparity. By comparison to MSTd, VIP contains fewer disparity-selective neurons (41%) and very few DDD cells. On average, VIP neurons also preferred higher speeds and nearer disparities than MSTd cells. Our findings are inconsistent with the hypothesis that visual/vestibular congruency is linked to depth preference, and also suggest that DDD cells are not involved in multisensory integration for heading perception. PMID:22159105
Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E
2010-05-01
The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
Dokka, Kalpana; DeAngelis, Gregory C.
2015-01-01
Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight-forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work that suggests that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion. SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in the precision. Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion. PMID:26446214
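The "optimal cue integration strategy" used as a benchmark here has a standard closed-form prediction for discrimination thresholds: with independent Gaussian cue noise, the combined threshold should equal sqrt(sigma_vis^2 * sigma_vest^2 / (sigma_vis^2 + sigma_vest^2)). A minimal sketch with invented threshold values (not the study's data):

```python
import numpy as np

def optimal_combined_threshold(sigma_vis, sigma_vest):
    """Ideal-observer prediction for the combined heading discrimination
    threshold, assuming independent Gaussian noise on the two cues."""
    return np.sqrt((sigma_vis**2 * sigma_vest**2) /
                   (sigma_vis**2 + sigma_vest**2))

# Hypothetical unisensory thresholds (deg) measured with a moving object present.
sigma_vis, sigma_vest = 3.5, 5.0
prediction = optimal_combined_threshold(sigma_vis, sigma_vest)
print(f"predicted combined threshold: {prediction:.2f} deg")
# The prediction is always at or below the better single cue; an empirical
# combined threshold near this value is the usual signature of near-optimal
# integration.
```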
Development of Multisensory Spatial Integration and Perception in Humans
ERIC Educational Resources Information Center
Neil, Patricia A.; Chee-Ruiter, Christine; Scheier, Christian; Lewkowicz, David J.; Shimojo, Shinsuke
2006-01-01
Previous studies have shown that adults respond faster and more reliably to bimodal compared to unimodal localization cues. The current study investigated for the first time the development of audiovisual (A-V) integration in spatial localization behavior in infants between 1 and 10 months of age. We observed infants' head and eye movements in…
Vestibular signals in primate cortex for self-motion perception.
Gu, Yong
2018-04-21
The vestibular peripheral organs in our inner ears detect transient motion of the head in everyday life. This information is sent to the central nervous system for automatic processes such as vestibulo-ocular reflexes, balance and postural control, and higher cognitive functions including perception of self-motion and spatial orientation. Recent neurophysiological studies have discovered a prominent vestibular network in the primate cerebral cortex. Many of the areas involved are multisensory: their neurons are modulated by both vestibular signals and visual optic flow, potentially facilitating more robust heading estimation through cue integration. Combining psychophysics, computation, physiological recording and causal manipulation techniques, recent work has addressed both the encoding and decoding of vestibular signals for self-motion perception. Copyright © 2018. Published by Elsevier Ltd.
Vestibular system: the many facets of a multimodal sense.
Angelaki, Dora E; Cullen, Kathleen E
2008-01-01
Elegant sensory structures in the inner ear have evolved to measure head motion. These vestibular receptors consist of highly conserved semicircular canals and otolith organs. Unlike other senses, vestibular information in the central nervous system becomes immediately multisensory and multimodal. There is no overt, readily recognizable conscious sensation from these organs, yet vestibular signals contribute to a surprising range of brain functions, from the most automatic reflexes to spatial perception and motor coordination. Critical to these diverse, multimodal functions are multiple computationally intriguing levels of processing. For example, the need for multisensory integration necessitates vestibular representations in multiple reference frames. Proprioceptive-vestibular interactions, coupled with corollary discharge of a motor plan, allow the brain to distinguish actively generated from passive head movements. Finally, nonlinear interactions between otolith and canal signals allow the vestibular system to function as an inertial sensor and contribute critically to both navigation and spatial orientation.
Multisensory integration mechanisms during aging
Freiherr, Jessica; Lundström, Johan N.; Habel, Ute; Reetz, Kathrin
2013-01-01
The rapid demographic shift occurring in our society implies that understanding of healthy aging and age-related diseases is one of our major future challenges. Sensory impairments have an enormous impact on our lives and are closely linked to cognitive functioning. Because sensory perception is inherently complex, we are commonly presented with complex multisensory stimulation, and the brain integrates the information from the individual sensory channels into a unique and holistic percept. The cerebral processes involved are essential for our perception of sensory stimuli and become especially important during the perception of emotional content. Despite ongoing deterioration of the individual sensory systems during aging, there is evidence for an increase in, or maintenance of, multisensory integration processing in aging individuals. Within this comprehensive literature review on multisensory integration we aim to highlight basic mechanisms and potential compensatory strategies the human brain utilizes to help maintain multisensory integration capabilities during healthy aging, and thereby to facilitate a broader understanding of age-related pathological conditions. A further goal is to identify where additional research is needed. PMID:24379773
Ventral and dorsal streams processing visual motion perception (FDG-PET study)
2012-01-01
Background Earlier functional imaging studies on visually induced self-motion perception (vection) disclosed a bilateral network of activations within primary and secondary visual cortex areas which was combined with signal decreases, i.e., deactivations, in multisensory vestibular cortex areas. This finding led to the concept of a reciprocal inhibitory interaction between the visual and vestibular systems. In order to define areas involved in special aspects of self-motion perception such as intensity and duration of the perceived circular vection (CV) or the amount of head tilt, correlation analyses of the regional cerebral glucose metabolism, rCGM (measured by fluorodeoxyglucose positron-emission tomography, FDG-PET) and these perceptual covariates were performed in 14 healthy volunteers. For analyses of the visual-vestibular interaction, the CV data were compared to a random dot motion stimulation condition (not inducing vection) and a control group at rest (no stimulation at all). Results Group subtraction analyses showed that the visual-vestibular interaction was modified during CV, i.e., the activations within the cerebellar vermis and parieto-occipital areas were enhanced. The correlation analysis between the rCGM and the intensity of visually induced vection, experienced as body tilt, showed a relationship for areas of the multisensory vestibular cortical network (inferior parietal lobule bilaterally, anterior cingulate gyrus), the medial parieto-occipital cortex, the frontal eye fields and the cerebellar vermis. The “earlier” multisensory vestibular areas like the parieto-insular vestibular cortex and the superior temporal gyrus did not appear in the latter analysis. The duration of perceived vection after stimulus stop was positively correlated with rCGM in medial temporal lobe areas bilaterally, which included the (para-)hippocampus, known to be involved in various aspects of memory processing. The amount of head tilt was found to be positively correlated with the rCGM of bilateral basal ganglia regions responsible for the control of motor function of the head. Conclusions Our data gave further insights into subfunctions within the complex cortical network involved in the processing of visual-vestibular interaction during CV. Specific areas of this cortical network could be attributed to the ventral stream (“what” pathway) responsible for the duration after stimulus stop and to the dorsal stream (“where/how” pathway) responsible for intensity aspects. PMID:22800430
Multisensory architectures for action-oriented perception
NASA Astrophysics Data System (ADS)
Alba, L.; Arena, P.; De Fiore, S.; Listán, J.; Patané, L.; Salem, A.; Scordino, G.; Webb, B.
2007-05-01
In order to solve the navigation problem of a mobile robot in an unstructured environment, a versatile sensory system and efficient locomotion control algorithms are necessary. In this paper an innovative sensory system for action-oriented perception applied to a legged robot is presented. An important problem we address is how to utilize a large variety and number of sensors, while having systems that can operate in real time. Our solution is to use sensory systems that incorporate analog and parallel processing, inspired by biological systems, to reduce the required data exchange with the motor control layer. In particular, as concerns the visual system, we use the Eye-RIS v1.1 board made by Anafocus, which is based on a fully parallel mixed-signal array sensor-processor chip. The hearing sensor is inspired by the cricket hearing system and allows efficient localization of a specific sound source with a very simple analog circuit. Our robot utilizes additional sensors for touch, posture, load, distance, and heading, and thus requires customized and parallel processing for concurrent acquisition. Therefore, Field Programmable Gate Array (FPGA)-based hardware was used to manage the multi-sensory acquisition and processing. This choice was made because FPGAs permit the implementation of customized digital logic blocks that can operate in parallel, allowing the sensors to be driven simultaneously. With this approach the multi-sensory architecture proposed can achieve real-time capabilities.
Multisensory flavor perception.
Spence, Charles
2015-03-26
The perception of flavor is perhaps the most multisensory of our everyday experiences. The latest research by psychologists and cognitive neuroscientists increasingly reveals the complex multisensory interactions that give rise to the flavor experiences we all know and love, demonstrating how they rely on the integration of cues from all of the human senses. This Perspective explores the contributions of distinct senses to our perception of food and the growing realization that the same rules of multisensory integration that have been thoroughly explored in interactions between audition, vision, and touch may also explain the combination of the (admittedly harder to study) flavor senses. Academic advances are now spilling out into the real world, with chefs and the food industry increasingly taking the latest scientific findings on board in their food design. Copyright © 2015 Elsevier Inc. All rights reserved.
Gaudio, Santino; Brooks, Samantha Jane; Riva, Giuseppe
2014-01-01
Background Body image distortion is a central symptom of Anorexia Nervosa (AN). Although corporeal awareness is multisensory, the majority of AN studies have mainly investigated visual misperception. We systematically reviewed AN studies that have investigated different nonvisual sensory inputs using an integrative multisensory approach to body perception. We also discussed the findings in the light of AN neuroimaging evidence. Methods PubMed and PsycINFO were searched through March 2014. To be included in the review, studies were mainly required to: investigate a sample of patients with current or past AN and a control group, and use tasks that directly elicited one or more nonvisual sensory domains. Results Thirteen studies were included. They studied a total of 223 people with current or past AN and 273 control subjects. Overall, results show impairment in the tactile and proprioceptive domains of body perception in AN patients. Interoception and multisensory integration have been poorly explored directly in AN patients. A limitation of this review is the relatively small amount of literature available. Conclusions Our results showed that AN patients have a multisensory impairment of body perception that goes beyond visual misperception and involves tactile and proprioceptive sensory components. Furthermore, impairment of the tactile and proprioceptive components may be associated with parietal cortex alterations in AN patients. Interoception and multisensory integration have been directly explored only to a limited extent. Further research, using multisensory approaches as well as neuroimaging techniques, is needed to better define the complexity of body image distortion in AN. Key Findings The review suggests an altered capacity of AN patients in the processing and integration of bodily signals: body parts are experienced as dissociated from their holistic and perceptive dimensions. Specifically, it is likely that not only perception but also memory, in particular sensorimotor/proprioceptive memory, shapes bodily experience in patients with AN. PMID:25303480
I feel your voice. Cultural differences in the multisensory perception of emotion.
Tanaka, Akihiro; Koizumi, Ai; Imai, Hisato; Hiramatsu, Saori; Hiramoto, Eriko; de Gelder, Beatrice
2010-09-01
Cultural differences in emotion perception have been reported mainly for facial expressions and to a lesser extent for vocal expressions. However, the way in which the perceiver combines auditory and visual cues may itself be subject to cultural variability. Our study investigated cultural differences between Japanese and Dutch participants in the multisensory perception of emotion. A face and a voice, expressing either congruent or incongruent emotions, were presented on each trial. Participants were instructed to judge the emotion expressed in one of the two sources. The effect of to-be-ignored voice information on facial judgments was larger in Japanese than in Dutch participants, whereas the effect of to-be-ignored face information on vocal judgments was smaller in Japanese than in Dutch participants. This result indicates that Japanese people are more attuned than Dutch people to vocal processing in the multisensory perception of emotion. Our findings provide the first evidence that multisensory integration of affective information is modulated by perceivers' cultural background.
Multisensory Speech Perception in Children with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Woynaroski, Tiffany G.; Kwakye, Leslie D.; Foss-Feig, Jennifer H.; Stevenson, Ryan A.; Stone, Wendy L.; Wallace, Mark T.
2013-01-01
This study examined unisensory and multisensory speech perception in 8-17 year old children with autism spectrum disorders (ASD) and typically developing controls matched on chronological age, sex, and IQ. Consonant-vowel syllables were presented in visual only, auditory only, matched audiovisual, and mismatched audiovisual ("McGurk")…
Parietal disruption alters audiovisual binding in the sound-induced flash illusion.
Kamke, Marc R; Vieth, Harrison E; Cottrell, David; Mattingley, Jason B
2012-09-01
Selective attention and multisensory integration are fundamental to perception, but little is known about whether, or under what circumstances, these processes interact to shape conscious awareness. Here, we used transcranial magnetic stimulation (TMS) to investigate the causal role of attention-related brain networks in multisensory integration between visual and auditory stimuli in the sound-induced flash illusion. The flash illusion is a widely studied multisensory phenomenon in which a single flash of light is falsely perceived as multiple flashes in the presence of irrelevant sounds. We investigated the hypothesis that extrastriate regions involved in selective attention, specifically within the right parietal cortex, exert an influence on the multisensory integrative processes that cause the flash illusion. We found that disruption of the right angular gyrus, but not of the adjacent supramarginal gyrus or of a sensory control site, enhanced participants' veridical perception of the multisensory events, thereby reducing their susceptibility to the illusion. Our findings suggest that the same parietal networks that normally act to enhance perception of attended events also play a role in the binding of auditory and visual stimuli in the sound-induced flash illusion. Copyright © 2012 Elsevier Inc. All rights reserved.
Modality-specific selective attention attenuates multisensory integration.
Mozolic, Jennifer L; Hugenschmidt, Christina E; Peiffer, Ann M; Laurienti, Paul J
2008-01-01
Stimuli occurring in multiple sensory modalities that are temporally synchronous or spatially coincident can be integrated together to enhance perception. Additionally, the semantic content or meaning of a stimulus can influence cross-modal interactions, improving task performance when these stimuli convey semantically congruent or matching information, but impairing performance when they contain non-matching or distracting information. Attention is one mechanism that is known to alter processing of sensory stimuli by enhancing perception of task-relevant information and suppressing perception of task-irrelevant stimuli. It is not known, however, to what extent attention to a single sensory modality can minimize the impact of stimuli in the unattended sensory modality and reduce the integration of stimuli across multiple sensory modalities. Our hypothesis was that modality-specific selective attention would limit processing of stimuli in the unattended sensory modality, resulting in a reduction of performance enhancements produced by semantically matching multisensory stimuli, and a reduction in performance decrements produced by semantically non-matching multisensory stimuli. The results from two experiments utilizing a cued discrimination task demonstrate that selective attention to a single sensory modality prevents the integration of matching multisensory stimuli that is normally observed when attention is divided between sensory modalities. Attention did not reliably alter the amount of distraction caused by non-matching multisensory stimuli on this task; however, these findings highlight a critical role for modality-specific selective attention in modulating multisensory integration.
ERIC Educational Resources Information Center
Stevenson, Ryan A.; Zemtsov, Raquel K.; Wallace, Mark T.
2012-01-01
Human multisensory systems are known to bind inputs from the different sensory modalities into a unified percept, a process that leads to measurable behavioral benefits. This integrative process can be observed through multisensory illusions, including the McGurk effect and the sound-induced flash illusion, both of which demonstrate the ability of…
On the role of crossmodal prediction in audiovisual emotion perception.
Jessen, Sarah; Kotz, Sonja A
2013-01-01
Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency by which others' emotions are recognized. But how and when exactly do the different modalities interact? One aspect of multisensory perception that has received increasing interest in recent years is the concept of cross-modal prediction. In emotion perception, as in most other settings, visual information precedes the auditory information. Because visual information leads, it can facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, so far it has not been addressed in audiovisual emotion perception. Based on the current state of the art in (a) cross-modal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow more reliable prediction of auditory information than non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 EEG response and the duration of visual emotional, but not non-emotional, information. If the assumption that emotional content allows more reliable prediction can be corroborated in future studies, cross-modal prediction is a crucial factor in our understanding of multisensory emotion perception.
Butler, Andrew J; James, Thomas W; James, Karin Harman
2011-11-01
Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent perception and recognition of associations among multiple senses has not been investigated. Twenty participants were included in an fMRI study that explored the impact of active motor learning on subsequent processing of unisensory and multisensory stimuli. Participants were exposed to visuo-motor associations between novel objects and novel sounds either through self-generated actions on the objects or by observing an experimenter produce the actions. Immediately after exposure, accuracy, RT, and BOLD fMRI measures were collected with unisensory and multisensory stimuli in associative perception and recognition tasks. Response times during audiovisual associative and unisensory recognition were enhanced by active learning, as was accuracy during audiovisual associative recognition. The difference in motor cortex activation between old and new associations was greater for the active than the passive group. Furthermore, functional connectivity between visual and motor cortices was stronger after active learning than passive learning. Active learning also led to greater activation of the fusiform gyrus during subsequent unisensory visual perception. Finally, brain regions implicated in audiovisual integration (e.g., STS) showed greater multisensory gain after active learning than after passive learning. Overall, the results show that active motor learning modulates the processing of multisensory associations.
Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa
2015-02-01
To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. Copyright © 2014 Elsevier Inc. All rights reserved.
Multisensory effects on somatosensation: a trimodal visuo-vestibular-tactile interaction
Kaliuzhna, Mariia; Ferrè, Elisa Raffaella; Herbelin, Bruno; Blanke, Olaf; Haggard, Patrick
2016-01-01
Vestibular information about self-motion is combined with other sensory signals. Previous research described both visuo-vestibular and vestibular-tactile bilateral interactions, but the simultaneous interaction between all three sensory modalities has not been explored. Here we exploit a previously reported visuo-vestibular integration to investigate multisensory effects on tactile sensitivity in humans. Tactile sensitivity was measured during passive whole body rotations alone or in conjunction with optic flow, creating either purely vestibular or visuo-vestibular sensations of self-motion. Our results demonstrate that tactile sensitivity is modulated by perceived self-motion, as provided by a combined visuo-vestibular percept, and not by the visual and vestibular cues independently. We propose a hierarchical multisensory interaction that underpins somatosensory modulation: visual and vestibular cues are first combined to produce a multisensory self-motion percept. Somatosensory processing is then enhanced according to the degree of perceived self-motion. PMID:27198907
Multisensory constraints on awareness
Deroy, Ophelia; Chen, Yi-Chuan; Spence, Charles
2014-01-01
Given that multiple senses are often stimulated at the same time, perceptual awareness is most likely to take place in multisensory situations. However, theories of awareness are based on studies and models established for a single sense (mostly vision). Here, we consider the methodological and theoretical challenges raised by taking a multisensory perspective on perceptual awareness. First, we consider how well tasks designed to study unisensory awareness perform when used in multisensory settings, stressing that studies using binocular rivalry, bistable figure perception, continuous flash suppression, the attentional blink, repetition blindness and backward masking can demonstrate multisensory influences on unisensory awareness, but fall short of tackling multisensory awareness directly. Studies interested in the latter phenomenon rely on a method of subjective contrast and can, at best, delineate conditions under which individuals report experiencing a multisensory object or two unisensory objects. As there is not a perfect match between these conditions and those in which multisensory integration and binding occur, the link between awareness and binding advocated for visual information processing needs to be revised for multisensory cases. These challenges point at the need to question the very idea of multisensory awareness. PMID:24639579
Balz, Johanna; Keil, Julian; Roa Romero, Yadira; Mekle, Ralf; Schubert, Florian; Aydin, Semiha; Ittermann, Bernd; Gallinat, Jürgen; Senkowski, Daniel
2016-01-15
In everyday life we are confronted with inputs of multisensory stimuli that need to be integrated across our senses. Individuals vary considerably in how they integrate multisensory information, yet the neurochemical foundations underlying this variability are not well understood. Neural oscillations, especially in the gamma band (>30Hz) play an important role in multisensory processing. Furthermore, gamma-aminobutyric acid (GABA) neurotransmission contributes to the generation of gamma band oscillations (GBO), which can be sustained by activation of metabotropic glutamate receptors. Hence, differences in the GABA and glutamate systems might contribute to individual differences in multisensory processing. In this combined magnetic resonance spectroscopy and electroencephalography study, we examined the relationships between GABA and glutamate concentrations in the superior temporal sulcus (STS), source localized GBO, and illusion rate in the sound-induced flash illusion (SIFI). In 39 human volunteers we found robust relationships between GABA concentration, GBO power, and the SIFI perception rate (r-values=0.44 to 0.53). The correlation between GBO power and SIFI perception rate was about twofold higher when the modulating influence of the GABA level was included in the analysis as compared to when it was excluded. No significant effects were obtained for glutamate concentration. Our study suggests that the GABA level shapes individual differences in audiovisual perception through its modulating influence on GBO. GABA neurotransmission could be a promising target for treatment interventions of multisensory processing deficits in clinical populations, such as schizophrenia or autism. Copyright © 2015 Elsevier Inc. All rights reserved.
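The reported finding that the GBO-illusion correlation roughly doubles once the GABA level is taken into account suggests a moderation-style analysis. The exact procedure is not specified in the abstract, so the sketch below shows one generic way to test such a modulating influence, a moderated regression with an interaction term, on simulated data with the study's sample size; all variables and effect sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 39  # same sample size as the study; the data themselves are simulated

# Simulated standardized variables with a built-in moderation effect.
gaba = rng.standard_normal(n)
gbo = rng.standard_normal(n)
sifi = 0.3 * gbo + 0.3 * gaba + 0.4 * gbo * gaba + 0.5 * rng.standard_normal(n)

# Simple correlation between GBO power and illusion rate (GABA ignored).
r_simple = np.corrcoef(gbo, sifi)[0, 1]

# Moderated regression: illusion rate ~ GBO + GABA + GBO*GABA.
X = np.column_stack([np.ones(n), gbo, gaba, gbo * gaba])
beta, *_ = np.linalg.lstsq(X, sifi, rcond=None)

print(f"simple r(GBO, SIFI) = {r_simple:.2f}")
print(f"interaction (moderation) coefficient = {beta[3]:.2f}")
```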
Assessing the Role of the 'Unity Assumption' on Multisensory Integration: A Review.
Chen, Yi-Chuan; Spence, Charles
2017-01-01
There has been longstanding interest from both experimental psychologists and cognitive neuroscientists in the potential modulatory role of various top-down factors on multisensory integration/perception in humans. One such top-down influence, often referred to in the literature as the 'unity assumption,' is thought to occur in those situations in which an observer considers that various of the unisensory stimuli that they have been presented with belong to one and the same object or event (Welch and Warren, 1980). Here, we review the possible factors that may lead to the emergence of the unity assumption. We then critically evaluate the evidence concerning the consequences of the unity assumption from studies of the spatial and temporal ventriloquism effects, from the McGurk effect, and from the Colavita visual dominance paradigm. The research that has been published to date using these tasks provides support for the claim that the unity assumption influences multisensory perception under at least a subset of experimental conditions. We then consider whether the notion has been superseded in recent years by the introduction of priors in Bayesian causal inference models of human multisensory perception. We suggest that the prior of common cause (that is, the prior concerning whether multisensory signals originate from the same source or not) offers the most useful way to quantify the unity assumption as a continuous cognitive variable.
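The "prior of common cause" discussed here is the central parameter of standard Bayesian causal inference models of multisensory perception (e.g., Körding et al., 2007). The sketch below implements the textbook version of that model for two noisy spatial cues, showing how the posterior probability of a common cause falls as the discrepancy between the cues grows; all noise and prior parameters are illustrative assumptions.

```python
import numpy as np

def posterior_common_cause(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common):
    """Posterior probability that auditory and visual measurements x_a, x_v
    arose from a single source, under Gaussian sensory noise and a zero-mean
    Gaussian prior over source locations (standard causal-inference model)."""
    var_a, var_v, var_p = sigma_a**2, sigma_v**2, sigma_p**2

    # Likelihood of the measurement pair given one common source.
    denom1 = var_a * var_v + var_a * var_p + var_v * var_p
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * var_p +
                             x_a**2 * var_v + x_v**2 * var_a) / denom1) \
              / (2 * np.pi * np.sqrt(denom1))

    # Likelihood given two independent sources.
    like_c2 = (np.exp(-0.5 * x_a**2 / (var_a + var_p)) /
               np.sqrt(2 * np.pi * (var_a + var_p)) *
               np.exp(-0.5 * x_v**2 / (var_v + var_p)) /
               np.sqrt(2 * np.pi * (var_v + var_p)))

    return p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)

# The posterior falls off as the spatial discrepancy between the cues grows.
for disparity in [0.0, 5.0, 15.0]:
    p = posterior_common_cause(x_a=disparity / 2, x_v=-disparity / 2,
                               sigma_a=4.0, sigma_v=2.0, sigma_p=20.0,
                               p_common=0.5)
    print(f"disparity {disparity:4.1f} deg -> P(common cause) = {p:.2f}")
```

In this framing, the unity assumption maps onto the p_common parameter: the stronger the prior of a common cause, the larger the discrepancy that can be tolerated before the cues are treated as separate events.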
Multisensory speech perception without the left superior temporal sulcus.
Baum, Sarah H; Martin, Randi C; Hamilton, A Cris; Beauchamp, Michael S
2012-09-01
Converging evidence suggests that the left superior temporal sulcus (STS) is a critical site for multisensory integration of auditory and visual information during speech perception. We report a patient, SJ, who suffered a stroke that damaged the left temporo-parietal area, resulting in mild anomic aphasia. Structural MRI showed complete destruction of the left middle and posterior STS, as well as damage to adjacent areas in the temporal and parietal lobes. Surprisingly, SJ demonstrated preserved multisensory integration measured with two independent tests. First, she perceived the McGurk effect, an illusion that requires integration of auditory and visual speech. Second, her perception of morphed audiovisual speech with ambiguous auditory or visual information was significantly influenced by the opposing modality. To understand the neural basis for this preserved multisensory integration, blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) was used to examine brain responses to audiovisual speech in SJ and 23 healthy age-matched controls. In controls, bilateral STS activity was observed. In SJ, no activity was observed in the damaged left STS, but in the right STS more cortex was active in SJ than in any of the normal controls. Further, the amplitude of the BOLD response in the right STS to McGurk stimuli was significantly greater in SJ than in controls. The simplest explanation of these results is a reorganization of SJ's cortical language networks such that the right STS now subserves multisensory integration of speech. Copyright © 2012 Elsevier Inc. All rights reserved. PMID:22634292
Decentralized Multisensory Information Integration in Neural Systems.
Zhang, Wen-Hao; Chen, Aihua; Rasch, Malte J; Wu, Si
2016-01-13
How multiple sensory cues are integrated in neural circuitry remains a challenge. The common hypothesis is that information integration might be accomplished in a dedicated multisensory integration area receiving feedforward inputs from the modalities. However, recent experimental evidence suggests that it is not a single multisensory brain area, but rather many multisensory brain areas that are simultaneously involved in the integration of information. Why many mutually connected areas should be needed for information integration is puzzling. Here, we investigated theoretically how information integration could be achieved in a distributed fashion within a network of interconnected multisensory areas. Using biologically realistic neural network models, we developed a decentralized information integration system that comprises multiple interconnected integration areas. Studying an example of combining visual and vestibular cues to infer heading direction, we show that such a decentralized system is in good agreement with anatomical evidence and experimental observations. In particular, we show that this decentralized system can integrate information optimally. The decentralized system predicts that optimally integrated information should emerge locally from the dynamics of the communication between brain areas and sheds new light on the interpretation of the connectivity between multisensory brain areas. SIGNIFICANCE STATEMENT To extract information reliably from ambiguous environments, the brain integrates multiple sensory cues, which provide different aspects of information about the same entity of interest. Here, we propose a decentralized architecture for multisensory integration. In such a system, no processor is in the center of the network topology and information integration is achieved in a distributed manner through reciprocally connected local processors. Through studying the inference of heading direction with visual and vestibular cues, we show that the decentralized system can integrate information optimally, with the reciprocal connections between processors determining the extent of cue integration. Our model reproduces known multisensory integration behaviors observed in experiments and sheds new light on our understanding of how information is integrated in the brain. Copyright © 2016 Zhang et al. PMID:26758843
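As a toy stand-in for the decentralized scheme described above (and not the authors' biologically realistic network model), the sketch below lets two reciprocally connected "areas", each holding one noisy cue, repeatedly exchange and average their precision-weighted evidence. Every area then converges locally to the same reliability-weighted estimate that a centralized ideal observer would compute; all numbers are invented.

```python
import numpy as np

# Each "area" starts from its own noisy cue: a local estimate and its precision.
# Area 0 receives a visual heading cue, area 1 a vestibular heading cue.
estimates = np.array([12.0, 4.0])                  # deg, local heading estimates
precisions = np.array([1 / 2.0**2, 1 / 4.0**2])    # inverse variances

# Average-consensus on the sufficient statistics (precision-weighted evidence
# and total precision) over a reciprocally connected two-area network.
weighted = estimates * precisions
total = precisions.copy()
adjacency = np.array([[0.5, 0.5],
                      [0.5, 0.5]])                 # symmetric, row-stochastic coupling

for _ in range(20):                                # a few message-passing iterations
    weighted = adjacency @ weighted
    total = adjacency @ total

local_readouts = weighted / total                  # each area's heading estimate
optimal = np.sum(estimates * precisions) / np.sum(precisions)
print("per-area estimates after consensus:", np.round(local_readouts, 2))
print("centralized reliability-weighted estimate:", round(optimal, 2))
```

The point of the toy example is only that no single node needs to collect all the evidence: reciprocal exchange alone is enough for every node to end up with the optimally integrated value.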
Honma, Motoyasu; Plass, John; Brang, David; Florczak, Susan M; Grabowecky, Marcia; Paller, Ken A
2016-01-01
Plasticity is essential in body perception so that physical changes in the body can be accommodated and assimilated. Multisensory integration of visual, auditory, tactile, and proprioceptive signals contributes both to conscious perception of the body's current state and to associated learning. However, much is unknown about how novel information is assimilated into body perception networks in the brain. Sleep-based consolidation can facilitate various types of learning via the reactivation of networks involved in prior encoding or through synaptic down-scaling. Sleep may likewise contribute to perceptual learning of bodily information by providing an optimal time for multisensory recalibration. Here we used methods for targeted memory reactivation (TMR) during slow-wave sleep to examine the influence of sleep-based reactivation of experimentally induced alterations in body perception. The rubber-hand illusion was induced with concomitant auditory stimulation in 24 healthy participants on 3 consecutive days. While each participant was sleeping in his or her own bed during intervening nights, electrophysiological detection of slow-wave sleep prompted covert stimulation with either the sound heard during illusion induction, a counterbalanced novel sound, or neither. TMR systematically enhanced feelings of bodily ownership after subsequent inductions of the rubber-hand illusion. TMR also enhanced spatial recalibration of perceived hand location in the direction of the rubber hand. This evidence for a sleep-based facilitation of a body-perception illusion demonstrates that the spatial recalibration of multisensory signals can be altered overnight to stabilize new learning of bodily representations. Sleep-based memory processing may thus constitute a fundamental component of body-image plasticity.
The COGs (context, object, and goals) in multisensory processing.
ten Oever, Sanne; Romei, Vincenzo; van Atteveldt, Nienke; Soto-Faraco, Salvador; Murray, Micah M; Matusz, Pawel J
2016-05-01
Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and "top-down" control processes influencing sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have been traditionally studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer's goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and the cognitive level. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and categorical attributes of naturalistic objects (e.g. tools, animals). In this review we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature review with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.
Temporal processing deficit leads to impaired multisensory binding in schizophrenia.
Zvyagintsev, Mikhail; Parisi, Carmen; Mathiak, Klaus
2017-09-01
Schizophrenia has been characterised by neurodevelopmental dysconnectivity resulting in cognitive and perceptual dysmetria. Hence, patients with schizophrenia may be impaired in detecting the temporal relationship between stimuli in different sensory modalities. However, only a few studies have described deficits in the perception of temporally asynchronous multisensory stimuli in schizophrenia. We examined the perceptual bias and the processing time of synchronous and delayed sounds in the streaming-bouncing illusion in 16 patients with schizophrenia and a matched control group of 18 participants. In both patients and controls, the synchronous sound biased the percept of two moving squares towards bouncing, as opposed to the more frequent streaming percept in the condition without sound. In healthy controls, a delay of the sound presentation significantly reduced the bias and led to prolonged processing time, whereas patients with schizophrenia did not differentiate between this condition and the condition with the synchronous sound. Schizophrenia thus leads to a prolonged window of simultaneity for audiovisual stimuli. Therefore, a temporal processing deficit in schizophrenia can lead to hyperintegration of temporally unmatched multisensory stimuli.
Ross, Lars A; Del Bene, Victor A; Molholm, Sophie; Jae Woo, Young; Andrade, Gizely N; Abrahams, Brett S; Foxe, John J
2017-11-01
Three lines of evidence motivated this study. 1) CNTNAP2 variation is associated with autism risk and speech-language development. 2) CNTNAP2 variations are associated with differences in white matter (WM) tracts comprising the speech-language circuitry. 3) Children with autism show impairment in multisensory speech perception. Here, we asked whether an autism risk-associated CNTNAP2 single nucleotide polymorphism in neurotypical adults was associated with multisensory speech perception performance, and whether such a genotype-phenotype association was mediated through white matter tract integrity in speech-language circuitry. Risk genotype at rs7794745 was associated with decreased benefit from visual speech and lower fractional anisotropy (FA) in several WM tracts (right precentral gyrus, left anterior corona radiata, right retrolenticular internal capsule). These structural connectivity differences were found to mediate the effect of genotype on audiovisual speech perception, shedding light on possible pathogenic pathways in autism and biological sources of inter-individual variation in audiovisual speech processing in neurotypicals. Copyright © 2017 Elsevier Inc. All rights reserved.
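The mediation claim here (risk genotype -> white-matter FA -> audiovisual speech benefit) corresponds to a standard indirect-effect analysis. The sketch below runs a generic product-of-coefficients mediation with a bootstrap confidence interval on simulated data; the variable names, effect sizes, and sample size are invented and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # simulated participants

# Simulate: genotype (0/1/2 risk alleles) -> FA -> audiovisual (AV) benefit.
genotype = rng.integers(0, 3, n).astype(float)
fa = 0.5 - 0.03 * genotype + 0.02 * rng.standard_normal(n)
av_benefit = 10 + 40 * fa + 1.0 * rng.standard_normal(n)

def ols_slope(x, y):
    """Slope of y on x (with intercept) via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect_effect(g, m, y):
    """Product-of-coefficients indirect effect: (g -> m) * (m -> y, controlling g)."""
    a = ols_slope(g, m)
    X = np.column_stack([np.ones_like(g), g, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

point = indirect_effect(genotype, fa, av_benefit)

# Percentile bootstrap for the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(genotype[idx], fa[idx], av_benefit[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])

print(f"indirect effect = {point:.2f}, 95% bootstrap CI [{ci_low:.2f}, {ci_high:.2f}]")
```

A confidence interval that excludes zero is the usual criterion for concluding that the structural measure mediates the genotype-behavior association.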
Deconstructing the McGurk-MacDonald Illusion
ERIC Educational Resources Information Center
Soto-Faraco, Salvador; Alsius, Agnes
2009-01-01
Cross-modal illusions such as the McGurk-MacDonald effect have been used to illustrate the automatic, encapsulated nature of multisensory integration. This characterization is based in the widespread assumption that the illusory percept arising from intersensory conflict reflects only the end-product of the multisensory integration process, with…
Audiovisual perception in amblyopia: A review and synthesis.
Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F
2018-05-17
Amblyopia is a common developmental sensory disorder that has been extensively and systematically investigated as a unisensory visual impairment. However, its effects are increasingly recognized to extend beyond vision to the multisensory domain. Indeed, amblyopia is associated with altered cross-modal interactions in audiovisual temporal perception, audiovisual spatial perception, and audiovisual speech perception. Furthermore, although the visual impairment in amblyopia is typically unilateral, the multisensory abnormalities tend to persist even when viewing with both eyes. Knowledge of the extent and mechanisms of the audiovisual impairments in amblyopia, however, remains in its infancy. This work aims to review our current understanding of audiovisual processing and integration deficits in amblyopia, and considers the possible mechanisms underlying these abnormalities. Copyright © 2018. Published by Elsevier Ltd.
Behavioural benefits of multisensory processing in ferrets.
Hammond-Kenny, Amy; Bajo, Victoria M; King, Andrew J; Nodal, Fernando R
2017-01-01
Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
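The race model inequality analysis mentioned above has a standard form that can be sketched in a few lines. The following Python sketch is illustrative only (fabricated reaction times, not the authors' data or code): it evaluates Miller's bound P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t); positive differences indicate violations, i.e. evidence for neural integration rather than probability summation between the modalities.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times on a time grid."""
    rts = np.asarray(rts)
    return np.array([(rts <= t).mean() for t in t_grid])

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Miller's race model inequality: under probability summation,
    P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t) for every t.
    Positive values returned here indicate violations, i.e. evidence
    for neural integration rather than a race between modalities."""
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound

# Illustrative (fabricated) reaction times in milliseconds
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 200)    # auditory-only responses
rt_v = rng.normal(340, 40, 200)    # visual-only responses
rt_av = rng.normal(290, 35, 200)   # audiovisual responses
t_grid = np.linspace(200, 500, 61)
violation = race_model_violation(rt_a, rt_v, rt_av, t_grid)
print("max violation:", violation.max())
```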
Kim, HyungGoo R.; Pitkow, Xaq; Angelaki, Dora E.
2016-01-01
Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs. PMID:27334948
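The decoding argument in this abstract can be illustrated with a deliberately simplified, linearized toy model (it is not the authors' network model). Each simulated neuron sums a vestibular cue (heading only) and a visual cue (heading plus object motion); congruent cells weight the two cues with the same sign, opposite cells with opposite signs. A linear readout trained on both cell types can recover heading while effectively marginalizing out object motion, whereas a readout of congruent cells alone cannot. All quantities below are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_cells = 2000, 40

# Latent variables: true heading and nuisance object motion (arbitrary units)
heading = rng.uniform(-10, 10, n_trials)
obj_motion = rng.uniform(-10, 10, n_trials)
visual = heading + obj_motion        # retinal motion confounds the two causes
vestibular = heading                 # inertial cue reflects heading only

# Toy linear neurons: congruent cells weight the cues with the same sign,
# opposite cells with opposite signs.
w_vest = rng.uniform(0.5, 1.5, n_cells)
w_vis = w_vest.copy()
opposite = np.arange(n_cells) >= n_cells // 2
w_vis[opposite] *= -1

responses = (np.outer(vestibular, w_vest) + np.outer(visual, w_vis)
             + rng.normal(0, 1.0, (n_trials, n_cells)))

def decode_error(resp):
    """Least-squares linear decoder of heading; RMS error on a held-out half."""
    half = n_trials // 2
    w, *_ = np.linalg.lstsq(resp[:half], heading[:half], rcond=None)
    est = resp[half:] @ w
    return np.sqrt(np.mean((est - heading[half:]) ** 2))

print("congruent cells only:", decode_error(responses[:, ~opposite]))
print("congruent + opposite:", decode_error(responses))
```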
Role of multisensory stimuli in vigilance enhancement - a single trial event related potential study.
Abbasi, Nida Itrat; Bodala, Indu Prasad; Bezerianos, Anastasios; Yu Sun; Al-Nashash, Hasan; Thakor, Nitish V
2017-07-01
Development of interventions to prevent vigilance decrement has important applications in sensitive areas like transportation and defence. The objective of this work is to use multisensory (visual and haptic) stimuli for cognitive enhancement during mundane tasks. Two different epoch intervals representing sensory perception and motor response were analysed using minimum variance distortionless response (MVDR) based single trial ERP estimation to understand the performance dependency on both factors. Bereitschaftspotential (BP) latency L3 was more strongly correlated with reaction time (r=0.6 in phase 1 (visual) and r=0.71 in phase 2 (visual and haptic)) than sensory ERP latency L2 was (r=0.1 in both phases). This implies that low performance in monotonous tasks is predominantly dependent on the prolonged neural interaction with the muscles to initiate movement. Further, a negative relationship was found between the ERP latencies related to sensory perception and the Bereitschaftspotential (BP) and the occurrence of epochs in which multisensory cues were provided. This suggests that vigilance decrement is reduced by multisensory stimulus presentation in prolonged monotonous tasks.
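MVDR filtering, named in the abstract above, is most easily sketched in its generic spatial-filter form; the study's exact single-trial temporal implementation may differ, so the following Python sketch is illustrative only. Given an assumed component topography a and the channel covariance R, the MVDR weights w = R⁻¹a / (aᵀR⁻¹a) pass that component with unit gain while minimizing variance contributed by everything else. The data, channel count, and topography below are all hypothetical.

```python
import numpy as np

def mvdr_weights(cov, a, reg=1e-6):
    """MVDR filter: w = R^-1 a / (a^T R^-1 a), with a small diagonal
    regularizer so the covariance inversion stays well conditioned."""
    cov = cov + reg * np.eye(cov.shape[0])
    r_inv_a = np.linalg.solve(cov, a)
    return r_inv_a / (a @ r_inv_a)

rng = np.random.default_rng(2)
trials = rng.normal(0, 1, (100, 32, 250))      # trials x channels x samples (fabricated)
a = rng.normal(0, 1, 32)                        # hypothetical component topography
X = trials.transpose(1, 0, 2).reshape(32, -1)   # channels x (trials * samples)
w = mvdr_weights(np.cov(X), a)                  # channel covariance -> filter weights
single_trial_erp = np.einsum('c,tcs->ts', w, trials)   # one time course per trial
print(single_trial_erp.shape)                   # (100, 250)
```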
Salomon, Roy; Noel, Jean-Paul; Łukowska, Marta; Faivre, Nathan; Metzinger, Thomas; Serino, Andrea; Blanke, Olaf
2017-09-01
Recent studies have highlighted the role of multisensory integration as a key mechanism of self-consciousness. In particular, integration of bodily signals within the peripersonal space (PPS) underlies the experience of the self in a body we own (self-identification) and that is experienced as occupying a specific location in space (self-location), two main components of bodily self-consciousness (BSC). Experiments investigating the effects of multisensory integration on BSC have typically employed supra-threshold sensory stimuli, neglecting the role of unconscious sensory signals in BSC, as tested in other consciousness research. Here, we used psychophysical techniques to test whether multisensory integration of bodily stimuli underlying BSC also occurs for multisensory inputs presented below the threshold of conscious perception. Our results indicate that visual stimuli rendered invisible through continuous flash suppression boost processing of tactile stimuli on the body (Exp. 1), and enhance the perception of near-threshold tactile stimuli (Exp. 2), only once they entered PPS. We then employed unconscious multisensory stimulation to manipulate BSC. Participants were presented with tactile stimulation on their body and with visual stimuli on a virtual body, seen at a distance, which were either visible or rendered invisible. We found that participants reported higher self-identification with the virtual body in the synchronous visuo-tactile stimulation (as compared to asynchronous stimulation; Exp. 3), and shifted their self-location toward the virtual body (Exp.4), even if stimuli were fully invisible. Our results indicate that multisensory inputs, even outside of awareness, are integrated and affect the phenomenological content of self-consciousness, grounding BSC firmly in the field of psychophysical consciousness studies. Copyright © 2017 Elsevier B.V. All rights reserved.
Stone, David B.; Urrea, Laura J.; Aine, Cheryl J.; Bustillo, Juan R.; Clark, Vincent P.; Stephen, Julia M.
2011-01-01
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. PMID:21807011
Multisensory speech perception in autism spectrum disorder: From phoneme to whole-word perception.
Stevenson, Ryan A; Baum, Sarah H; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Wallace, Mark T
2017-07-01
Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically-developing (TD) children to integrate auditory and visual speech stimuli in various signal-to-noise ratios (SNR). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration with whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in auditory, visual, and audiovisual modalities. However, and in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate to typical multisensory benefit of whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition. Autism Res 2017, 10: 1280-1290. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
An intact action-perception coupling depends on the integrity of the cerebellum.
Christensen, Andrea; Giese, Martin A; Sultan, Fahad; Mueller, Oliver M; Goericke, Sophia L; Ilg, Winfried; Timmann, Dagmar
2014-05-07
It is widely accepted that action and perception in humans functionally interact on multiple levels. Moreover, areas originally suggested to be predominantly motor-related, such as the cerebellum, are also involved in action observation. However, as yet, few studies have provided unequivocal evidence that the cerebellum is involved in action-perception coupling (APC), specifically in the integration of motor and multisensory information for perception. We addressed this question by studying patients with focal cerebellar lesions in a virtual-reality paradigm measuring the effect of action execution on action perception, in which self-generated movements were presented as point-light displays. We measured the visual sensitivity to the point-light stimuli based on signal detection theory. Compared with healthy controls, cerebellar patients showed no beneficial influence of action execution on perception, indicating deficits in APC. Applying lesion symptom mapping, we identified distinct areas in the dentate nucleus and the lateral cerebellum of both hemispheres that are causally involved in APC. Lesions of the right ventral dentate, the ipsilateral motor representations (lobules V/VI), and most interestingly the contralateral posterior cerebellum (lobule VII) impede the benefits of motor execution on perception. We conclude that the cerebellum establishes time-dependent multisensory representations on different levels, relevant for motor control as well as supporting action perception. Ipsilateral cerebellar motor representations are thought to support the somatosensory state estimate of ongoing movements, whereas the ventral dentate and the contralateral posterior cerebellum likely support sensorimotor integration in the cerebellar-parietal loops. Both the correct somatosensory as well as the multisensory state representations are vital for an intact APC.
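Visual sensitivity based on signal detection theory, as used in the study above, is conventionally summarized by d' = z(hit rate) - z(false-alarm rate). A minimal Python sketch follows; the trial counts are fabricated and the log-linear correction is one common choice, not necessarily the one used by the authors.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction keeps the rates away from 0 and 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Illustrative counts from one point-light detection block (fabricated)
print(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38))
```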
Mapping multisensory parietal face and body areas in humans.
Huang, Ruey-Song; Chen, Ching-fu; Tran, Alyssa T; Holstein, Katie L; Sereno, Martin I
2012-10-30
Detection and avoidance of impending obstacles is crucial to preventing head and body injuries in daily life. To safely avoid obstacles, locations of objects approaching the body surface are usually detected via the visual system and then used by the motor system to guide defensive movements. Mediating between visual input and motor output, the posterior parietal cortex plays an important role in integrating multisensory information in peripersonal space. We used functional MRI to map parietal areas that see and feel multisensory stimuli near or on the face and body. Tactile experiments using full-body air-puff stimulation suits revealed somatotopic areas of the face and multiple body parts forming a higher-level homunculus in the superior posterior parietal cortex. Visual experiments using wide-field looming stimuli revealed retinotopic maps that overlap with the parietal face and body areas in the postcentral sulcus at the most anterior border of the dorsal visual pathway. Starting at the parietal face area and moving medially and posteriorly into the lower-body areas, the median of visual polar-angle representations in these somatotopic areas gradually shifts from near the horizontal meridian into the lower visual field. These results suggest the parietal face and body areas fuse multisensory information in peripersonal space to guard an individual from head to toe.
Audio-visual temporal perception in children with restored hearing.
Gori, Monica; Chilosi, Anna; Forli, Francesca; Burr, David
2017-05-01
It is not clear how audio-visual temporal perception develops in children with restored hearing. In this study we measured temporal discrimination thresholds with an audio-visual temporal bisection task in 9 deaf children with restored audition and 22 typically hearing children. In typically hearing children, audition was more precise than vision, with no gain in multisensory conditions (as previously reported in Gori et al. (2012b)). However, deaf children with restored audition showed similar auditory and visual thresholds, and some evidence of gain in multisensory audio-visual temporal conditions. Interestingly, we found a strong correlation between the auditory weighting of multisensory signals and quality of language: patients who gave more weight to audition had better language skills. Similarly, auditory thresholds for the temporal bisection task were also a good predictor of language skills. This result supports the idea that temporal auditory processing is associated with language development. Copyright © 2017. Published by Elsevier Ltd.
Multisensory Technology for Flavor Augmentation: A Mini Review
Velasco, Carlos; Obrist, Marianna; Petit, Olivia; Spence, Charles
2018-01-01
There is growing interest in the development of new technologies that capitalize on our emerging understanding of the multisensory influences on flavor perception in order to enhance human–food interaction design. This review focuses on the role of (extrinsic) visual, auditory, and haptic/tactile elements in modulating flavor perception and more generally, our food and drink experiences. We review some of the most exciting examples of recent multisensory technologies for augmenting such experiences. Here, we discuss applications for these technologies, for example, in the field of food experience design, in the support of healthy eating, and in the rapidly growing world of sensory marketing. However, as the review makes clear, while there are many opportunities for novel human–food interaction design, there are also a number of challenges that will need to be tackled before new technologies can be meaningfully integrated into our everyday food and drink experiences. PMID:29441030
Ash, April; Palmisano, Stephen
2012-01-01
We examined the vection induced by consistent and conflicting multisensory information about self-motion. Observers viewed displays simulating constant-velocity self-motion in depth while physically oscillating their heads left-right or back-forth in time with a metronome. Their tracked head movements were either ignored or incorporated directly into the self-motion display (as an added simulated self-acceleration). When this head oscillation was updated into displays, sensory conflict was generated by simulating oscillation along: (i) an orthogonal axis to the head movement; or (ii) the same axis, but in a non-ecological direction. Simulated head oscillation always produced stronger vection than 'no display oscillation'--even when the axis/direction of this display motion was inconsistent with the physical head motion. When head-and-display oscillation occurred along the same axis: (i) consistent (in-phase) horizontal display oscillation produced stronger vection than conflicting (out-of-phase) horizontal display oscillation; however, (ii) consistent and conflicting depth oscillation conditions did not induce significantly different vection. Overall, orthogonal-axis oscillation was found to produce very similar vection to same-axis oscillation. Thus, we conclude that while vection appears to be very robust to sensory conflict, there are situations where sensory consistency improves vection.
Audio-tactile integration and the influence of musical training.
Kuchenbuch, Anja; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Pantev, Christo
2014-01-01
Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.
Cortical Hierarchies Perform Bayesian Causal Inference in Multisensory Perception
Rohe, Tim; Noppeney, Uta
2015-01-01
To form a veridical percept of the environment, the brain needs to integrate sensory signals from a common source but segregate those from independent sources. Thus, perception inherently relies on solving the “causal inference problem.” Behaviorally, humans solve this problem optimally as predicted by Bayesian Causal Inference; yet, the underlying neural mechanisms are unexplored. Combining psychophysics, Bayesian modeling, functional magnetic resonance imaging (fMRI), and multivariate decoding in an audiovisual spatial localization task, we demonstrate that Bayesian Causal Inference is performed by a hierarchy of multisensory processes in the human brain. At the bottom of the hierarchy, in auditory and visual areas, location is represented on the basis that the two signals are generated by independent sources (= segregation). At the next stage, in posterior intraparietal sulcus, location is estimated under the assumption that the two signals are from a common source (= forced fusion). Only at the top of the hierarchy, in anterior intraparietal sulcus, the uncertainty about the causal structure of the world is taken into account and sensory signals are combined as predicted by Bayesian Causal Inference. Characterizing the computational operations of signal interactions reveals the hierarchical nature of multisensory perception in human neocortex. It unravels how the brain accomplishes Bayesian Causal Inference, a statistical computation fundamental for perception and cognition. Our results demonstrate how the brain combines information in the face of uncertainty about the underlying causal structure of the world. PMID:25710328
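The Bayesian Causal Inference computation referred to above follows a standard formulation (in the spirit of Körding et al., 2007; not necessarily the exact model fitted in this study). The Python sketch below computes the posterior probability of a common cause for one audiovisual localization trial and returns the model-averaged auditory location estimate; all parameter values are illustrative.

```python
import numpy as np

def bci_estimate(x_a, x_v, sig_a, sig_v, sig_p=20.0, p_common=0.5):
    """Bayesian Causal Inference for spatial localization.
    x_a, x_v: noisy auditory and visual measurements (deg);
    sig_a, sig_v: sensory noise SDs; sig_p: SD of a zero-mean spatial prior;
    p_common: prior probability that both signals share one cause.
    Returns the model-averaged auditory location estimate."""
    va, vv, vp = sig_a**2, sig_v**2, sig_p**2

    # Likelihood of the two measurements under a common cause (C = 1) ...
    var_c1 = va * vv + va * vp + vv * vp
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va)
                     / var_c1) / (2 * np.pi * np.sqrt(var_c1))
    # ... and under two independent causes (C = 2)
    like_c2 = np.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp))) \
              / (2 * np.pi * np.sqrt((va + vp) * (vv + vp)))

    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Optimal estimates under each causal structure (precision-weighted means)
    s_c1 = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)
    s_c2 = (x_a / va) / (1 / va + 1 / vp)

    # Model averaging: weight the two estimates by the posterior over causes
    return post_c1 * s_c1 + (1 - post_c1) * s_c2

print(bci_estimate(x_a=5.0, x_v=-3.0, sig_a=4.0, sig_v=1.5))
```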
'Finding the person the disease has'--the case for multisensory environments.
Hope, K W; Easby, R; Waterman, H
2004-10-01
Education about, and exposure to, the utilization of a multisensory environment (MSE) was provided to clinical staff in response to findings from the problem identification stage of an action research study. Feedback was obtained about their experience and perceptions of its use. Through focus groups and one-to-one interviews, respondents commented on the impact that using the MSE had on their perception of their clients and on subsequent care. The case is made that MSEs afford an opportunity to impact on care through their mediating influence on formal carers' perceptions of their clients and, as such, represent a significant but as yet unrealized potential for improving the quality of care of older people with dementia.
Attention and multisensory modulation argue against total encapsulation.
de Haas, Benjamin; Schwarzkopf, Dietrich Samuel; Rees, Geraint
2016-01-01
Firestone & Scholl (F&S) postulate that vision proceeds without any direct interference from cognition. We argue that this view is extreme and not in line with the available evidence. Specifically, we discuss two well-established counterexamples: Attention directly affects core aspects of visual processing, and multisensory modulations of vision originate on multiple levels, some of which are unlikely to fall "within perception."
Understanding Freshness Perception from the Cognitive Mechanisms of Flavor: The Case of Beverages
Roque, Jérémy; Auvray, Malika; Lafraire, Jérémie
2018-01-01
Freshness perception has received recent consideration in the field of consumer science mainly because of its hedonic dimension, which is assumed to influence consumers' preference and behavior. However, most studies have considered freshness as a multisensory attribute of food and beverage products without investigating the cognitive mechanisms at hand. In the present review, we endorse a slightly different perspective on freshness. We focus on (i) the multisensory integration processes that underpin freshness perception, and (ii) the top-down factors that influence the explicit attribution of freshness to a product by consumers. To do so, we exploit the recent literature on the cognitive underpinnings of flavor perception as a heuristic to better characterize the mechanisms of freshness perception in the particular case of beverages. We argue that the lack of consideration of particular instances of flavor, such as freshness, has resulted in a lack of consensus about the content and structure of different types of flavor representations. We then enrich these theoretical analyses with a review of the cognitive mechanisms of flavor perception: from multisensory integration processes to the influence of top-down factors (e.g., attentional and semantic). We conclude that similarly to flavor, freshness perception is characterized by hybrid content, both perceptual and semantic, but that freshness has a higher degree of specificity than flavor. In particular, contrary to flavor, freshness is characterized by specific functions (e.g., alleviation of oropharyngeal symptoms) and likely differs from flavor with respect to the weighting of each sensory contributor, as well as to its subjective location. Finally, we provide a comprehensive model of the cognitive mechanisms that underlie freshness perception. This model paves the way for further empirical research on particular instances of flavor, and will enable advances in the field of food and beverage cognition. PMID:29375453
Balz, Johanna; Roa Romero, Yadira; Keil, Julian; Krebber, Martin; Niedeggen, Michael; Gallinat, Jürgen; Senkowski, Daniel
2016-01-01
Recent behavioral and neuroimaging studies have suggested multisensory processing deficits in patients with schizophrenia (SCZ). Thus far, the neural mechanisms underlying these deficits are not well understood. Previous studies with unisensory stimulation have shown altered neural oscillations in SCZ. As such, altered oscillations could contribute to aberrant multisensory processing in this patient group. To test this assumption, we conducted an electroencephalography (EEG) study in 15 SCZ and 15 control participants in whom we examined neural oscillations and event-related potentials (ERPs) in the sound-induced flash illusion (SIFI). In the SIFI multiple auditory stimuli that are presented alongside a single visual stimulus can induce the illusory percept of multiple visual stimuli. In SCZ and control participants we compared ERPs and neural oscillations between trials that induced an illusion and trials that did not induce an illusion. On the behavioral level, SCZ (55.7%) and control participants (55.4%) did not significantly differ in illusion rates. The analysis of ERPs revealed diminished amplitudes and altered multisensory processing in SCZ compared to controls around 135 ms after stimulus onset. Moreover, the analysis of neural oscillations revealed altered 25–35 Hz power after 100 to 150 ms over occipital scalp for SCZ compared to controls. Our findings extend previous observations of aberrant neural oscillations in unisensory perception paradigms. They suggest that altered ERPs and altered occipital beta/gamma band power reflect aberrant multisensory processing in SCZ. PMID:27999553
The multisensory function of the human primary visual cortex.
Murray, Micah M; Thelen, Antonia; Thut, Gregor; Romei, Vincenzo; Martuzzi, Roberto; Matusz, Pawel J
2016-03-01
It has been nearly 10 years since Ghazanfar and Schroeder (2006) proposed that the neocortex is essentially multisensory in nature. However, it is only recently that sufficient hard evidence supporting this proposal has accrued. We review evidence that activity within the human primary visual cortex plays an active role in multisensory processes and directly impacts behavioural outcome. This evidence emerges from a full palette of human brain imaging and brain mapping methods with which multisensory processes are quantitatively assessed by taking advantage of particular strengths of each technique as well as advances in signal analyses. Several general conclusions about multisensory processes in primary visual cortex of humans are supported relatively solidly. First, haemodynamic methods (fMRI/PET) show that there is both convergence and integration occurring within primary visual cortex. Second, primary visual cortex is involved in multisensory processes during early post-stimulus stages (as revealed by EEG/ERP/ERFs as well as TMS). Third, multisensory effects in primary visual cortex directly impact behaviour and perception, as revealed by correlational (EEG/ERPs/ERFs) as well as more causal measures (TMS/tACS). While the provocative claim of Ghazanfar and Schroeder (2006) that the whole of neocortex is multisensory in function has yet to be demonstrated, this can now be considered established in the case of the human primary visual cortex. Copyright © 2015 Elsevier Ltd. All rights reserved.
The associations between multisensory temporal processing and symptoms of schizophrenia.
Stevenson, Ryan A; Park, Sohee; Cochran, Channing; McIntosh, Lindsey G; Noel, Jean-Paul; Barense, Morgan D; Ferber, Susanne; Wallace, Mark T
2017-01-01
Recent neurobiological accounts of schizophrenia have included an emphasis on changes in sensory processing. These sensory and perceptual deficits can have a cascading effect onto higher-level cognitive processes and clinical symptoms. One form of sensory dysfunction that has been consistently observed in schizophrenia is altered temporal processing. In this study, we investigated temporal processing within and across the auditory and visual modalities in individuals with schizophrenia (SCZ) and age-matched healthy controls. Individuals with SCZ showed auditory and visual temporal processing abnormalities, as well as multisensory temporal processing dysfunction that extended beyond that attributable to unisensory processing dysfunction. Most importantly, these multisensory temporal deficits were associated with the severity of hallucinations. This link between atypical multisensory temporal perception and clinical symptomatology suggests that clinical symptoms of schizophrenia may be at least partly a result of cascading effects from (multi)sensory disturbances. These results are discussed in terms of underlying neural bases and the possible implications for remediation. Copyright © 2016 Elsevier B.V. All rights reserved.
Goulard, Roman; Julien-Laferriere, Alice; Fleuriet, Jérome; Vercher, Jean-Louis; Viollet, Stéphane
2015-12-01
The ability of hoverflies to control their head orientation with respect to their body contributes importantly to their agility and their autonomous navigation abilities. Many tasks performed by this insect during flight, especially while hovering, involve a head stabilization reflex. This reflex, which is mediated by multisensory channels, prevents the visual processing from being disturbed by motion blur and maintains a consistent perception of the visual environment. The so-called dorsal light response (DLR) is another head control reflex, which makes insects sensitive to the brightest part of the visual field. In this study, we experimentally validate and quantify the control loop driving the head roll with respect to the horizon in hoverflies. The new approach developed here consisted of using an upside-down horizon in a body roll paradigm. In this unusual configuration, tethered flying hoverflies surprisingly no longer use purely vision-based control for head stabilization. These results shed new light on the role of neck proprioceptor organs in head and body stabilization with respect to the horizon. Based on the responses obtained with male and female hoverflies, an improved model was then developed in which the output signals delivered by the neck proprioceptor organs are combined with the visual error in the estimated position of the body roll. An internal estimation of the body roll angle with respect to the horizon might explain the extremely accurate flight performances achieved by some hovering insects. © 2015. Published by The Company of Biologists Ltd.
The vestibular system: a spatial reference for bodily self-consciousness
Pfeiffer, Christian; Serino, Andrea; Blanke, Olaf
2014-01-01
Self-consciousness is the remarkable human experience of being a subject: the “I”. Self-consciousness is typically bound to a body, and particularly to the spatial dimensions of the body, as well as to its location and displacement in the gravitational field. Because the vestibular system encodes head position and movement in three-dimensional space, vestibular cortical processing likely contributes to spatial aspects of bodily self-consciousness. We review here recent data showing vestibular effects on first-person perspective (the feeling from where “I” experience the world) and self-location (the feeling where “I” am located in space). We compare these findings to data showing vestibular effects on mental spatial transformation, self-motion perception, and body representation showing vestibular contributions to various spatial representations of the body with respect to the external world. Finally, we discuss the role for four posterior brain regions that process vestibular and other multisensory signals to encode spatial aspects of bodily self-consciousness: temporoparietal junction, parietoinsular vestibular cortex, ventral intraparietal region, and medial superior temporal region. We propose that vestibular processing in these cortical regions is critical in linking multisensory signals from the body (personal and peripersonal space) with external (extrapersonal) space. Therefore, the vestibular system plays a critical role for neural representations of spatial aspects of bodily self-consciousness. PMID:24860446
Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.
Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L
2017-05-01
Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.
Wallace, Mark T.; Stevenson, Ryan A.
2014-01-01
Behavior, perception and cognition are strongly shaped by the synthesis of information across the different sensory modalities. Such multisensory integration often results in performance and perceptual benefits that reflect the additional information conferred by having cues from multiple senses providing redundant or complementary information. The spatial and temporal relationships of these cues provide powerful statistical information about how these cues should be integrated or "bound" in order to create a unified perceptual representation. Much recent work has examined the temporal factors that are integral in multisensory processing, with many studies focused on the construct of the multisensory temporal binding window – the epoch of time within which stimuli from different modalities are likely to be integrated and perceptually bound. Emerging evidence suggests that this temporal window is altered in a series of neurodevelopmental disorders, including autism, dyslexia and schizophrenia. In addition to their role in sensory processing, these deficits in multisensory temporal function may play an important role in the perceptual and cognitive weaknesses that characterize these clinical disorders. Within this context, focus on improving the acuity of multisensory temporal function may have important implications for the amelioration of the "higher-order" deficits that serve as the defining features of these disorders. PMID:25128432
David, Nicole; R Schneider, Till; Vogeley, Kai; Engel, Andreas K
2011-10-01
Individuals suffering from autism spectrum disorders (ASD) often show a tendency for detail- or feature-based perception (also referred to as "local processing bias") instead of the more holistic stimulus processing typical for unaffected people. This local processing bias has been demonstrated for the visual and auditory domains and there is evidence that multisensory processing may also be affected in ASD. Most multisensory processing paradigms used social-communicative stimuli, such as human speech or faces, probing the processing of simultaneously occurring sensory signals. Multisensory processing, however, is not limited to simultaneous stimulation. In this study, we investigated whether multisensory processing deficits in ASD persist when semantically complex but nonsocial stimuli are presented in succession. Fifteen adult individuals with Asperger syndrome and 15 control persons participated in a visual-audio priming task, which required the classification of sounds that were either primed by semantically congruent or incongruent preceding pictures of objects. As expected, performance on congruent trials was faster and more accurate compared with incongruent trials (crossmodal priming effect). The Asperger group, however, did not differ significantly from the control group. Our results do not support a general multisensory processing deficit that is universal to the entire autism spectrum. Copyright © 2011, International Society for Autism Research, Wiley-Liss, Inc.
Multisensory integration in complete unawareness: evidence from audiovisual congruency priming.
Faivre, Nathan; Mudrik, Liad; Schwartz, Naama; Koch, Christof
2014-11-01
Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter "b" or "m," respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit "6" or "8," respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration. © The Author(s) 2014.
Congruent and Opposite Neurons as Partners in Multisensory Integration and Segregation
NASA Astrophysics Data System (ADS)
Zhang, Wen-Hao; Wong, K. Y. Michael; Wang, He; Wu, Si
Experiments have revealed that, where visual and vestibular cues are integrated in the brain to infer heading direction, there are two types of neurons in roughly equal numbers: congruent cells, which respond similarly to visual and vestibular cues, and opposite cells, which respond to the two cues with opposite tuning. Congruent neurons are known to be responsible for cue integration, but the computational role of opposite neurons remains largely unknown. We propose that opposite neurons may serve to encode the disparity information between cues necessary for multisensory segregation. We build a computational model composed of two reciprocally coupled modules, each consisting of groups of congruent and opposite neurons. Our model reproduces the characteristics of congruent and opposite neurons, and demonstrates that in each module, congruent and opposite neurons can jointly achieve optimal multisensory information integration and segregation. This study sheds light on our understanding of how the brain implements optimal multisensory integration and segregation concurrently in a distributed manner. This work is supported by the Research Grants Council of Hong Kong (N_HKUST606/12, 605813, and 16322616), the National Basic Research Program of China (2014CB846101), and the Natural Science Foundation of China (31261160495).
Shifts in Audiovisual Processing in Healthy Aging.
Baum, Sarah H; Stevenson, Ryan
2017-09-01
The integration of information across sensory modalities into unified percepts is a fundamental sensory process upon which a multitude of cognitive processes are based. We review the body of literature exploring aging-related changes in audiovisual integration published over the last five years. Specifically, we review the impact of changes in temporal processing, the influence of the effectiveness of sensory inputs, the role of working memory, and the newer studies of intra-individual variability during these processes. Work in the last five years on bottom-up influences of sensory perception has garnered significant attention. Temporal processing, a driving factor of multisensory integration, has now been shown to decouple from multisensory integration in aging, despite their co-decline with age. The impact of stimulus effectiveness also changes with age, where older adults show maximal benefit from multisensory gain at high signal-to-noise ratios. Following sensory decline, high working memory capacities have now been shown to be somewhat of a protective factor against age-related declines in audiovisual speech perception, particularly in noise. Finally, newer research is emerging that focuses on the general intra-individual variability observed with aging. Overall, the studies of the past five years have replicated and expanded on previous work that highlights the role of bottom-up sensory changes with aging and their influence on audiovisual integration, as well as the top-down influence of working memory.
Pasluosta, Cristian; Kiele, Patrick; Stieglitz, Thomas
2018-04-01
The somatosensory system contributes substantially to the integration of multiple sensory modalities into perception. Tactile sensations, proprioception and even temperature perception are integrated into the perceived embodiment of our limbs. Damage to somatosensory networks can severely affect the execution of daily life activities. Peripheral injuries are optimally corrected via direct interfacing of the peripheral nerves. Recent advances in implantable devices, stimulation paradigms, and biomimetic sensors have enabled the restoration of natural sensations after amputation of the limb. The refinement of stimulation patterns to deliver natural feedback that can be interpreted intuitively, without long learning sessions, is crucial to functional restoration. For this review, we collected state-of-the-art knowledge on the evolution of stimulation paradigms from single fiber stimulation to the eliciting of multisensory sensations. Data from the literature are structured into six sections: (a) physiology of the somatosensory system; (b) stimulation of single fibers; (c) restoration of multisensory percepts; (d) closure of the control loop in hand prostheses; (e) sensory restoration and the sense of embodiment, and (f) methodologies to assess stimulation outcomes. Full functional recovery demands further research on multisensory integration and brain plasticity, which will bring new paradigms for intuitive sensory feedback in the next generation of limb prostheses. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
Cuppini, Cristiano; Ursino, Mauro; Magosso, Elisa; Ross, Lars A.; Foxe, John J.; Molholm, Sophie
2017-01-01
Failure to appropriately develop multisensory integration (MSI) of audiovisual speech may affect a child's ability to attain optimal communication. Studies have shown protracted development of MSI into late-childhood and identified deficits in MSI in children with an autism spectrum disorder (ASD). Currently, the neural basis of acquisition of this ability is not well understood. Here, we developed a computational model informed by neurophysiology to analyze possible mechanisms underlying MSI maturation, and its delayed development in ASD. The model posits that strengthening of feedforward and cross-sensory connections, responsible for the alignment of auditory and visual speech sound representations in posterior superior temporal gyrus/sulcus, can explain behavioral data on the acquisition of MSI. This was simulated by a training phase during which the network was exposed to unisensory and multisensory stimuli, and projections were crafted by Hebbian rules of potentiation and depression. In its mature architecture, the network also reproduced the well-known multisensory McGurk speech effect. Deficits in audiovisual speech perception in ASD were well accounted for by fewer multisensory exposures, compatible with a lack of attention, but not by reduced synaptic connectivity or synaptic plasticity. PMID:29163099
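The training mechanism described above, Hebbian potentiation and depression of feedforward and cross-sensory projections, can be sketched generically in Python. The sketch below is not the published model's equations; it is an illustrative Hebbian update with heterosynaptic depression and weight clipping, applied to hypothetical visual-to-auditory connections during repeated exposure to a congruent audiovisual stimulus.

```python
import numpy as np

rng = np.random.default_rng(3)
n_aud, n_vis = 20, 20
w_cross = np.abs(rng.normal(0.01, 0.005, (n_aud, n_vis)))  # visual -> auditory weights

def hebbian_step(w, pre, post, lr=0.05, w_max=1.0):
    """Hebbian potentiation with heterosynaptic depression:
    co-active pre/post pairs are strengthened, connections from silent
    pre-synaptic units onto active post-synaptic units are weakened,
    and weights are clipped to [0, w_max]."""
    dw = lr * np.outer(post, pre) - 0.5 * lr * np.outer(post, 1.0 - pre)
    return np.clip(w + dw, 0.0, w_max)

# Repeated exposure to a congruent audiovisual "syllable": overlapping
# Gaussian bumps of activity in the auditory and visual layers.
units = np.arange(n_aud)
aud = np.exp(-0.5 * ((units - 8) / 2.0) ** 2)
vis = np.exp(-0.5 * ((units - 8) / 2.0) ** 2)
for _ in range(200):
    w_cross = hebbian_step(w_cross, pre=vis, post=aud)

print("peak cross-modal weight:", w_cross.max())
```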
The role of multisensory interplay in enabling temporal expectations.
Ball, Felix; Michels, Lara E; Thiele, Carsten; Noesselt, Toemme
2018-01-01
Temporal regularities can guide our attention to focus on a particular moment in time and to be especially vigilant just then. Previous research provided evidence for the influence of temporal expectation on perceptual processing in unisensory auditory, visual, and tactile contexts. However, in real life we are often exposed to a complex and continuous stream of multisensory events. Here we tested - in a series of experiments - whether temporal expectations can enhance perception in multisensory contexts and whether this enhancement differs from enhancements in unisensory contexts. Our discrimination paradigm contained near-threshold targets (subject-specific 75% discrimination accuracy) embedded in a sequence of distractors. The likelihood of target occurrence (early or late) was manipulated block-wise. Furthermore, we tested whether spatial and modality-specific target uncertainty (i.e. predictable vs. unpredictable target position or modality) would affect temporal expectation (TE) measured with perceptual sensitivity (d') and response times (RT). In all our experiments, hidden temporal regularities improved performance for expected multisensory targets. Moreover, multisensory performance was unaffected by spatial and modality-specific uncertainty, whereas unisensory TE effects on d' but not RT were modulated by spatial and modality-specific uncertainty. Additionally, the size of the temporal expectation effect, i.e. the increase in perceptual sensitivity and decrease of RT, scaled linearly with the likelihood of expected targets. Finally, temporal expectation effects were unaffected by varying target position within the stream. Together, our results strongly suggest that participants quickly adapt to novel temporal contexts, that they benefit from multisensory (relative to unisensory) stimulation and that multisensory benefits are maximal if the stimulus-driven uncertainty is highest. We propose that enhanced informational content (i.e. multisensory stimulation) enables the robust extraction of temporal regularities which in turn boost (uni-)sensory representations. Copyright © 2017 Elsevier B.V. All rights reserved.
Heading Tuning in Macaque Area V6.
Fan, Reuben H; Liu, Sheng; DeAngelis, Gregory C; Angelaki, Dora E
2015-12-16
Cortical areas, such as the dorsal subdivision of the medial superior temporal area (MSTd) and the ventral intraparietal area (VIP), have been shown to integrate visual and vestibular self-motion signals. Area V6 is interconnected with areas MSTd and VIP, allowing for the possibility that V6 also integrates visual and vestibular self-motion cues. An alternative hypothesis in the literature is that V6 does not use these sensory signals to compute heading but instead discounts self-motion signals to represent object motion. However, the responses of V6 neurons to visual and vestibular self-motion cues have never been studied, thus leaving the functional roles of V6 unclear. We used a virtual reality system to examine the 3D heading tuning of macaque V6 neurons in response to optic flow and inertial motion stimuli. We found that the majority of V6 neurons are selective for heading defined by optic flow. However, unlike areas MSTd and VIP, V6 neurons are almost universally unresponsive to inertial motion in the absence of optic flow. We also explored the spatial reference frames of heading signals in V6 by measuring heading tuning for different eye positions, and we found that the visual heading tuning of most V6 cells was eye-centered. Similar to areas MSTd and VIP, the population of V6 neurons was best able to discriminate small variations in heading around forward and backward headings. Our findings support the idea that V6 is involved primarily in processing visual motion signals and does not appear to play a role in visual-vestibular integration for self-motion perception. To understand how we successfully navigate our world, it is important to understand which parts of the brain process cues used to perceive our direction of self-motion (i.e., heading). Cortical area V6 has been implicated in heading computations based on human neuroimaging data, but direct measurements of heading selectivity in individual V6 neurons have been lacking. We provide the first demonstration that V6 neurons carry 3D visual heading signals, which are represented in an eye-centered reference frame. In contrast, we found almost no evidence for vestibular heading signals in V6, indicating that V6 is unlikely to contribute to multisensory integration of heading signals, unlike other cortical areas. These findings provide important constraints on the roles of V6 in self-motion perception. Copyright © 2015 the authors 0270-6474/15/3516303-12$15.00/0.
Kilteni, Konstantina; Maselli, Antonella; Kording, Konrad P.; Slater, Mel
2015-01-01
Which is my body and how do I distinguish it from the bodies of others, or from objects in the surrounding environment? The perception of our own body and more particularly our sense of body ownership is taken for granted. Nevertheless, experimental findings from body ownership illusions (BOIs), show that under specific multisensory conditions, we can experience artificial body parts or fake bodies as our own body parts or body, respectively. The aim of the present paper is to discuss how and why BOIs are induced. We review several experimental findings concerning the spatial, temporal, and semantic principles of crossmodal stimuli that have been applied to induce BOIs. On the basis of these principles, we discuss theoretical approaches concerning the underlying mechanism of BOIs. We propose a conceptualization based on Bayesian causal inference for addressing how our nervous system could infer whether an object belongs to our own body, using multisensory, sensorimotor, and semantic information, and we discuss how this can account for several experimental findings. Finally, we point to neural network models as an implementational framework within which the computational problem behind BOIs could be addressed in the future. PMID:25852524
Liu, Juan; Ando, Hiroshi
2016-01-01
Most real-world events stimulate multiple sensory modalities simultaneously. Usually, the stiffness of an object is perceived haptically. However, auditory signals also contain stiffness-related information, and people can form impressions of stiffness from the different impact sounds of metal, wood, or glass. To understand whether there is any interaction between auditory and haptic stiffness perception, and if so, whether the inferred material category is the most relevant auditory information, we conducted experiments using a force-feedback device and the modal synthesis method to present haptic stimuli and impact sound in accordance with participants’ actions, and to modulate low-level acoustic parameters, i.e., frequency and damping, without changing the inferred material categories of sound sources. We found that metal sounds consistently induced an impression of stiffer surfaces than did drum sounds in the audio-only condition, but participants haptically perceived surfaces with modulated metal sounds as significantly softer than the same surfaces with modulated drum sounds, which directly opposes the impression induced by these sounds alone. This result indicates that, although the inferred material category is strongly associated with audio-only stiffness perception, low-level acoustic parameters, especially damping, are more tightly integrated with haptic signals than the material category is. Frequency played an important role in both audio-only and audio-haptic conditions. Our study provides evidence that auditory information influences stiffness perception differently in unisensory and multisensory tasks. Furthermore, the data demonstrated that sounds with higher frequency and/or shorter decay time tended to be judged as stiffer, and contact sounds of stiff objects had no effect on the haptic perception of soft surfaces. We argue that the intrinsic physical relationship between object stiffness and acoustic parameters may be applied as prior knowledge to achieve robust estimation of stiffness in multisensory perception. PMID:27902718
To crash or not to crash: how do hoverflies cope with free-fall situations and weightlessness?
Goulard, Roman; Vercher, Jean-Louis; Viollet, Stéphane
2016-08-15
Insects' aptitude to perform hovering, automatic landing and tracking tasks involves accurately controlling their head and body roll and pitch movements, but how this attitude control depends on an internal estimation of gravity orientation is still an open question. Gravity perception in flying insects has mainly been studied in terms of grounded animals' tactile orientation responses, but it has not yet been established whether hoverflies use gravity perception cues to detect a nearly weightless state at an early stage. Ground-based microgravity simulators provide biologists with useful tools for studying the effects of changes in gravity. However, in view of the cost and the complexity of these set-ups, an alternative Earth-based free-fall procedure was developed with which flying insects can be briefly exposed to microgravity under various visual conditions. Hoverflies frequently initiated wingbeats in response to an imposed free fall in all the conditions tested, but managed to avoid crashing only in variably structured visual environments, and only episodically in darkness. Our results reveal that the crash-avoidance performance of these insects in various visual environments suggests the existence of a multisensory control system based mainly on vision rather than gravity perception. © 2016. Published by The Company of Biologists Ltd.
ERIC Educational Resources Information Center
Cardini, Flavia; Haggard, Patrick; Ladavas, Elisabetta
2013-01-01
We have investigated the relation between visuo-tactile interactions and the self-other distinction. In the Visual Enhancement of Touch (VET) effect, non-informative vision of one's own hand improves tactile spatial perception. Previous studies suggested that looking at "another" person's hand could also enhance tactile perception, but did not…
Auditorily-induced illusory self-motion: a review.
Väljamäe, Aleksander
2009-10-01
The aim of this paper is to provide a first review of studies related to auditorily-induced self-motion (vection). These studies have been scarce and scattered over the years and over several research communities including clinical audiology, multisensory perception of self-motion and its neural correlates, ergonomics, and virtual reality. The reviewed studies provide evidence that auditorily-induced vection has behavioral, physiological and neural correlates. Although the sound contribution to self-motion perception appears to be weaker than that of the visual modality, specific acoustic cues appear to be instrumental for a number of domains including posture prosthesis, navigation in unusual gravitoinertial environments (in the air, in space, or underwater), non-visual navigation, and multisensory integration during self-motion. A number of open research questions are highlighted, opening avenues for more active and systematic studies in this area.
Zaidel, Adam; Goin-Kochel, Robin P.; Angelaki, Dora E.
2015-01-01
Perceptual processing in autism spectrum disorder (ASD) is marked by superior low-level task performance and inferior complex-task performance. This observation has led to theories of defective integration in ASD of local parts into a global percept. Despite mixed experimental results, this notion maintains widespread influence and has also motivated recent theories of defective multisensory integration in ASD. Impaired ASD performance in tasks involving classic random dot visual motion stimuli, corrupted by noise as a means to manipulate task difficulty, is frequently interpreted to support this notion of global integration deficits. By manipulating task difficulty independently of visual stimulus noise, here we test the hypothesis that heightened sensitivity to noise, rather than integration deficits, may characterize ASD. We found that although perception of visual motion through a cloud of dots was unimpaired without noise, the addition of stimulus noise significantly affected adolescents with ASD, more than controls. Strikingly, individuals with ASD demonstrated intact multisensory (visual–vestibular) integration, even in the presence of noise. Additionally, when vestibular motion was paired with pure visual noise, individuals with ASD demonstrated a different strategy than controls, marked by reduced flexibility. This result could be simulated by using attenuated (less reliable) and inflexible (not experience-dependent) Bayesian priors in ASD. These findings question widespread theories of impaired global and multisensory integration in ASD. Rather, they implicate increased sensitivity to sensory noise and less use of prior knowledge in ASD, suggesting increased reliance on incoming sensory information. PMID:25941373
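Editor's note: the abstract above describes simulating behavior with attenuated, inflexible Bayesian priors. The sketch below is a generic illustration of reliability-weighted Gaussian cue fusion with an optional prior, assuming independent Gaussian noise; the function name and parameter values are hypothetical and not the authors' model code.

import numpy as np

def fuse_gaussian(mu_vis, var_vis, mu_vest, var_vest, mu_prior=0.0, var_prior=np.inf):
    """Posterior mean/variance for independent Gaussian cues plus a Gaussian prior.
    Each source is weighted by its precision (1/variance); an 'attenuated' prior
    simply has a large variance and therefore little influence on the estimate."""
    precisions = np.array([1.0 / var_vis, 1.0 / var_vest, 1.0 / var_prior])
    means = np.array([mu_vis, mu_vest, mu_prior])
    post_var = 1.0 / precisions.sum()
    post_mean = post_var * (precisions * means).sum()
    return post_mean, post_var

# Noisy visual cue (heading 10 deg, high variance) plus a reliable vestibular cue (0 deg)
print(fuse_gaussian(mu_vis=10.0, var_vis=25.0, mu_vest=0.0, var_vest=4.0))
# Same cues combined with a straight-ahead prior of moderate reliability
print(fuse_gaussian(mu_vis=10.0, var_vis=25.0, mu_vest=0.0, var_vest=4.0,
                    mu_prior=0.0, var_prior=9.0))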
Cappagli, Giulia; Finocchietti, Sara; Baud-Bovy, Gabriel; Cocchi, Elena; Gori, Monica
2017-01-01
Since it has been shown that spatial development can be delayed in blind children, focused sensorimotor training that associates auditory and motor information might be used to prevent the risk of spatial-related developmental delays or impairments from an early age. With this aim, we proposed a new technological device based on the implicit link between action and perception: ABBI (Audio Bracelet for Blind Interaction) is an audio bracelet that produces a sound when a movement occurs, allowing the substitution of the visuo-motor association with a new audio-motor association. In this study, we assessed the effects of an extensive but entertaining sensorimotor training with ABBI on the development of spatial hearing in a group of seven 3- to 5-year-old children with congenital blindness (n = 2; light perception or no perception of light) or low vision (n = 5; visual acuity range 1.1–1.7 LogMAR). The training required the participants to play several spatial games individually and/or together with the psychomotor therapist 1 h per week for 3 months: the spatial games consisted of exercises meant to train their ability to associate auditory and motor-related signals from their body, in order to foster the development of multisensory processes. We measured spatial performance by asking participants to indicate the position of one single fixed (static condition) or moving (dynamic condition) sound source on a vertical sensorized surface. We found that spatial performance of congenitally blind but not low vision children was improved after the training, indicating that early interventions with the use of science-driven devices based on multisensory capabilities can provide consistent advancements in therapeutic interventions, improving the quality of life of children with visual disability. PMID:29097987
Multisensory integration of colors and scents: insights from bees and flowers.
Leonard, Anne S; Masek, Pavel
2014-06-01
Karl von Frisch's studies of bees' color vision and chemical senses opened a window into the perceptual world of a species other than our own. A century of subsequent research on bees' visual and olfactory systems has developed along two productive but independent trajectories, leaving the questions of how and why bees use these two senses in concert largely unexplored. Given current interest in multimodal communication and recently discovered interplay between olfaction and vision in humans and Drosophila, understanding multisensory integration in bees is an opportunity to advance knowledge across fields. Using a classic ethological framework, we formulate proximate and ultimate perspectives on bees' use of multisensory stimuli. We discuss interactions between scent and color in the context of bee cognition and perception, focusing on mechanistic and functional approaches, and we highlight opportunities to further explore the development and evolution of multisensory integration. We argue that although the visual and olfactory worlds of bees are perhaps the best-studied of any non-human species, research focusing on the interactions between these two sensory modalities is vitally needed.
The Slow Learner in Mathematics: Aids and Activities
ERIC Educational Resources Information Center
Maletsky, Evan M.
1973-01-01
Specific examples of effective use of multisensory aids are given. All can easily and inexpensively be made by the teacher or the students. Examples are grouped under the following major headings: number patterns, arithmetic skills, geometric concepts, algebraic concepts, and models. (LS)
Bidelman, Gavin M
2016-10-01
Musical training is associated with behavioral and neurophysiological enhancements in auditory processing for both musical and nonmusical sounds (e.g., speech). Yet, whether the benefits of musicianship extend beyond enhancements to auditory-specific skills and impact multisensory (e.g., audiovisual) processing has yet to be fully validated. Here, we investigated multisensory integration of auditory and visual information in musicians and nonmusicians using a double-flash illusion, whereby the presentation of multiple auditory stimuli (beeps) concurrent with a single visual object (flash) induces an illusory perception of multiple flashes. We parametrically varied the onset asynchrony between auditory and visual events (leads and lags of ±300 ms) to quantify participants' "temporal window" of integration, i.e., stimuli in which auditory and visual cues were fused into a single percept. Results show that musically trained individuals were both faster and more accurate at processing concurrent audiovisual cues than their nonmusician peers; nonmusicians had a higher susceptibility for responding to audiovisual illusions and perceived double flashes over an extended range of onset asynchronies compared to trained musicians. Moreover, temporal window estimates indicated that musicians' windows (<100 ms) were ~2-3× shorter than nonmusicians' (~200 ms), suggesting more refined multisensory integration and audiovisual binding. Collectively, findings indicate a more refined binding of auditory and visual cues in musically trained individuals. We conclude that experience-dependent plasticity of intensive musical experience extends beyond simple listening skills, improving multimodal processing and the integration of multiple sensory systems in a domain-general manner.
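Editor's note: the "temporal window" estimates mentioned above are typically summarized by fitting a curve to the probability of the illusory percept across audio-visual onset asynchronies. Below is a minimal sketch assuming a Gaussian profile and illustrative data; the width convention (full width at half maximum) and all numbers are for illustration and are not taken from the study.

import numpy as np
from scipy.optimize import curve_fit

# Illustrative probabilities of reporting the illusory double flash at each SOA (ms)
soa = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
p_illusion = np.array([0.10, 0.20, 0.55, 0.75, 0.85, 0.70, 0.50, 0.18, 0.08])

def gaussian(x, amp, mu, sigma, base):
    return base + amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

params, _ = curve_fit(gaussian, soa, p_illusion, p0=[0.8, 0.0, 100.0, 0.1])
amp, mu, sigma, base = params
# One common convention: report the full width at half maximum as the "window"
fwhm = 2.355 * abs(sigma)
print(f"centre = {mu:.0f} ms, width (FWHM) = {fwhm:.0f} ms")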
Senkowski, Daniel; Saint-Amour, Dave; Kelly, Simon P; Foxe, John J
2007-07-01
In everyday life, we continuously and effortlessly integrate the multiple sensory inputs from objects in motion. For instance, the sound and the visual percept of vehicles in traffic provide us with complementary information about the location and motion of vehicles. Here, we used high-density electrical mapping and local auto-regressive average (LAURA) source estimation to study the integration of multisensory objects in motion as reflected in event-related potentials (ERPs). A randomized stream of naturalistic multisensory-audiovisual (AV), unisensory-auditory (A), and unisensory-visual (V) "splash" clips (i.e., a drop falling and hitting a water surface) was presented among non-naturalistic abstract motion stimuli. The visual clip onset preceded the "splash" onset by 100 ms for multisensory stimuli. For naturalistic objects early multisensory integration effects beginning 120-140 ms after sound onset were observed over posterior scalp, with distributed sources localized to occipital cortex, temporal lobule, insular, and medial frontal gyrus (MFG). These effects, together with longer latency interactions (210-250 and 300-350 ms) found in a widespread network of occipital, temporal, and frontal areas, suggest that naturalistic objects in motion are processed at multiple stages of multisensory integration. The pattern of integration effects differed considerably for non-naturalistic stimuli. Unlike naturalistic objects, no early interactions were found for non-naturalistic objects. The earliest integration effects for non-naturalistic stimuli were observed 210-250 ms after sound onset including large portions of the inferior parietal cortex (IPC). As such, there were clear differences in the cortical networks activated by multisensory motion stimuli as a consequence of the semantic relatedness (or lack thereof) of the constituent sensory elements.
Extending Body Space in Immersive Virtual Reality: A Very Long Arm Illusion
Kilteni, Konstantina; Normand, Jean-Marie; Sanchez-Vives, Maria V.; Slater, Mel
2012-01-01
Recent studies have shown that a fake body part can be incorporated into human body representation through synchronous multisensory stimulation on the fake and corresponding real body part – the most famous example being the Rubber Hand Illusion. However, the extent to which gross asymmetries in the fake body can be assimilated remains unknown. Participants experienced, through a head-tracked stereo head-mounted display a virtual body coincident with their real body. There were 5 conditions in a between-groups experiment, with 10 participants per condition. In all conditions there was visuo-motor congruence between the real and virtual dominant arm. In an Incongruent condition (I), where the virtual arm length was equal to the real length, there was visuo-tactile incongruence. In four Congruent conditions there was visuo-tactile congruence, but the virtual arm lengths were either equal to (C1), double (C2), triple (C3) or quadruple (C4) the real ones. Questionnaire scores and defensive withdrawal movements in response to a threat showed that the overall level of ownership was high in both C1 and I, and there was no significant difference between these conditions. Additionally, participants experienced ownership over the virtual arm up to three times the length of the real one, and less strongly at four times the length. The illusion did decline, however, with the length of the virtual arm. In the C2–C4 conditions although a measure of proprioceptive drift positively correlated with virtual arm length, there was no correlation between the drift and ownership of the virtual arm, suggesting different underlying mechanisms between ownership and drift. Overall, these findings extend and enrich previous results that multisensory and sensorimotor information can reconstruct our perception of the body shape, size and symmetry even when this is not consistent with normal body proportions. PMID:22829891
Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception.
Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki
2016-10-13
Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs' response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs' early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception.
Olfactory-visual integration facilitates perception of subthreshold negative emotion.
Novak, Lucas R; Gitelman, Darren R; Schuyler, Brianna; Li, Wen
2015-10-01
A fast growing literature of multisensory emotion integration notwithstanding, the chemical senses, intimately associated with emotion, have been largely overlooked. Moreover, an ecologically highly relevant principle of "inverse effectiveness", rendering maximal integration efficacy with impoverished sensory input, remains to be assessed in emotion integration. Presenting minute, subthreshold negative (vs. neutral) cues in faces and odors, we demonstrated olfactory-visual emotion integration in improved emotion detection (especially among individuals with weaker perception of unimodal negative cues) and response enhancement in the amygdala. Moreover, while perceptual gain for visual negative emotion involved the posterior superior temporal sulcus/pSTS, perceptual gain for olfactory negative emotion engaged both the associative olfactory (orbitofrontal) cortex and amygdala. Dynamic causal modeling (DCM) analysis of fMRI timeseries further revealed connectivity strengthening among these areas during crossmodal emotion integration. That multisensory (but not low-level unisensory) areas exhibited both enhanced response and region-to-region coupling favors a top-down (vs. bottom-up) account for olfactory-visual emotion integration. Current findings thus confirm the involvement of multisensory convergence areas, while highlighting unique characteristics of olfaction-related integration. Furthermore, successful crossmodal binding of subthreshold aversive cues not only supports the principle of "inverse effectiveness" in emotion integration but also accentuates the automatic, unconscious quality of crossmodal emotion synthesis. Copyright © 2015 Elsevier Ltd. All rights reserved.
Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task
Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.
2016-01-01
Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829
Women process multisensory emotion expressions more efficiently than men.
Collignon, O; Girard, S; Gosselin, F; Saint-Amour, D; Lepore, F; Lassonde, M
2010-01-01
Despite claims in the popular press, experiments investigating whether female observers are more efficient than male observers at processing expressions of emotion have produced inconsistent findings. In the present study, participants were asked to categorize fear and disgust expressions displayed auditorily, visually, or audio-visually. Results revealed an advantage of women in all the conditions of stimulus presentation. We also observed more nonlinear probabilistic summation in the bimodal conditions in female than male observers, indicating greater neural integration of different sensory-emotional information. These findings indicate robust differences between genders in the multisensory perception of emotion expression.
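Editor's note: "probabilistic summation" of the kind referred to above is usually assessed against the accuracy expected if the two unisensory channels were statistically independent. The sketch below shows that baseline comparison with made-up accuracies; it is a generic check, not the authors' analysis.

def independent_summation(p_auditory: float, p_visual: float) -> float:
    """Accuracy predicted if the two unisensory channels are probabilistically
    independent: the bimodal trial succeeds unless both channels fail."""
    return 1.0 - (1.0 - p_auditory) * (1.0 - p_visual)

p_a, p_v, p_av = 0.70, 0.65, 0.92  # illustrative accuracies
predicted = independent_summation(p_a, p_v)
print(f"predicted {predicted:.3f} vs observed {p_av:.3f}")
print("exceeds independent summation" if p_av > predicted else "within independent summation")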
Zamora-López, Gorka; Zhou, Changsong; Kurths, Jürgen
2009-01-01
Sensory stimuli entering the nervous system follow particular paths of processing, typically separated (segregated) from the paths of other modal information. However, sensory perception, awareness and cognition emerge from the combination of information (integration). The corticocortical networks of cats and macaque monkeys display three prominent characteristics: (i) modular organisation (facilitating the segregation), (ii) abundant alternative processing paths and (iii) the presence of highly connected hubs. Here, we study in detail the organisation and potential function of the cortical hubs by graph analysis and information theoretical methods. We find that the cortical hubs form a spatially delocalised, but topologically central module with the capacity to integrate multisensory information in a collaborative manner. With this, we resolve the underlying anatomical substrate that supports the simultaneous capacity of the cortex to segregate and to integrate multisensory information. PMID:20428515
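Editor's note: the hub analysis described above can be illustrated with standard graph metrics. The sketch below builds a toy modular network with two cross-module connector nodes and ranks nodes by degree and betweenness centrality using networkx; it is a schematic illustration, not the cat/macaque connectome analysis of the study.

import networkx as nx

# Toy modular network: three densely connected modules plus two "hub" nodes
# that link across modules (illustrative, not a real cortical connectome).
G = nx.random_partition_graph([10, 10, 10], p_in=0.6, p_out=0.02, seed=0)
hubs = [30, 31]
G.add_nodes_from(hubs)
for hub in hubs:
    for module_start in (0, 10, 20):
        for node in range(module_start, module_start + 4):
            G.add_edge(hub, node)

degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)

# Hub candidates: nodes with unusually high degree and betweenness
ranked = sorted(G.nodes, key=lambda n: (degree[n], betweenness[n]), reverse=True)
for n in ranked[:5]:
    print(n, degree[n], round(betweenness[n], 3))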
Early and late beta-band power reflect audiovisual perception in the McGurk illusion.
Roa Romero, Yadira; Senkowski, Daniel; Keil, Julian
2015-04-01
The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the perception of congruent and incongruent audiovisual speech. We compared the cortical activity elicited by objectively congruent syllables with incongruent audiovisual stimuli. Importantly, the latter elicited a subjectively congruent percept: the McGurk illusion. We found that early event-related responses (N1) to audiovisual stimuli were reduced during the perception of the McGurk illusion compared with congruent stimuli. Most interestingly, our study showed a stronger poststimulus suppression of beta-band power (13-30 Hz) at short (0-500 ms) and long (500-800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli. Our study demonstrates that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages. Our results provide evidence that beta-band suppression at early stages reflects stronger stimulus processing in the McGurk illusion. Moreover, stronger late beta-band suppression in the McGurk illusion indicates the resolution of incongruent physical audiovisual input and the formation of a coherent, illusory multisensory percept. Copyright © 2015 the American Physiological Society.
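Editor's note: one common way to obtain a beta-band (13-30 Hz) power time course of the kind analyzed above is band-pass filtering followed by a Hilbert envelope. The sketch below assumes a single channel of synthetic data and illustrative window boundaries; the authors' actual pipeline (for example, wavelet-based) may well differ.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                              # sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)         # 1 s of post-stimulus data
eeg = np.random.randn(t.size)           # stand-in for one EEG channel

# 4th-order Butterworth band-pass for the beta band (13-30 Hz)
b, a = butter(4, [13.0 / (fs / 2), 30.0 / (fs / 2)], btype="band")
beta = filtfilt(b, a, eeg)

power = np.abs(hilbert(beta)) ** 2      # instantaneous beta power envelope

# Mean power in the early (0-500 ms) and late (500-800 ms) windows
early = power[(t >= 0.0) & (t < 0.5)].mean()
late = power[(t >= 0.5) & (t < 0.8)].mean()
print(early, late)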
Jacklin, Derek L; Goel, Amit; Clementino, Kyle J; Hall, Alexander W M; Talpos, John C; Winters, Boyer D
2012-01-01
Schizophrenia is a complex and debilitating disorder, characterized by positive, negative, and cognitive symptoms. Among the cognitive deficits observed in patients with schizophrenia, recent work has indicated abnormalities in multisensory integration, a process that is important for the formation of comprehensive environmental percepts and for the appropriate guidance of behavior. Very little is known about the neural bases of such multisensory integration deficits, partly because of the lack of viable behavioral tasks to assess this process in animal models. In this study, we used our recently developed rodent cross-modal object recognition (CMOR) task to investigate multisensory integration functions in rats treated sub-chronically with one of two N-methyl-D-aspartate receptor (NMDAR) antagonists, MK-801, or ketamine; such treatment is known to produce schizophrenia-like symptoms. Rats treated with the NMDAR antagonists were impaired on the standard spontaneous object recognition (SOR) task, unimodal (tactile or visual only) versions of SOR, and the CMOR task with intermediate to long retention delays between acquisition and testing phases, but they displayed a selective CMOR task deficit when mnemonic demand was minimized. This selective impairment in multisensory information processing was dose-dependently reversed by acute systemic administration of nicotine. These findings suggest that persistent NMDAR hypofunction may contribute to the multisensory integration deficits observed in patients with schizophrenia and highlight the valuable potential of the CMOR task to facilitate further systematic investigation of the neural bases of, and potential treatments for, this hitherto overlooked aspect of cognitive dysfunction in schizophrenia. PMID:22669170
Guterstam, Arvid; Brozzoli, Claudio; Ehrsson, H. Henrik
2013-01-01
The perception of our limbs in space is built upon the integration of visual, tactile, and proprioceptive signals. Accumulating evidence suggests that these signals are combined in areas of premotor, parietal, and cerebellar cortices. However, it remains to be determined whether neuronal populations in these areas integrate hand signals according to basic temporal and spatial congruence principles of multisensory integration. Here, we developed a setup based on advanced 3D video technology that allowed us to manipulate the spatiotemporal relationships of visuotactile (VT) stimuli delivered on a healthy human participant's real hand during fMRI and investigate the ensuing neural and perceptual correlates. Our experiments revealed two novel findings. First, we found responses in premotor, parietal, and cerebellar regions that were dependent upon the spatial and temporal congruence of VT stimuli. This multisensory integration effect required a simultaneous match between the seen and felt postures of the hand, which suggests that congruent visuoproprioceptive signals from the upper limb are essential for successful VT integration. Second, we observed that multisensory conflicts significantly disrupted the default feeling of ownership of the seen real limb, as indexed by complementary subjective, psychophysiological, and BOLD measures. The degree to which self-attribution was impaired could be predicted from the attenuation of neural responses in key multisensory areas. These results elucidate the neural bases of the integration of multisensory hand signals according to basic spatiotemporal principles and demonstrate that the disintegration of these signals leads to “disownership” of the seen real hand. PMID:23946393
Microcontroller based fibre-optic visual presentation system for multisensory neuroimaging.
Kurniawan, Veldri; Klemen, Jane; Chambers, Christopher D
2011-10-30
Presenting visual stimuli in physical 3D space during fMRI experiments carries significant technical challenges. Certain types of multisensory visuotactile experiments and visuomotor tasks require presentation of visual stimuli in peripersonal space, which cannot be accommodated by ordinary projection screens or binocular goggles. However, light points produced by a group of LEDs can be transmitted through fibre-optic cables and positioned anywhere inside the MRI scanner. Here we describe the design and implementation of a microcontroller-based programmable digital device for controlling fibre-optically transmitted LED lights from a PC. The main feature of this device is the ability to independently control the colour, brightness, and timing of each LED. Moreover, the device was designed in a modular and extensible way, which enables easy adaptation for various experimental paradigms. The device was tested and validated in three fMRI experiments involving basic visual perception, a simple colour discrimination task, and a blocked multisensory visuo-tactile task. The results revealed significant lateralized activation in occipital cortex of all participants, a reliable response in ventral occipital areas to colour stimuli elicited by the device, and strong activations in multisensory brain regions in the multisensory task. Overall, these findings confirm the suitability of this device for presenting complex fibre-optic visual and cross-modal stimuli inside the scanner. Copyright © 2011 Elsevier B.V. All rights reserved.
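Editor's note: the command protocol of the device described above is not specified here, so the sketch below is a purely hypothetical PC-side example of driving per-LED colour, brightness, and timing over a serial link with pyserial. The command strings, port name, and baud rate are assumptions for illustration only, not the authors' firmware interface.

import time
import serial  # pyserial

# Hypothetical protocol: "SET <led> <r> <g> <b> <brightness>\n" -- the real device's
# command set is not described here, so these strings are illustrative only.
port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)

def set_led(led_id: int, r: int, g: int, b: int, brightness: int) -> None:
    port.write(f"SET {led_id} {r} {g} {b} {brightness}\n".encode("ascii"))

# Flash LED 3 red at full brightness for 200 ms, then turn it off
set_led(3, 255, 0, 0, 255)
time.sleep(0.2)
set_led(3, 0, 0, 0, 0)
port.close()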
Multisensory Integration in Non-Human Primates during a Sensory-Motor Task
Lanz, Florian; Moret, Véronique; Rouiller, Eric Michel; Loquet, Gérard
2013-01-01
Every day our central nervous system receives inputs via several sensory modalities, processes them, and integrates the information in order to produce suitable behavior. Remarkably, such multisensory integration brings all of this information into a unified percept. One approach to investigating this property is to show that perception is better and faster when multimodal stimuli are used compared with unimodal stimuli. This forms the first part of the present study, conducted in a non-human primate model (n = 2) engaged in a detection sensory-motor task in which visual and auditory stimuli were displayed individually or simultaneously. The measured parameters were the reaction time (RT) between stimulus onset and onset of arm movement, the percentages of correct and erroneous responses, and the evolution of these parameters with training. As expected, RTs were shorter when the subjects were exposed to combined stimuli. The gains for both subjects were around 20 and 40 ms compared with the auditory and visual stimulus alone, respectively. Moreover, the number of correct responses increased in response to bimodal stimuli. We interpreted this multisensory advantage in terms of a redundant signal effect, which decreases perceptual ambiguity, increases the speed of stimulus detection, and improves performance accuracy. The second part of the study presents single-unit recordings from the premotor cortex (PM) of the same subjects during the sensory-motor task. Response patterns to sensory/multisensory stimulation are documented and the proportions of specific response types are reported. Characterization of bimodal neurons indicates a mechanism of audio-visual integration, possibly through a decrease of inhibition. Nevertheless, the neural processing by which PM, as a polysensory association cortical area, produces faster motor responses remains unclear. PMID:24319421
Multisensory emotion perception in congenitally, early, and late deaf CI users.
Fengler, Ineke; Nava, Elena; Villwock, Agnes K; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte
2017-01-01
Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences.
Perceived object stability depends on multisensory estimates of gravity.
Barnett-Cowan, Michael; Fleming, Roland W; Singh, Manish; Bülthoff, Heinrich H
2011-04-27
How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information. In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity). Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.
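Editor's note: the geometric rule behind the critical-angle judgments above is that an object topples once its gravity-projected centre of mass passes the edge of its support. Below is a minimal sketch of that relation with illustrative dimensions; tilting the reference direction for gravity, as the study's body-tilt manipulation effectively does, shifts the angle at which this criterion is met.

import math

def critical_angle_deg(com_height: float, half_base: float) -> float:
    """Tilt angle at which the centre of mass passes over the support edge:
    beyond this angle the object falls, below it the object rights itself."""
    return math.degrees(math.atan2(half_base, com_height))

# Illustrative objects: a squat block vs. a tall, narrow one (same units)
print(critical_angle_deg(com_height=5.0, half_base=5.0))   # ~45 deg, stable-looking
print(critical_angle_deg(com_height=10.0, half_base=2.0))  # ~11 deg, easily toppled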
The sense of body ownership relaxes temporal constraints for multisensory integration.
Maselli, Antonella; Kilteni, Konstantina; López-Moliner, Joan; Slater, Mel
2016-08-03
Experimental work on body ownership illusions showed how simple multisensory manipulation can generate the illusory experience of an artificial limb as being part of the own-body. This work highlighted how own-body perception relies on a plastic brain representation emerging from multisensory integration. The flexibility of this representation is reflected in the short-term modulations of physiological states and perceptual processing observed during these illusions. Here, we explore the impact of ownership illusions on the temporal dimension of multisensory integration. We show that, during the illusion, the temporal window for integrating touch on the physical body with touch seen on a virtual body representation increases relative to integration with visual events seen close to but separated from the virtual body. We show that this effect is mediated by the ownership illusion. Crucially, the temporal window for visuotactile integration was positively correlated with participants' scores rating the illusory experience of owning the virtual body and touching the object seen in contact with it. Our results corroborate the recently proposed causal inference mechanism for illusory body ownership. As a novelty, they show that the ensuing illusory causal binding between stimuli from the real and fake body relaxes constraints for the integration of bodily signals.
Perceptuo-motor compatibility governs multisensory integration in bimanual coordination dynamics.
Zelic, Gregory; Mottet, Denis; Lagarde, Julien
2016-02-01
The brain has the remarkable ability to bind together inputs from different sensory origin into a coherent percept. Behavioral benefits can result from such ability, e.g., a person typically responds faster and more accurately to cross-modal stimuli than to unimodal stimuli. To date, it is, however, largely unknown whether such multisensory benefits, shown for discrete reactive behaviors, generalize to the continuous coordination of movements. The present study addressed multisensory integration from the perspective of bimanual coordination dynamics, where the perceptual activity no longer triggers a single response but continuously guides the motor action. The task consisted in coordinating anti-symmetrically the continuous flexion-extension of the index fingers, while synchronizing with an external pacer. Three different configurations of metronome were tested, for which we examined whether a cross-modal pacing (audio-tactile beats) improved the stability of the coordination in comparison with unimodal pacing condition (auditory or tactile beats). We found a more stable bimanual coordination for cross-modal pacing, but only when the metronome configuration directly matched the anti-symmetric coordination pattern. We conclude that multisensory integration can benefit the continuous coordination of movements; however, this is constrained by whether the perceptual and motor activities match in space and time.
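Editor's note: the stability of bimanual coordination of the kind studied above is commonly quantified by the variability of the continuous relative phase between the two effectors. The sketch below applies the Hilbert transform to synthetic anti-phase signals and reports a circular standard deviation; it illustrates the measure, not the authors' analysis.

import numpy as np
from scipy.signal import hilbert

fs = 200.0
t = np.arange(0, 10.0, 1.0 / fs)
# Illustrative anti-phase finger oscillations at 1.5 Hz with a little phase drift
left = np.sin(2 * np.pi * 1.5 * t + 0.1 * np.random.randn(t.size).cumsum() / fs)
right = np.sin(2 * np.pi * 1.5 * t + np.pi)

phase_left = np.angle(hilbert(left))
phase_right = np.angle(hilbert(right))
rel_phase = np.angle(np.exp(1j * (phase_left - phase_right)))  # wrapped to [-pi, pi]

# Circular SD of relative phase: lower values indicate more stable coordination
R = np.abs(np.mean(np.exp(1j * rel_phase)))
circ_sd_deg = np.degrees(np.sqrt(-2.0 * np.log(R)))
print(f"mean relative phase = {np.degrees(np.angle(np.mean(np.exp(1j * rel_phase)))):.1f} deg, "
      f"circular SD = {circ_sd_deg:.1f} deg")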
How prior expectations shape multisensory perception.
Gau, Remi; Noppeney, Uta
2016-01-01
The brain generates a representation of our environment by integrating signals from a common source, but segregating signals from different sources. This fMRI study investigated how the brain arbitrates between perceptual integration and segregation based on top-down congruency expectations and bottom-up stimulus-bound congruency cues. Participants were presented audiovisual movies of phonologically congruent, incongruent or McGurk syllables that can be integrated into an illusory percept (e.g. "ti" percept for visual «ki» with auditory /pi/). They reported the syllable they perceived. Critically, we manipulated participants' top-down congruency expectations by presenting McGurk stimuli embedded in blocks of congruent or incongruent syllables. Behaviorally, participants were more likely to fuse audiovisual signals into an illusory McGurk percept in congruent than incongruent contexts. At the neural level, the left inferior frontal sulcus (lIFS) showed increased activations for bottom-up incongruent relative to congruent inputs. Moreover, lIFS activations were increased for physically identical McGurk stimuli, when participants segregated the audiovisual signals and reported their auditory percept. Critically, this activation increase for perceptual segregation was amplified when participants expected audiovisually incongruent signals based on prior sensory experience. Collectively, our results demonstrate that the lIFS combines top-down prior (in)congruency expectations with bottom-up (in)congruency cues to arbitrate between multisensory integration and segregation. Copyright © 2015 Elsevier Inc. All rights reserved.
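Editor's note: the arbitration between integration and segregation described above is often formalized as Bayesian causal inference over whether the auditory and visual signals share a common cause. The sketch below implements the standard Gaussian version of that computation with made-up parameters; it is a textbook-style illustration, not the authors' model.

import numpy as np
from scipy.stats import norm

def posterior_common_cause(x_a, x_v, sigma_a, sigma_v, sigma_s, p_common):
    """Posterior probability that auditory and visual samples x_a, x_v arose from a
    single source, assuming Gaussian noise and a zero-mean Gaussian source prior."""
    # Likelihood of the pair under one common source (source integrated out)
    var_c = sigma_a**2 * sigma_v**2 + sigma_a**2 * sigma_s**2 + sigma_v**2 * sigma_s**2
    like_common = np.exp(-0.5 * ((x_a - x_v)**2 * sigma_s**2
                                 + x_a**2 * sigma_v**2
                                 + x_v**2 * sigma_a**2) / var_c) / (2 * np.pi * np.sqrt(var_c))
    # Likelihood under two independent sources
    like_indep = (norm.pdf(x_a, 0, np.sqrt(sigma_a**2 + sigma_s**2))
                  * norm.pdf(x_v, 0, np.sqrt(sigma_v**2 + sigma_s**2)))
    return like_common * p_common / (like_common * p_common + like_indep * (1 - p_common))

# A congruent-feeling pair vs. a clearly discrepant pair (arbitrary internal units)
print(posterior_common_cause(x_a=0.2, x_v=0.4, sigma_a=1.0, sigma_v=0.5, sigma_s=2.0, p_common=0.5))
print(posterior_common_cause(x_a=-2.0, x_v=2.5, sigma_a=1.0, sigma_v=0.5, sigma_s=2.0, p_common=0.5))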
A link between individual differences in multisensory speech perception and eye movements
Gurler, Demet; Doyle, Nathan; Walker, Edgar; Magnotti, John; Beauchamp, Michael
2015-01-01
The McGurk effect is an illusion in which visual speech information dramatically alters the perception of auditory speech. However, there is a high degree of individual variability in how frequently the illusion is perceived: some individuals almost always perceive the McGurk effect, while others rarely do. Another axis of individual variability is the pattern of eye movements made while viewing a talking face: some individuals often fixate the mouth of the talker, while others rarely do. Since the talker's mouth carries the visual speech information necessary to induce the McGurk effect, we hypothesized that individuals who frequently perceive the McGurk effect should spend more time fixating the talker's mouth. We used infrared eye tracking to study eye movements as 40 participants viewed audiovisual speech. Frequent perceivers of the McGurk effect were more likely to fixate the mouth of the talker, and there was a significant correlation between McGurk frequency and mouth looking time. The noisy encoding of disparity model of McGurk perception showed that individuals who frequently fixated the mouth had lower sensory noise and higher disparity thresholds than those who rarely fixated the mouth. Differences in eye movements when viewing the talker's face may be an important contributor to interindividual differences in multisensory speech perception. PMID:25810157
Multisensory integration and internal models for sensing gravity effects in primates.
Lacquaniti, Francesco; Bosco, Gianfranco; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka
2014-01-01
Gravity is crucial for spatial perception, postural equilibrium, and movement generation. The vestibular apparatus is the main sensory system involved in monitoring gravity. Hair cells in the vestibular maculae respond to gravitoinertial forces, but they cannot distinguish between linear accelerations and changes of head orientation relative to gravity. The brain deals with this sensory ambiguity (which can cause some lethal airplane accidents) by combining several cues with the otolith signals: angular velocity signals provided by the semicircular canals, proprioceptive signals from muscles and tendons, visceral signals related to gravity, and visual signals. In particular, vision provides both static and dynamic signals about body orientation relative to the vertical, but it poorly discriminates arbitrary accelerations of moving objects. However, we are able to visually detect the specific acceleration of gravity since early infancy. This ability depends on the fact that gravity effects are stored in brain regions which integrate visual, vestibular, and neck proprioceptive signals and combine this information with an internal model of gravity effects.
Recalibration of the Multisensory Temporal Window of Integration Results from Changing Task Demands
Mégevand, Pierre; Molholm, Sophie; Nayak, Ashabari; Foxe, John J.
2013-01-01
The notion of the temporal window of integration, when applied in a multisensory context, refers to the breadth of the interval across which the brain perceives two stimuli from different sensory modalities as synchronous. It maintains a unitary perception of multisensory events despite physical and biophysical timing differences between the senses. The boundaries of the window can be influenced by attention and past sensory experience. Here we examined whether task demands could also influence the multisensory temporal window of integration. We varied the stimulus onset asynchrony between simple, short-lasting auditory and visual stimuli while participants performed two tasks in separate blocks: a temporal order judgment task that required the discrimination of subtle auditory-visual asynchronies, and a reaction time task to the first incoming stimulus irrespective of its sensory modality. We defined the temporal window of integration as the range of stimulus onset asynchronies where performance was below 75% in the temporal order judgment task, as well as the range of stimulus onset asynchronies where responses showed multisensory facilitation (race model violation) in the reaction time task. In 5 of 11 participants, we observed audio-visual stimulus onset asynchronies where reaction time was significantly accelerated (indicating successful integration in this task) while performance was accurate in the temporal order judgment task (indicating successful segregation in that task). This dissociation suggests that in some participants, the boundaries of the temporal window of integration can adaptively recalibrate in order to optimize performance according to specific task demands. PMID:23951203
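Editor's note: the "race model violation" criterion used above compares the multisensory reaction-time distribution against the bound given by the sum of the unisensory distributions (Miller's inequality). Below is a minimal sketch with simulated reaction times; the distributions and time grid are illustrative only, not the study's data.

import numpy as np

rng = np.random.default_rng(0)
# Illustrative RT samples (ms) for auditory, visual, and audiovisual trials
rt_a = rng.normal(320, 40, 200)
rt_v = rng.normal(350, 45, 200)
rt_av = rng.normal(280, 35, 200)   # faster than either unisensory condition

def ecdf(samples, t):
    """Empirical cumulative distribution P(RT <= t) evaluated on a grid."""
    return np.mean(samples[:, None] <= t, axis=0)

t_grid = np.linspace(150, 500, 71)
bound = np.clip(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 0, 1)  # race-model bound
violation = ecdf(rt_av, t_grid) - bound

# Positive values mean the multisensory CDF exceeds the race-model bound,
# i.e. responses are faster than any race between independent unisensory processes.
print(f"max violation = {violation.max():.3f} at t = {t_grid[violation.argmax()]:.0f} ms")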
Takagi, Sachiko; Hiramatsu, Saori; Tabei, Ken-ichi; Tanaka, Akihiro
2015-01-01
Previous studies have shown that perceptions of facial and vocal affective expressions interact with each other. Facial expressions usually dominate vocal expressions when we perceive the emotions of face–voice stimuli. In most of these studies, participants were instructed to pay attention to the face or voice. Few studies compared the perceived emotions with and without specific instructions regarding the modality to which attention should be directed. Also, these studies used combinations of the face and voice which express two opposing emotions, which limits the generalizability of the findings. The purpose of this study is to examine whether emotion perception is modulated by instructions to pay attention to the face or voice using the six basic emotions. We also examine the modality dominance between the face and voice for each emotion category. Before the experiment, we recorded faces and voices which express the six basic emotions and orthogonally combined these faces and voices. Consequently, the emotional valence of visual and auditory information was either congruent or incongruent. In the experiment, there were unisensory and multisensory sessions. The multisensory session was divided into three blocks according to whether an instruction was given to pay attention to a given modality (face attention, voice attention, and no instruction). Participants judged whether the speaker expressed happiness, sadness, anger, fear, disgust, or surprise. Our results revealed that instructions to pay attention to one modality and congruency of the emotions between modalities modulated the modality dominance, and the modality dominance differed for each emotion category. In particular, the modality dominance for anger changed according to each instruction. Analyses also revealed that the modality dominance suggested by the congruency effect can be explained in terms of the facilitation effect and the interference effect. PMID:25698945
Medrea, Ioana
2013-01-01
The mouse has become an important model system for studying the cellular basis of learning and coding of heading by the vestibular system. Here we recorded from single neurons in the vestibular nuclei to understand how vestibular pathways encode self-motion under natural conditions, during which proprioceptive and motor-related signals as well as vestibular inputs provide feedback about an animal's movement through the world. We recorded neuronal responses in alert behaving mice focusing on a group of neurons, termed vestibular-only cells, that are known to control posture and project to higher-order centers. We found that the majority (70%, n = 21/30) of neurons were bimodal, in that they responded robustly to passive stimulation of proprioceptors as well as passive stimulation of the vestibular system. Additionally, the linear summation of a given neuron's vestibular and neck sensitivities predicted well its responses when both stimuli were applied simultaneously. In contrast, neuronal responses were suppressed when the same motion was actively generated, with the one striking exception that the activity of bimodal neurons similarly and robustly encoded head on body position in all conditions. Our results show that proprioceptive and motor-related signals are combined with vestibular information at the first central stage of vestibular processing in mice. We suggest that these results have important implications for understanding the multisensory integration underlying accurate postural control and the neural representation of directional heading in the head direction cell network of mice. PMID:24089394
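A minimal sketch of the linear-summation test described in this abstract, using invented firing rates rather than the recorded data: the bimodal prediction is simply the sum of a neuron's unimodal vestibular and neck-proprioceptive responses, compared against the response measured when both stimuli are applied together.

```python
import numpy as np

# Illustrative baseline-subtracted firing rates (spikes/s) across stimulus phases
resp_vestibular = np.array([4.0, 7.5, 3.0, -2.0, -5.5, -1.0])   # vestibular stimulation alone
resp_neck       = np.array([-1.5, -3.0, -0.5, 1.0, 2.5, 0.5])   # neck proprioception alone
resp_combined   = np.array([2.8, 4.9, 2.2, -1.2, -3.4, -0.3])   # both stimuli applied simultaneously

predicted = resp_vestibular + resp_neck            # linear-summation model
r = np.corrcoef(predicted, resp_combined)[0, 1]    # goodness of the summation prediction
print(predicted)
print(round(r, 2))
```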
Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection
Denison, Rachel N.; Driver, Jon; Ruff, Christian C.
2013-01-01
Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067
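One way to make the idea of "shared temporal structure rather than mere coincidence" concrete is to correlate two event streams across a range of crossmodal lags. The Python sketch below is illustrative only; the stream statistics and the lag convention are assumptions, not the study's stimuli or analysis.

```python
import numpy as np

def stream(rate_hz, duration_s, dt=0.001, rng=None):
    """A stochastic (Poisson-like) binary event stream sampled every dt seconds."""
    rng = rng or np.random.default_rng()
    return (rng.random(int(duration_s / dt)) < rate_hz * dt).astype(float)

def lagged_correlation(a, v, max_lag_s=0.2, dt=0.001):
    """Correlation between an auditory and a visual stream at crossmodal lags up to max_lag_s."""
    lags = np.arange(-int(max_lag_s / dt), int(max_lag_s / dt) + 1)
    corr = np.array([np.corrcoef(a, np.roll(v, k))[0, 1] for k in lags])
    return lags * dt, corr

rng = np.random.default_rng(1)
auditory = stream(8, 2.0, rng=rng)     # 8 events/s for 2 s
visual = np.roll(auditory, 120)        # identical temporal structure, delayed by 120 ms
lags, corr = lagged_correlation(auditory, visual)
print(lags[np.argmax(corr)])           # -0.12 under this sign convention: visual trails audio by ~120 ms
```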
Cloke, Jacob M; Nguyen, Robin; Chung, Beryl Y T; Wasserman, David I; De Lisio, Stephanie; Kim, Jun Chul; Bailey, Craig D C; Winters, Boyer D
2016-12-14
Atypical multisensory integration is an understudied cognitive symptom in schizophrenia. Procedures to evaluate multisensory integration in rodent models are lacking. We developed a novel multisensory object oddity (MSO) task to assess multisensory integration in ketamine-treated rats, a well established model of schizophrenia. Ketamine-treated rats displayed a selective MSO task impairment with tactile-visual and olfactory-visual sensory combinations, whereas basic unisensory perception was unaffected. Orbitofrontal cortex (OFC) administration of nicotine or ABT-418, an α4β2 nicotinic acetylcholine receptor (nAChR) agonist, normalized MSO task performance in ketamine-treated rats and this effect was blocked by GABA-A receptor antagonism. GABAergic currents were also decreased in OFC of ketamine-treated rats and were normalized by activation of α4β2 nAChRs. Furthermore, parvalbumin (PV) immunoreactivity was decreased in the OFC of ketamine-treated rats. Accordingly, silencing of PV interneurons in OFC of PV-Cre mice using DREADDs (Designer Receptors Exclusively Activated by Designer Drugs) selectively impaired MSO task performance and this was reversed by ABT-418. Likewise, clozapine-N-oxide-induced inhibition of PV interneurons in brain slices was reversed by activation of α4β2 nAChRs. These findings strongly imply a role for prefrontal GABAergic transmission in the integration of multisensory object features, a cognitive process with relevance to schizophrenia. Accordingly, nAChR agonism, which improves various facets of cognition in schizophrenia, reversed the severe MSO task impairment in this study and appears to do so via a GABAergic mechanism. Interactions between GABAergic and nAChR receptor systems warrant further investigation for potential therapeutic applications. The novel behavioral procedure introduced in the current study is acutely sensitive to schizophrenia-relevant cognitive impairment and should prove highly valuable for such research. Adaptive behaviors are driven by integration of information from different sensory modalities. Multisensory integration is disrupted in patients with schizophrenia, but little is known about the neural basis of this cognitive symptom. Development and validation of multisensory integration tasks for animal models is essential given the strong link between functional outcome and cognitive impairment in schizophrenia. We present a novel multisensory object oddity procedure that detects selective multisensory integration deficits in a rat model of schizophrenia using various combinations of sensory modalities. Moreover, converging data are consistent with a nicotinic-GABAergic mechanism of multisensory integration in the prefrontal cortex, results with strong clinical relevance to the study of cognitive impairment and treatment in schizophrenia. Copyright © 2016 the authors 0270-6474/16/3612571-16$15.00/0.
Aural-Visual-Kinesthetic Imagery in Motion Media.
ERIC Educational Resources Information Center
Allan, David W.
Motion media refers to film, television, and other forms of kinesthetic media including computerized multimedia technologies and virtual reality. Imagery reproduced by motion media carries a multisensory amalgamation of mental experiences. The blending of these experiences phenomenologically intersects with the reality and perception of words,…
Brooks, Cassandra J.; Chan, Yu Man; Anderson, Andrew J.; McKendrick, Allison M.
2018-01-01
Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information. PMID:29867415
Werner, Sebastian; Noppeney, Uta
2010-08-01
Merging information from multiple senses provides a more reliable percept of our environment. Yet, little is known about where and how various sensory features are combined within the cortical hierarchy. Combining functional magnetic resonance imaging and psychophysics, we investigated the neural mechanisms underlying integration of audiovisual object features. Subjects categorized or passively perceived audiovisual object stimuli with the informativeness (i.e., degradation) of the auditory and visual modalities being manipulated factorially. Controlling for low-level integration processes, we show higher level audiovisual integration selectively in the superior temporal sulci (STS) bilaterally. The multisensory interactions were primarily subadditive and even suppressive for intact stimuli but turned into additive effects for degraded stimuli. Consistent with the inverse effectiveness principle, auditory and visual informativeness determine the profile of audiovisual integration in STS similarly to the influence of physical stimulus intensity in the superior colliculus. Importantly, when holding stimulus degradation constant, subjects' audiovisual behavioral benefit predicts their multisensory integration profile in STS: only subjects that benefit from multisensory integration exhibit superadditive interactions, while those that do not benefit show suppressive interactions. In conclusion, superadditive and subadditive integration profiles in STS are functionally relevant and related to behavioral indices of multisensory integration with superadditive interactions mediating successful audiovisual object categorization.
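The additive-factors logic behind these integration profiles can be stated compactly. The snippet below is a schematic classification rule assuming hypothetical response estimates (e.g., regression betas) for audiovisual, auditory-only, and visual-only conditions; the thresholds simply mirror the definitions of superadditive, subadditive, and suppressive interactions used above.

```python
def integration_profile(beta_av, beta_a, beta_v):
    """Classify an audiovisual interaction from (hypothetical) response estimates:
    superadditive if AV > A + V, suppressive if AV < max(A, V), otherwise subadditive/additive."""
    additive = beta_a + beta_v
    if beta_av > additive:
        return "superadditive"
    if beta_av < max(beta_a, beta_v):
        return "suppressive"
    return "subadditive" if beta_av < additive else "additive"

print(integration_profile(beta_av=1.6, beta_a=0.9, beta_v=0.5))  # superadditive example
print(integration_profile(beta_av=1.1, beta_a=0.9, beta_v=0.5))  # subadditive example
```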
Using visuo-kinetic virtual reality to induce illusory spinal movement: the MoOVi Illusion
Smith, Ross T.; Hunter, Estin V.; Davis, Miles G.; Sterling, Michele; Moseley, G. Lorimer
2017-01-01
Background Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors can’t be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. Method In a within-subjects repeated measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%–200%—the Motor Offset Visual Illusion (MoOVi)—thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties, was also investigated. Results Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or three-dimensionality of the scene. Discussion Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement, yet perceive a large one. The MoOVi technique tested here has clear potential for assessment and therapy of people with spinal pain. PMID:28243537
Using visuo-kinetic virtual reality to induce illusory spinal movement: the MoOVi Illusion.
Harvie, Daniel S; Smith, Ross T; Hunter, Estin V; Davis, Miles G; Sterling, Michele; Moseley, G Lorimer
2017-01-01
Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors can't be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. In a within-subjects repeated measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%-200% (the Motor Offset Visual Illusion, MoOVi), thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties, was also investigated. Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or three-dimensionality of the scene. Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement, yet perceive a large one. The MoOVi technique tested here has clear potential for assessment and therapy of people with spinal pain.
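The offset manipulation and the drift measure reduce to simple arithmetic. The sketch below is a hypothetical illustration of the gain applied to visual feedback and of the signed discrepancy between perceived and actual facing direction; the function names and example values are assumptions, not the study's implementation.

```python
def moovi_feedback(actual_rotation_deg, visual_gain):
    """Rotation shown in the head-mounted display when visual feedback is offset
    by a gain between 0.5 (50%) and 2.0 (200%) of the real head rotation."""
    return actual_rotation_deg * visual_gain

def kinaesthetic_drift(perceived_deg, actual_deg=50.0):
    """Signed discrepancy between the perceived facing direction (where participants point)
    and the real 50-degree head rotation; positive = drift toward the visually suggested movement."""
    return perceived_deg - actual_deg

print(moovi_feedback(50.0, 1.5))    # display suggests 75 degrees of rotation
print(kinaesthetic_drift(58.0))     # example: 8 degrees of drift toward the suggested movement
```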
Bair, Woei-Nan; Kiemel, Tim; Jeka, John J.; Clark, Jane E.
2012-01-01
Background Developmental Coordination Disorder (DCD) is a leading movement disorder in children that commonly involves poor postural control. Multisensory integration deficit, especially the inability to adaptively reweight to changing sensory conditions, has been proposed as a possible mechanism but with insufficient characterization. Empirical quantification of reweighting significantly advances our understanding of its developmental onset and improves the characterization of its difference in children with DCD compared to their typically developing (TD) peers. Methodology/Principal Findings Twenty children with DCD (6.6 to 11.8 years) were tested with a protocol in which a visual scene and a touch bar simultaneously oscillated medio-laterally at different frequencies and various amplitudes. Their data were compared to data on TD children (4.2 to 10.8 years) from a previous study. Gains and phases were calculated for medio-lateral responses of the head and center of mass to both sensory stimuli. Gains and phases were simultaneously fitted by linear functions of age for each amplitude condition, segment, modality and group. Fitted gains and phases at two comparison ages (6.6 and 10.8 years) were tested for reweighting within each group and for group differences. Children with DCD reweight touch and vision at a later age (10.8 years) than their TD peers (4.2 years). Children with DCD demonstrate a weak visual reweighting, no advanced multisensory fusion and phase lags larger than those of TD children in response to both touch and vision. Conclusions/Significance Two developmental perspectives, postural body scheme and dorsal stream development, are provided to explain the weak vision reweighting. The lack of multisensory fusion supports the notion that optimal multisensory integration is a slow developmental process and is vulnerable in children with DCD. PMID:22815872
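Gain and phase of a postural response to a sinusoidal sensory stimulus are typically read off the frequency-response function at the stimulus frequency. The following sketch shows one common way to compute them with an FFT; it assumes synthetic signals and a generic procedure, not the specific analysis pipeline of this study.

```python
import numpy as np

def gain_and_phase(response, stimulus, fs, stim_freq):
    """Gain and phase (degrees) of a postural response relative to a sinusoidal
    sensory stimulus, read from the FFT bin at the stimulus frequency."""
    freqs = np.fft.rfftfreq(len(stimulus), d=1.0 / fs)
    k = np.argmin(np.abs(freqs - stim_freq))      # bin closest to the stimulus frequency
    transfer = np.fft.rfft(response)[k] / np.fft.rfft(stimulus)[k]
    return np.abs(transfer), np.angle(transfer, deg=True)

# Synthetic check: a response at half the stimulus amplitude, lagging by 30 degrees
fs, f = 50.0, 0.25                                # sampling rate (Hz), stimulus frequency (Hz)
t = np.arange(0, 60, 1.0 / fs)
stim = np.sin(2 * np.pi * f * t)
resp = 0.5 * np.sin(2 * np.pi * f * t - np.deg2rad(30))
print(gain_and_phase(resp, stim, fs, f))          # approximately (0.5, -30.0)
```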
Møller, Cecilie; Højlund, Andreas; Bærentsen, Klaus B; Hansen, Niels Chr; Skewes, Joshua C; Vuust, Peter
2018-05-01
Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states that multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak. It is one of the basic principles underlying multisensory processing of spatiotemporally corresponding crossmodal stimuli, and it is well established at both behavioral and neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues to pitch discrimination follows the PoIE at the interindividual level (i.e., varies with varying levels of auditory-only pitch discrimination abilities). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than incongruent condition. The magnitude of gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
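For readers unfamiliar with the sensitivity index, the snippet below shows a standard d' computation with illustrative hit and false-alarm rates; the particular values and the clipping correction are assumptions, not the study's data.

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' for a deviant-detection (oddball) task."""
    clip = lambda p: min(max(p, 0.01), 0.99)   # guard against infinite z-scores at rates of 0 or 1
    return norm.ppf(clip(hit_rate)) - norm.ppf(clip(false_alarm_rate))

# Illustrative values only: a congruent visual cue raises hits at a constant false-alarm rate
print(d_prime(0.80, 0.15))   # auditory-only pitch discrimination
print(d_prime(0.90, 0.15))   # with a congruent visual cue: larger d' (crossmodal gain)
```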
Visual Enhancement of Illusory Phenomenal Accents in Non-Isochronous Auditory Rhythms
2016-01-01
Musical rhythms encompass temporal patterns that often yield regular metrical accents (e.g., a beat). There have been mixed results regarding perception as a function of metrical saliency, namely, whether sensitivity to a deviant was greater in metrically stronger or weaker positions. Besides, effects of metrical position have not been examined in non-isochronous rhythms, or with respect to multisensory influences. This study was concerned with two main issues: (1) In non-isochronous auditory rhythms with clear metrical accents, how would sensitivity to a deviant be modulated by metrical positions? (2) Would the effects be enhanced by multisensory information? Participants listened to strongly metrical rhythms with or without watching a point-light figure dance to the rhythm in the same meter, and detected a slight loudness increment. Both conditions were presented with or without an auditory interference that served to impair auditory metrical perception. Sensitivity to a deviant was found greater in weak beat than in strong beat positions, consistent with the Predictive Coding hypothesis and the idea of metrically induced illusory phenomenal accents. The visual rhythm of dance hindered auditory detection, but more so when the latter was itself less impaired. This pattern suggested that the visual and auditory rhythms were perceptually integrated to reinforce metrical accentuation, yielding more illusory phenomenal accents and thus lower sensitivity to deviants, in a manner consistent with the principle of inverse effectiveness. Results were discussed in the predictive framework for multisensory rhythms involving observed movements and possible mediation of the motor system. PMID:27880850
Multi-Sensory Input in the Non-Academic ESL Classroom.
ERIC Educational Resources Information Center
Bassano, Sharron
Teaching approaches for adult English as a second language students with little previous formal education or native language literacy cannot rely on traditional written materials. For students who cannot be reached through the written word, approaches must be devised that engage other channels of perception. Classroom activities are suggested…
Brief Report: Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.
2014-01-01
Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their…
Callan, Daniel E.; Jones, Jeffery A.; Callan, Akiko
2014-01-01
Behavioral and neuroimaging studies have demonstrated that brain regions involved with speech production also support speech perception, especially under degraded conditions. The premotor cortex (PMC) has been shown to be active during both observation and execution of action (“Mirror System” properties), and may facilitate speech perception by mapping unimodal and multimodal sensory features onto articulatory speech gestures. For this functional magnetic resonance imaging (fMRI) study, participants identified vowels produced by a speaker in audio-visual (saw the speaker's articulating face and heard her voice), visual only (only saw the speaker's articulating face), and audio only (only heard the speaker's voice) conditions with varying audio signal-to-noise ratios in order to determine the regions of the PMC involved with multisensory and modality specific processing of visual speech gestures. The task was designed so that identification could be made with a high level of accuracy from visual only stimuli to control for task difficulty and differences in intelligibility. The results of the functional magnetic resonance imaging (fMRI) analysis for visual only and audio-visual conditions showed overlapping activity in inferior frontal gyrus and PMC. The left ventral inferior premotor cortex (PMvi) showed properties of multimodal (audio-visual) enhancement with a degraded auditory signal. The left inferior parietal lobule and right cerebellum also showed these properties. The left ventral superior and dorsal premotor cortex (PMvs/PMd) did not show this multisensory enhancement effect, but there was greater activity for the visual only over audio-visual conditions in these areas. The results suggest that the inferior regions of the ventral premotor cortex are involved with integrating multisensory information, whereas, more superior and dorsal regions of the PMC are involved with mapping unimodal (in this case visual) sensory features of the speech signal with articulatory speech gestures. PMID:24860526
Martin, Andrea E.
2016-01-01
I argue that cue integration, a psychophysiological mechanism from vision and multisensory perception, offers a computational linking hypothesis between psycholinguistic theory and neurobiological models of language. I propose that this mechanism, which incorporates probabilistic estimates of a cue's reliability, might function in language processing from the perception of a phoneme to the comprehension of a phrase structure. I briefly consider the implications of the cue integration hypothesis for an integrated theory of language that includes acquisition, production, dialogue and bilingualism, while grounding the hypothesis in canonical neural computation. PMID:26909051
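The reliability-weighted combination at the heart of the cue integration hypothesis has a standard closed form: each cue is weighted by its inverse variance. A minimal sketch, with illustrative numbers standing in for any two linguistic cues (the values are assumptions, not from the article):

```python
import numpy as np

def integrate_cues(estimates, variances):
    """Inverse-variance (reliability-weighted) cue combination: each cue's weight is
    proportional to its reliability, and the combined estimate is more reliable than either cue alone."""
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    combined = float(np.dot(weights, estimates))
    combined_variance = 1.0 / np.sum(1.0 / variances)
    return combined, combined_variance

# Illustrative: a reliable cue near 0.2 and a noisier cue near 0.8 to the same category
print(integrate_cues(estimates=[0.2, 0.8], variances=[0.04, 0.16]))  # pulled toward the more reliable cue
```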
High sensitivity to multisensory conflicts in agoraphobia exhibited by virtual reality.
Viaud-Delmon, Isabelle; Warusfel, Olivier; Seguelas, Angeline; Rio, Emmanuel; Jouvent, Roland
2006-10-01
The primary aim of this study was to evaluate the effect of auditory feedback in a VR system planned for clinical use and to address the different factors that should be taken into account in building a bimodal virtual environment (VE). We conducted an experiment in which we assessed spatial performances in agoraphobic patients and normal subjects, comparing two kinds of VEs, visual alone (Vis) and auditory-visual (AVis), during separate sessions. Subjects were equipped with a head-mounted display coupled with an electromagnetic sensor system and immersed in a virtual town. Their task was to locate different landmarks and become familiar with the town. In the AVis condition subjects were equipped with the head-mounted display and headphones, which delivered a soundscape updated in real-time according to their movement in the virtual town. While general performances remained comparable across the conditions, the reported feeling of immersion was more compelling in the AVis environment. However, patients exhibited more cybersickness symptoms in this condition. The results of this study point to a multisensory integration deficit in agoraphobic patients and underline the need for further research on multimodal VR systems for clinical use.
Multisensory connections of monkey auditory cerebral cortex
Smiley, John F.; Falchier, Arnaud
2009-01-01
Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources. PMID:19619628
Cardini, Flavia; Tajadura-Jiménez, Ana; Serino, Andrea; Tsakiris, Manos
2013-01-01
Understanding other people’s feelings in social interactions depends on the ability to map onto our body the sensory experiences we observed on other people’s bodies. It has been shown that the perception of tactile stimuli on the face is improved when concurrently viewing a face being touched. This Visual Remapping of Touch (VRT) is enhanced the more similar others are perceived to be to the self, and is strongest when viewing one’s face. Here, we ask whether altering self-other boundaries can in turn change the VRT effect. We used the enfacement illusion, which relies on synchronous interpersonal multisensory stimulation (IMS), to manipulate self-other boundaries. Following synchronous, but not asynchronous, IMS, the self-related enhancement of the VRT extended to the other individual. These findings suggest that shared multisensory experiences represent one key way to overcome the boundaries between self and others, as evidenced by changes in somatosensory processing of tactile stimuli on one’s own face when concurrently viewing another person’s face being touched. PMID:23276110
Kanaya, Shoko; Kariya, Kenji; Fujisaki, Waka
2016-10-01
Certain systematic relationships are often assumed between information conveyed from multiple sensory modalities; for instance, a small figure and a high pitch may be perceived as more harmonious. This phenomenon, termed cross-modal correspondence, may result from correlations between multi-sensory signals learned in daily experience of the natural environment. If so, we would observe cross-modal correspondences not only in the perception of artificial stimuli but also in perception of natural objects. To test this hypothesis, we reanalyzed data collected previously in our laboratory examining perceptions of the material properties of wood using vision, audition, and touch. We compared participant evaluations of three perceptual properties (surface brightness, sharpness of sound, and smoothness) of the wood blocks obtained separately via vision, audition, and touch. Significant positive correlations were identified for all properties in the audition-touch comparison, and for two of the three properties in the vision-touch comparison. By contrast, no properties exhibited significant positive correlations in the vision-audition comparison. These results suggest that we learn correlations between multi-sensory signals through experience; however, the strength of this statistical learning is apparently dependent on the particular combination of sensory modalities involved. © The Author(s) 2016.
ERIC Educational Resources Information Center
Barringer, Mary Dean
The manual presents a program planning framework and teaching units for teaching dance and movement to severely and profoundly handicapped individuals. The planning framework contains four components: (1) aesthetic perception/multisensory integration; (2) creative expression; (3) dance heritage/historical and cultural; and (4) aesthetic…
ERIC Educational Resources Information Center
Butler, Andrew J.; James, Thomas W.; James, Karin Harman
2011-01-01
Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent…
Electrophysiological Evidence for a Multisensory Speech-Specific Mode of Perception
ERIC Educational Resources Information Center
Stekelenburg, Jeroen J.; Vroomen, Jean
2012-01-01
We investigated whether the interpretation of auditory stimuli as speech or non-speech affects audiovisual (AV) speech integration at the neural level. Perceptually ambiguous sine-wave replicas (SWS) of natural speech were presented to listeners who were either in "speech mode" or "non-speech mode". At the behavioral level, incongruent lipread…
Multi-sensory landscape assessment: the contribution of acoustic perception to landscape evaluation.
Gan, Yonghong; Luo, Tao; Breitung, Werner; Kang, Jian; Zhang, Tianhai
2014-12-01
In this paper, the contribution of visual and acoustic preference to multi-sensory landscape evaluation was quantitatively compared. The real landscapes were treated as dual-sensory ambiance and separated into visual landscape and soundscape. Both were evaluated by 63 respondents in laboratory conditions. The analysis of the relationship between respondents' visual and acoustic preference as well as their respective contribution to landscape preference showed that (1) some common attributes are universally identified in assessing visual, aural and audio-visual preference, such as naturalness or degree of human disturbance; (2) with acoustic and visual preferences as variables, a multi-variate linear regression model can satisfactorily predict landscape preference (R² = 0.740), while the coefficients of determination for univariate linear regression models were 0.345 and 0.720 for visual and acoustic preference as predicting factors, respectively; (3) acoustic preference played a much more important role in landscape evaluation than visual preference in this study (the former is about 4.5 times the latter), which strongly suggests a rethinking of the role of soundscape in environment perception research and landscape planning practice.
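The reported comparison of R² values can be reproduced in outline with ordinary least squares. The sketch below uses simulated ratings (not the study's data) to show how the bivariate and univariate coefficients of determination would be computed.

```python
import numpy as np

def r_squared(y, X):
    """Coefficient of determination for an ordinary least-squares fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1.0 - residuals.var() / y.var()

# Simulated ratings for 63 respondents (illustrative, not the study's data)
rng = np.random.default_rng(2)
acoustic = rng.normal(size=63)
visual = rng.normal(size=63)
landscape = 0.9 * acoustic + 0.2 * visual + rng.normal(scale=0.3, size=63)

print(r_squared(landscape, np.column_stack([visual, acoustic])))  # bivariate model
print(r_squared(landscape, acoustic))                              # acoustic preference alone
print(r_squared(landscape, visual))                                # visual preference alone
```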
Sasaki, Ryo; Angelaki, Dora E.
2017-01-01
We use visual image motion to judge the movement of objects, as well as our own movements through the environment. Generally, image motion components caused by object motion and self-motion are confounded in the retinal image. Thus, to estimate heading, the brain would ideally marginalize out the effects of object motion (or vice versa), but little is known about how this is accomplished neurally. Behavioral studies suggest that vestibular signals play a role in dissociating object motion and self-motion, and recent computational work suggests that a linear decoder can approximate marginalization by taking advantage of diverse multisensory representations. By measuring responses of MSTd neurons in two male rhesus monkeys and by applying a recently-developed method to approximate marginalization by linear population decoding, we tested the hypothesis that vestibular signals help to dissociate self-motion and object motion. We show that vestibular signals stabilize tuning for heading in neurons with congruent visual and vestibular heading preferences, whereas they stabilize tuning for object motion in neurons with discrepant preferences. Thus, vestibular signals enhance the separability of joint tuning for object motion and self-motion. We further show that a linear decoder, designed to approximate marginalization, allows the population to represent either self-motion or object motion with good accuracy. Decoder weights are broadly consistent with a readout strategy, suggested by recent computational work, in which responses are decoded according to the vestibular preferences of multisensory neurons. These results demonstrate, at both single neuron and population levels, that vestibular signals help to dissociate self-motion and object motion. SIGNIFICANCE STATEMENT The brain often needs to estimate one property of a changing environment while ignoring others. This can be difficult because multiple properties of the environment may be confounded in sensory signals. The brain can solve this problem by marginalizing over irrelevant properties to estimate the property-of-interest. We explore this problem in the context of self-motion and object motion, which are inherently confounded in the retinal image. We examine how diversity in a population of multisensory neurons may be exploited to decode self-motion and object motion from the population activity of neurons in macaque area MSTd. PMID:29030435
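The decoding logic can be illustrated with a toy linear population model: fit separate least-squares readouts for heading and object motion on trials where both vary independently, so each readout learns to discount the other variable. Everything below (the tuning model, trial counts, and noise level) is an assumption for illustration, not the recorded data or the authors' decoder.

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons = 100
w_heading = rng.normal(size=n_neurons)   # each neuron's (visual + vestibular) heading sensitivity
w_object = rng.normal(size=n_neurons)    # each neuron's object-motion sensitivity

def population_response(heading, object_motion):
    """Toy linear rate model: responses depend on both self-motion and object motion, plus noise."""
    return w_heading * heading + w_object * object_motion + rng.normal(0.0, 0.1, n_neurons)

# Fit linear readouts on trials where both variables vary independently, so each decoder
# learns to discount the irrelevant variable (an approximate marginalization by linear decoding)
headings = rng.uniform(-1, 1, 500)
objects = rng.uniform(-1, 1, 500)
R = np.stack([population_response(h, o) for h, o in zip(headings, objects)])
dec_heading, *_ = np.linalg.lstsq(R, headings, rcond=None)
dec_object, *_ = np.linalg.lstsq(R, objects, rcond=None)

r = population_response(0.5, -0.3)       # new trial: rightward heading, leftward object motion
print(r @ dec_heading, r @ dec_object)   # approximately 0.5 and -0.3
```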
Perception of Multisensory Gender Coherence in 6- and 9-month-old Infants
de Boisferon, Anne Hillairet; Dupierrix, Eve; Quinn, Paul C.; Lœvenbruck, Hélène; Lewkowicz, David J.; Lee, Kang; Pascalis, Olivier
2015-01-01
One of the most salient social categories conveyed by human faces and voices is gender. We investigated the developmental emergence of the ability to perceive the coherence of auditory and visual attributes of gender in 6- and 9-month-old infants. Infants viewed two side-by-side video clips of a man and a woman singing a nursery rhyme and heard a synchronous male or female soundtrack. Results showed that 6-month-old infants did not match the audible and visible attributes of gender, and 9-month-old infants matched only female faces and voices. These findings indicate that the ability to perceive the multisensory coherence of gender emerges relatively late in infancy and that it reflects the greater experience that most infants have with female faces and voices. PMID:26561475
Roughness Perception during the Rubber Hand Illusion
ERIC Educational Resources Information Center
Schutz-Bosbach, Simone; Tausche, Peggy; Weiss, Carmen
2009-01-01
Watching a rubber hand being stroked by a paintbrush while feeling identical stroking of one's own occluded hand can create a compelling illusion that the seen hand becomes part of one's own body. It has been suggested that this so-called rubber hand illusion (RHI) does not simply reflect a bottom-up multisensory integration process but that the…
Intersensory Perception at Birth: Newborns Match Nonhuman Primate Faces and Voices
ERIC Educational Resources Information Center
Lewkowicz, David J.; Leo, Irene; Simion, Francesca
2010-01-01
Previous studies have shown that infants, including newborns, can match previously unseen and unheard human faces and vocalizations. More recently, it has been reported that infants as young as 4 months of age also can match the faces and vocalizations of other species raising the possibility that such broad multisensory perceptual tuning is…
The Audiovisual Temporal Binding Window Narrows in Early Childhood
ERIC Educational Resources Information Center
Lewkowicz, David J.; Flom, Ross
2014-01-01
Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked…
Responses of prefrontal multisensory neurons to mismatching faces and vocalizations.
Diehl, Maria M; Romanski, Lizabeth M
2014-08-20
Social communication relies on the integration of auditory and visual information, which are present in faces and vocalizations. Evidence suggests that the integration of information from multiple sources enhances perception compared with the processing of a unimodal stimulus. Our previous studies demonstrated that single neurons in the ventrolateral prefrontal cortex (VLPFC) of the rhesus monkey (Macaca mulatta) respond to and integrate conspecific vocalizations and their accompanying facial gestures. We were therefore interested in how VLPFC neurons respond differentially to matching (congruent) and mismatching (incongruent) faces and vocalizations. We recorded VLPFC neurons during the presentation of movies with congruent or incongruent species-specific facial gestures and vocalizations as well as their unimodal components. Recordings showed that while many VLPFC units are multisensory and respond to faces, vocalizations, or their combination, a subset of neurons showed a significant change in neuronal activity in response to incongruent versus congruent vocalization movies. Among these neurons, we typically observed incongruent suppression during the early stimulus period and incongruent enhancement during the late stimulus period. Incongruent-responsive VLPFC neurons were both bimodal and nonlinear multisensory, fostering their ability to respond to changes in either modality of a face-vocalization stimulus. These results demonstrate that ventral prefrontal neurons respond to changes in either modality of an audiovisual stimulus, which is important in identity processing and for the integration of multisensory communication information. Copyright © 2014 the authors 0270-6474/14/3411233-11$15.00/0.
Ryan, Janice
2017-10-01
This exploratory, evidence-based practice research study focuses on presenting a plausible mesoscopic brain dynamics hypothesis for the benefits of treating clients with psychosocial and cognitive challenges using a mindful therapeutic approach and multi-sensory environments. After an extensive neuroscientific review of the therapeutic benefits of mindfulness, a multi-sensory environment is presented as a window of therapeutic opportunity to more quickly and efficiently facilitate the neurobiological experience of becoming more mindful or conscious of self and environment. The complementary relationship between the default mode network and the executive attention network is offered as a neurobiological hypothesis that could explain positive occupational engagement pattern shifts in a case study video of a hospice client with advanced dementia during multi-sensory environment treatment. Orbital Decomposition is used for a video analysis that shows a significant behavioral pattern shift consistent with dampening of the perceptual system attractors that contribute to negative emotional circular causalities in a variety of client populations. This treatment approach may also prove to be valuable for any person who has developed circular causalities due to feelings of isolation, victimization, or abuse. A case is made for broader applications of this intervention that may positively influence perception during the information transfer and processing of hippocampal learning. Future research is called for to determine if positive affective, interpersonal, and occupational engagement pattern shifts during treatment are related to the improved default mode network-executive attention network synchrony characteristic of increased mindfulness.
Zencius, A H; Wesolowski, M D; Rodriguez, I M
1998-01-01
The efficacy of using antecedent control procedures (practice, multi-sensory input and peer participation) in facilitating orientation to person, place and time with two survivors of traumatic brain injuries was tested in two studies. In the first investigation, a 23-year-old male was treated by presenting the orientation questions orally while being shown the questions on written flashcards. Results suggest that correct responses to orientation questions only occurred when flashcards were coupled with oral questioning. The participant responded correctly to nearly 100% of all orientation questions within 2 weeks of initiating flashcards. In the second study, a 19-year-old male was asked to respond in writing to 20 orientation questions in a small group. The group had a leader and four TBI patients. Following this, group members who correctly answered the orientation questions took turns reading orientation questions and providing the correct responses.
Vision and air flow combine to streamline flying honeybees
Taylor, Gavin J.; Luu, Tien; Ball, David; Srinivasan, Mandyam V.
2013-01-01
Insects face the challenge of integrating multi-sensory information to control their flight. Here we study a ‘streamlining' response in honeybees, whereby honeybees raise their abdomen to reduce drag. We find that this response, which was recently reported to be mediated by optic flow, is also strongly modulated by the presence of air flow simulating a head wind. The Johnston's organs in the antennae were found to play a role in the measurement of the air speed that is used to control the streamlining response. The response to a combination of visual motion and wind is complex and can be explained by a model that incorporates a non-linear combination of the two stimuli. The use of visual and mechanosensory cues increases the strength of the streamlining response when the stimuli are present concurrently. We propose this multisensory integration will make the response more robust to transient disturbances in either modality. PMID:24019053
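The abstract describes, but does not specify, a model that combines the visual and air-flow inputs non-linearly. Purely as a hypothetical illustration of such a combination (the functional form and all parameters below are invented, not the fitted model from the study), one could write:

```python
import numpy as np

def streamlining_response(optic_flow, air_speed, a=0.04, b=0.08, c=0.02, theta_max=40.0):
    """Hypothetical saturating model of abdomen elevation (degrees) driven by a
    nonlinear combination of visual motion and air-speed cues."""
    drive = a * optic_flow + b * air_speed + c * optic_flow * air_speed   # interaction term makes it nonlinear
    return theta_max * np.tanh(drive)                                      # saturates at theta_max

print(streamlining_response(optic_flow=5.0, air_speed=2.0))   # both cues present: strongest response
print(streamlining_response(optic_flow=5.0, air_speed=0.0))   # vision alone: weaker response
```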
The multisensory body revealed through its cast shadows
Pavani, Francesco; Galfano, Giovanni
2015-01-01
One key issue when conceiving the body as a multisensory object is how the cognitive system integrates visible instances of the self and other bodies with one’s own somatosensory processing, to achieve self-recognition and body ownership. Recent research has strongly suggested that shadows cast by our own body have a special status for cognitive processing, directing attention to the body in a fast and highly specific manner. The aim of the present article is to review the most recent scientific contributions addressing how body shadows affect both sensory/perceptual and attentional processes. The review examines three main points: (1) body shadows as a special window to investigate the construction of multisensory body perception; (2) experimental paradigms and related findings; (3) open questions and future trajectories. The reviewed literature suggests that shadows cast by one’s own body promote binding between personal and extrapersonal space and elicit automatic orienting of attention toward the body-part casting the shadow. Future research should address whether the effects exerted by body shadows are similar to those observed when observers are exposed to other visual instances of their body. The results will further clarify the processes underlying the merging of vision and somatosensation when creating body representations. PMID:26042079
Can You Hear That Peak? Utilization of Auditory and Visual Feedback at Peak Limb Velocity
ERIC Educational Resources Information Center
Loria, Tristan; de Grosbois, John; Tremblay, Luc
2016-01-01
Purpose: At rest, the central nervous system combines and integrates multisensory cues to yield an optimal percept. When engaging in action, the relative weighing of sensory modalities has been shown to be altered. Because the timing of peak velocity is the critical moment in some goal-directed movements (e.g., overarm throwing), the current study…
ERIC Educational Resources Information Center
Russo, N.; Mottron, L.; Burack, J. A.; Jemel, B.
2012-01-01
Individuals with autism spectrum disorders (ASD) report difficulty integrating simultaneously presented visual and auditory stimuli (Iarocci & McDonald, 2006), albeit showing enhanced perceptual processing of unisensory stimuli, as well as an enhanced role of perception in higher-order cognitive tasks (Enhanced Perceptual Functioning (EPF) model;…
The Sound and Feel of Titrations: A Smartphone Aid for Color-Blind and Visually Impaired Students
ERIC Educational Resources Information Center
Bandyopadhyay, Subhajit; Rathod, Balraj B.
2017-01-01
An Android-based application has been developed to provide color-blind and visually impaired students a multisensory perception of color change observed in a titration. The application records and converts the color information into beep sounds and vibration pulses, which are generated by the smartphone. It uses a range threshold of hue and…
Evidence for Enhanced Interoceptive Accuracy in Professional Musicians
Schirmer-Mokwa, Katharina L.; Fard, Pouyan R.; Zamorano, Anna M.; Finkel, Sebastian; Birbaumer, Niels; Kleber, Boris A.
2015-01-01
Interoception is defined as the perceptual activity involved in the processing of internal bodily signals. While interoceptive ability is considered a relatively stable trait, recent data suggest that learning to integrate multisensory information can modulate it. Making music is a uniquely rich multisensory experience that has been shown to alter motor, sensory, and multimodal representations in the brain of musicians. We hypothesize that musical training also heightens interoceptive accuracy comparable to other perceptual modalities. Thirteen professional singers, twelve string players, and thirteen matched non-musicians were examined using a well-established heartbeat discrimination paradigm complemented by self-reported dispositional traits. Results revealed that both groups of musicians displayed higher interoceptive accuracy than non-musicians, whereas no differences were found between singers and string-players. Regression analyses showed that accumulated musical practice explained about 49% of the variation in heartbeat perception accuracy in singers but not in string-players. Psychometric data yielded a number of psychologically plausible inter-correlations in musicians related to performance anxiety. However, dispositional traits were not a confounding factor on heartbeat discrimination accuracy. Together, these data provide first evidence indicating that professional musicians show enhanced interoceptive accuracy compared to non-musicians. We argue that musical training largely accounted for this effect. PMID:26733836
Perceptual drifts of real and artificial limbs in the rubber hand illusion.
Fuchs, Xaver; Riemer, Martin; Diers, Martin; Flor, Herta; Trojan, Jörg
2016-04-22
In the rubber hand illusion (RHI), transient embodiment of an artificial hand is induced. An often-used indicator for this effect is the "proprioceptive drift", a localization bias of the real hand towards the artificial hand. This measure suggests that the real hand is attracted by the artificial hand. Principles of multisensory integration, however, suggest that conflicting sensory information is combined in a "compromise" fashion and that the hands should rather be attracted towards each other. Here, we used a new variant of the RHI paradigm in which participants pointed at the artificial hand. Our results indicate that the perceived positions of the real and artificial hand converge towards each other: in addition to the well-known drift of the real hand towards the artificial hand, we also found an opposite drift of the artificial hand towards the real hand. Our results contradict the notion of perceptual substitution of the real hand by the artificial hand. Rather, they are in line with the view that vision and proprioception are fused into an intermediate percept. This is further evidence that the perception of our body is a flexible multisensory construction that is based on integration principles.
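The "compromise" account can be expressed as a weighted fusion of the seen and felt hand positions, which immediately predicts opposite drifts of the two hands. A minimal sketch with an assumed visual weight (the weight and distances below are illustrative, not the study's estimates):

```python
def predicted_drifts(real_pos, rubber_pos, w_vision=0.6):
    """Under compromise fusion, both hands are perceived near a common weighted-average
    position, so each is mislocalized toward the other."""
    fused = w_vision * rubber_pos + (1 - w_vision) * real_pos
    proprioceptive_drift = fused - real_pos    # real hand drifts toward the artificial hand
    artificial_drift = fused - rubber_pos      # artificial hand drifts toward the real hand
    return proprioceptive_drift, artificial_drift

# Hands placed 15 cm apart: the two predicted drifts have opposite signs
print(predicted_drifts(real_pos=0.0, rubber_pos=15.0))   # (9.0, -6.0) cm
```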
Audiovisual semantic congruency during encoding enhances memory performance.
Heikkilä, Jenni; Alho, Kimmo; Hyvönen, Heidi; Tiippana, Kaisa
2015-01-01
Studies of memory and learning have usually focused on a single sensory modality, although human perception is multisensory in nature. In the present study, we investigated the effects of audiovisual encoding on later unisensory recognition memory performance. The participants were to memorize auditory or visual stimuli (sounds, pictures, spoken words, or written words), each of which co-occurred with either a semantically congruent stimulus, incongruent stimulus, or a neutral (non-semantic noise) stimulus in the other modality during encoding. Subsequent memory performance was overall better when the stimulus to be memorized was initially accompanied by a semantically congruent stimulus in the other modality than when it was accompanied by a neutral stimulus. These results suggest that semantically congruent multisensory experiences enhance encoding of both nonverbal and verbal materials, resulting in an improvement in their later recognition memory.
The role of vision in auditory distance perception.
Calcagno, Esteban R; Abregú, Ezequiel L; Eguía, Manuel C; Vergara, Ramiro
2012-01-01
In humans, multisensory interaction is an important strategy for improving the detection of stimuli of different nature and reducing the variability of response. It is known that the presence of visual information affects auditory perception in the horizontal plane (azimuth), but few studies have examined the influence of vision on auditory distance perception. In general, the data obtained from these studies are contradictory and do not completely define the way in which visual cues affect the apparent distance of a sound source. Here, psychophysical experiments on auditory distance perception in humans are performed, including and excluding visual cues. The results show that the apparent distance from the source is affected by the presence of visual information and that subjects can store in their memory a representation of the environment that later improves the perception of distance.
The role of alpha oscillations for illusory perception
Lange, Joachim; Keil, Julian; Schnitzler, Alfons; van Dijk, Hanneke; Weisz, Nathan
2014-01-01
Alpha oscillations are a prominent electrophysiological signal measured across a wide range of species and cortical and subcortical sites. Alpha oscillations have been viewed for a long time as an “idling” rhythm, purely reflecting inactive sites. Despite earlier evidence from neurophysiology, awareness that alpha oscillations can substantially influence perception and behavior has grown only recently in cognitive neuroscience. Evidence for an active role of alpha for perception comes mainly from several visual, near-threshold experiments. In the current review, we extend this view by summarizing studies showing how alpha-defined brain states relate to illusory perception, i.e. cases of perceptual reports that are not “objectively” verifiable by distinct stimuli or stimulus features. These studies demonstrate that ongoing or prestimulus alpha oscillations substantially influence the perception of auditory, visual or multisensory illusions. PMID:24931795
ERIC Educational Resources Information Center
Fengler, Ineke; Delfau, Pia-Céline; Röder, Brigitte
2018-01-01
It is yet unclear whether congenitally deaf cochlear implant (CD CI) users' visual and multisensory emotion perception is influenced by their history in sign language acquisition. We hypothesized that early-signing CD CI users, relative to late-signing CD CI users and hearing, non-signing controls, show better facial expression recognition and…
ERIC Educational Resources Information Center
Cardini, Flavia; Tajadura-Jimenez, Ana; Serino, Andrea; Tsakiris, Manos
2013-01-01
Understanding other people's feelings in social interactions depends on the ability to map onto our body the sensory experiences we observed on other people's bodies. It has been shown that the perception of tactile stimuli on the face is improved when concurrently viewing a face being touched. This Visual Remapping of Touch (VRT) is enhanced the…
Sasaki, Ryo; Angelaki, Dora E; DeAngelis, Gregory C
2017-11-15
We use visual image motion to judge the movement of objects, as well as our own movements through the environment. Generally, image motion components caused by object motion and self-motion are confounded in the retinal image. Thus, to estimate heading, the brain would ideally marginalize out the effects of object motion (or vice versa), but little is known about how this is accomplished neurally. Behavioral studies suggest that vestibular signals play a role in dissociating object motion and self-motion, and recent computational work suggests that a linear decoder can approximate marginalization by taking advantage of diverse multisensory representations. By measuring responses of MSTd neurons in two male rhesus monkeys and by applying a recently-developed method to approximate marginalization by linear population decoding, we tested the hypothesis that vestibular signals help to dissociate self-motion and object motion. We show that vestibular signals stabilize tuning for heading in neurons with congruent visual and vestibular heading preferences, whereas they stabilize tuning for object motion in neurons with discrepant preferences. Thus, vestibular signals enhance the separability of joint tuning for object motion and self-motion. We further show that a linear decoder, designed to approximate marginalization, allows the population to represent either self-motion or object motion with good accuracy. Decoder weights are broadly consistent with a readout strategy, suggested by recent computational work, in which responses are decoded according to the vestibular preferences of multisensory neurons. These results demonstrate, at both single neuron and population levels, that vestibular signals help to dissociate self-motion and object motion. SIGNIFICANCE STATEMENT The brain often needs to estimate one property of a changing environment while ignoring others. This can be difficult because multiple properties of the environment may be confounded in sensory signals. The brain can solve this problem by marginalizing over irrelevant properties to estimate the property-of-interest. We explore this problem in the context of self-motion and object motion, which are inherently confounded in the retinal image. We examine how diversity in a population of multisensory neurons may be exploited to decode self-motion and object motion from the population activity of neurons in macaque area MSTd. Copyright © 2017 the authors 0270-6474/17/3711204-16$15.00/0.
Neural Correlates of Interindividual Differences in Children’s Audiovisual Speech Perception
Nath, Audrey R.; Fava, Eswen E.; Beauchamp, Michael S.
2011-01-01
Children use information from both the auditory and visual modalities to aid in understanding speech. A dramatic illustration of this multisensory integration is the McGurk effect, an illusion in which an auditory syllable is perceived differently when it is paired with an incongruent mouth movement. However, there are significant interindividual differences in McGurk perception: some children never perceive the illusion, while others always do. Because converging evidence suggests that the posterior superior temporal sulcus (STS) is a critical site for multisensory integration, we hypothesized that activity within the STS would predict susceptibility to the McGurk effect. To test this idea, we used blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) in seventeen children aged 6 to 12 years to measure brain responses to three audiovisual stimulus categories: McGurk incongruent, non-McGurk incongruent and congruent syllables. Two separate analysis approaches, one using independent functional localizers and another using whole-brain voxel-based regression, showed differences in the left STS between perceivers and non-perceivers. The STS of McGurk perceivers responded significantly more than non-perceivers to McGurk syllables, but not to other stimuli, and perceivers’ hemodynamic responses in the STS were significantly prolonged. In addition to the STS, weaker differences between perceivers and non-perceivers were observed in the FFA and extrastriate visual cortex. These results suggest that the STS is an important source of interindividual variability in children’s audiovisual speech perception. PMID:21957257
Primary and multisensory cortical activity is correlated with audiovisual percepts.
Benoit, Margo McKenna; Raij, Tommi; Lin, Fa-Hsuan; Jääskeläinen, Iiro P; Stufflebeam, Steven
2010-04-01
Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect where visual /ka/ and auditory /pa/ fuse into another percept such as /ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/ voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/ voice /ka/). This paradigm was predicted to show increased release from adaptation (i.e. stronger brain activation) when the fourth movie and the related percept were increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion. Copyright 2009 Wiley-Liss, Inc.
Primary and Multisensory Cortical Activity is Correlated with Audiovisual Percepts
Benoit, Margo McKenna; Raij, Tommi; Lin, Fa-Hsuan; Jääskeläinen, Iiro P.; Stufflebeam, Steven
2012-01-01
Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect where visual /ka/ and auditory /pa/ fuse into another percept such as /ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/ voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/ voice /ka/). This paradigm was predicted to show increased release from adaptation (i.e. stronger brain activation) when the fourth movie and the related percept were increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion. PMID:19780040
Listening to Another Sense: Somatosensory Integration in the Auditory System
Wu, Calvin; Stefanescu, Roxana A.; Martel, David T.
2014-01-01
Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems, and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body, and the auditory cortex. In this review, we explore the process of multisensory integration from 1) anatomical (inputs and connections), 2) physiological (cellular responses), 3) functional, and 4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing, and offers a multisensory perspective regarding the understanding of sensory disorders. PMID:25526698
Multi- and unisensory visual flash illusions.
Courtney, Jon R; Motes, Michael A; Hubbard, Timothy L
2007-01-01
The role of stimulus structure in multisensory and unisensory interactions was examined. When a flash (17 ms) was accompanied by multiple tones (each 7 ms, SOA ≤ 100 ms) multiple flashes were reported, and this effect has been suggested to reflect the role of stimulus continuity in multisensory interactions. In experiments 1 and 2 we examined if stimulus continuity would affect concurrently presented stimuli. When a relatively longer flash (317 ms) was accompanied by multiple tones (each 7 ms), observers reported perceiving multiple flashes. In experiment 3 we tested whether a flash presented near fixation would induce an illusory flash further in the periphery. One flash (17 ms) presented 5 degrees below fixation was reported as multiple flashes if presented with two flashes (each 17 ms, SOA = 100 ms) 2 degrees above fixation. The extent to which these data support a phenomenological continuity principle and whether this principle applies to unisensory perception is discussed.
Cross-modal versus within-modal recall: differences in behavioral and brain responses.
Butler, Andrew J; James, Karin H
2011-10-31
Although human experience is multisensory in nature, previous research has focused predominantly on memory for unisensory as opposed to multisensory information. In this work, we sought to investigate behavioral and neural differences between the cued recall of cross-modal audiovisual associations versus within-modal visual or auditory associations. Participants were presented with cue-target associations comprised of pairs of nonsense objects, pairs of nonsense sounds, objects paired with sounds, and sounds paired with objects. Subsequently, they were required to recall the modality of the target given the cue while behavioral accuracy, reaction time, and blood oxygenation level dependent (BOLD) activation were measured. Successful within-modal recall was associated with modality-specific reactivation in primary perceptual regions, and was more accurate than cross-modal retrieval. When auditory targets were correctly or incorrectly recalled using a cross-modal visual cue, there was re-activation in auditory association cortex, and recall of information from cross-modal associations activated the hippocampus to a greater degree than within-modal associations. Findings support theories that propose an overlap between regions active during perception and memory, and show that behavioral and neural differences exist between within- and cross-modal associations. Overall the current study highlights the importance of the role of multisensory information in memory. Copyright © 2011 Elsevier B.V. All rights reserved.
Multisensory Motion Perception in 3–4 Month-Old Infants
Nava, Elena; Grassi, Massimo; Brenna, Viola; Croci, Emanuela; Turati, Chiara
2017-01-01
Human infants begin very early in life to take advantage of multisensory information by extracting the invariant amodal information that is conveyed redundantly by multiple senses. Here we addressed the question as to whether infants can bind multisensory moving stimuli, and whether this occurs even if the motion produced by the stimuli is only illusory. Three- to 4-month-old infants were presented with two bimodal pairings: visuo-tactile and audio-visual. Visuo-tactile pairings consisted of apparently vertically moving bars (the Barber Pole illusion) moving in either the same or opposite direction with a concurrent tactile stimulus consisting of strokes given on the infant’s back. Audio-visual pairings consisted of the Barber Pole illusion in its visual and auditory version, the latter giving the impression of a continuous rising or ascending pitch. We found that infants were able to discriminate congruently (same direction) vs. incongruently moving (opposite direction) pairs irrespective of modality (Experiment 1). Importantly, we also found that congruently moving visuo-tactile and audio-visual stimuli were preferred over incongruently moving bimodal stimuli (Experiment 2). Our findings suggest that very young infants are able to extract motion as an amodal component and use it to match stimuli that only apparently move in the same direction. PMID:29187829
How music alters a kiss: superior temporal gyrus controls fusiform-amygdalar effective connectivity.
Pehrs, Corinna; Deserno, Lorenz; Bakels, Jan-Hendrik; Schlochtermeier, Lorna H; Kappelhoff, Hermann; Jacobs, Arthur M; Fritz, Thomas Hans; Koelsch, Stefan; Kuchinke, Lars
2014-11-01
While watching movies, the brain integrates the visual information and the musical soundtrack into a coherent percept. Multisensory integration can lead to emotion elicitation on which soundtrack valences may have a modulatory impact. Here, dynamic kissing scenes from romantic comedies were presented to 22 participants (13 females) during functional magnetic resonance imaging scanning. The kissing scenes were either accompanied by happy music, sad music or no music. Evidence from cross-modal studies motivated a predefined three-region network for multisensory integration of emotion, consisting of fusiform gyrus (FG), amygdala (AMY) and anterior superior temporal gyrus (aSTG). The interactions in this network were investigated using dynamic causal models of effective connectivity. This revealed bilinear modulations by happy and sad music with suppression effects on the connectivity from FG and AMY to aSTG. Non-linear dynamic causal modeling showed a suppressive gating effect of aSTG on fusiform-amygdalar connectivity. In conclusion, fusiform to amygdala coupling strength is modulated via feedback through aSTG as region for multisensory integration of emotional material. This mechanism was emotion-specific and more pronounced for sad music. Therefore, soundtrack valences may modulate emotion elicitation in movies by differentially changing preprocessed visual information to the amygdala. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
How music alters a kiss: superior temporal gyrus controls fusiform–amygdalar effective connectivity
Deserno, Lorenz; Bakels, Jan-Hendrik; Schlochtermeier, Lorna H.; Kappelhoff, Hermann; Jacobs, Arthur M.; Fritz, Thomas Hans; Koelsch, Stefan; Kuchinke, Lars
2014-01-01
While watching movies, the brain integrates the visual information and the musical soundtrack into a coherent percept. Multisensory integration can lead to emotion elicitation on which soundtrack valences may have a modulatory impact. Here, dynamic kissing scenes from romantic comedies were presented to 22 participants (13 females) during functional magnetic resonance imaging scanning. The kissing scenes were either accompanied by happy music, sad music or no music. Evidence from cross-modal studies motivated a predefined three-region network for multisensory integration of emotion, consisting of fusiform gyrus (FG), amygdala (AMY) and anterior superior temporal gyrus (aSTG). The interactions in this network were investigated using dynamic causal models of effective connectivity. This revealed bilinear modulations by happy and sad music with suppression effects on the connectivity from FG and AMY to aSTG. Non-linear dynamic causal modeling showed a suppressive gating effect of aSTG on fusiform–amygdalar connectivity. In conclusion, fusiform to amygdala coupling strength is modulated via feedback through aSTG as region for multisensory integration of emotional material. This mechanism was emotion-specific and more pronounced for sad music. Therefore, soundtrack valences may modulate emotion elicitation in movies by differentially changing preprocessed visual information to the amygdala. PMID:24298171
NASA Technical Reports Server (NTRS)
Fisher, Scott S.
1986-01-01
A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use as a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.
The interactions of multisensory integration with endogenous and exogenous attention
Tang, Xiaoyu; Wu, Jinglong; Shen, Yong
2016-01-01
Stimuli from multiple sensory organs can be integrated into a coherent representation through multiple phases of multisensory processing; this phenomenon is called multisensory integration. Multisensory integration can interact with attention. Here, we propose a framework in which attention modulates multisensory processing in both endogenous (goal-driven) and exogenous (stimulus-driven) ways. Moreover, multisensory integration exerts not only bottom-up but also top-down control over attention. Specifically, we propose the following: (1) endogenous attentional selectivity acts on multiple levels of multisensory processing to determine the extent to which simultaneous stimuli from different modalities can be integrated; (2) integrated multisensory events exert top-down control on attentional capture via multisensory search templates that are stored in the brain; (3) integrated multisensory events can capture attention efficiently, even in quite complex circumstances, due to their increased salience compared to unimodal events and can thus improve search accuracy; and (4) within a multisensory object, endogenous attention can spread from one modality to another in an exogenous manner. PMID:26546734
The interactions of multisensory integration with endogenous and exogenous attention.
Tang, Xiaoyu; Wu, Jinglong; Shen, Yong
2016-02-01
Stimuli from multiple sensory organs can be integrated into a coherent representation through multiple phases of multisensory processing; this phenomenon is called multisensory integration. Multisensory integration can interact with attention. Here, we propose a framework in which attention modulates multisensory processing in both endogenous (goal-driven) and exogenous (stimulus-driven) ways. Moreover, multisensory integration exerts not only bottom-up but also top-down control over attention. Specifically, we propose the following: (1) endogenous attentional selectivity acts on multiple levels of multisensory processing to determine the extent to which simultaneous stimuli from different modalities can be integrated; (2) integrated multisensory events exert top-down control on attentional capture via multisensory search templates that are stored in the brain; (3) integrated multisensory events can capture attention efficiently, even in quite complex circumstances, due to their increased salience compared to unimodal events and can thus improve search accuracy; and (4) within a multisensory object, endogenous attention can spread from one modality to another in an exogenous manner. Copyright © 2015 Elsevier Ltd. All rights reserved.
Saint-Amour, Dave; De Sanctis, Pierfilippo; Molholm, Sophie; Ritter, Walter; Foxe, John J
2007-02-01
Seeing a speaker's facial articulatory gestures powerfully affects speech perception, helping us overcome noisy acoustical environments. One particularly dramatic illustration of visual influences on speech perception is the "McGurk illusion", where dubbing an auditory phoneme onto video of an incongruent articulatory movement can often lead to illusory auditory percepts. This illusion is so strong that even in the absence of any real change in auditory stimulation, it activates the automatic auditory change-detection system, as indexed by the mismatch negativity (MMN) component of the auditory event-related potential (ERP). We investigated the putative left hemispheric dominance of McGurk-MMN using high-density ERPs in an oddball paradigm. Topographic mapping of the initial McGurk-MMN response showed a highly lateralized left hemisphere distribution, beginning at 175 ms. Subsequently, scalp activity was also observed over bilateral fronto-central scalp with a maximal amplitude at approximately 290 ms, suggesting later recruitment of right temporal cortices. Strong left hemisphere dominance was again observed during the last phase of the McGurk-MMN waveform (350-400 ms). Source analysis indicated bilateral sources in the temporal lobe just posterior to primary auditory cortex. While a single source in the right superior temporal gyrus (STG) accounted for the right hemisphere activity, two separate sources were required, one in the left transverse gyrus and the other in STG, to account for left hemisphere activity. These findings support the notion that visually driven multisensory illusory phonetic percepts produce an auditory-MMN cortical response and that left hemisphere temporal cortex plays a crucial role in this process.
Saint-Amour, Dave; De Sanctis, Pierfilippo; Molholm, Sophie; Ritter, Walter; Foxe, John J.
2006-01-01
Seeing a speaker’s facial articulatory gestures powerfully affects speech perception, helping us overcome noisy acoustical environments. One particularly dramatic illustration of visual influences on speech perception is the “McGurk illusion”, where dubbing an auditory phoneme onto video of an incongruent articulatory movement can often lead to illusory auditory percepts. This illusion is so strong that even in the absence of any real change in auditory stimulation, it activates the automatic auditory change-detection system, as indexed by the mismatch negativity (MMN) component of the auditory event-related potential (ERP). We investigated the putative left hemispheric dominance of McGurk-MMN using high-density ERPs in an oddball paradigm. Topographic mapping of the initial McGurk-MMN response showed a highly lateralized left hemisphere distribution, beginning at 175 ms. Subsequently, scalp activity was also observed over bilateral fronto-central scalp with a maximal amplitude at ~290 ms, suggesting later recruitment of right temporal cortices. Strong left hemisphere dominance was again observed during the last phase of the McGurk-MMN waveform (350–400 ms). Source analysis indicated bilateral sources in the temporal lobe just posterior to primary auditory cortex. While a single source in the right superior temporal gyrus (STG) accounted for the right hemisphere activity, two separate sources were required, one in the left transverse gyrus and the other in STG, to account for left hemisphere activity. These findings support the notion that visually driven multisensory illusory phonetic percepts produce an auditory-MMN cortical response and that left hemisphere temporal cortex plays a crucial role in this process. PMID:16757004
Multisensory integration in the basal ganglia.
Nagy, Attila; Eördegh, Gabriella; Paróczy, Zsuzsanna; Márkus, Zita; Benedek, György
2006-08-01
Sensorimotor co-ordination in mammals is achieved predominantly via the activity of the basal ganglia. To investigate the underlying multisensory information processing, we recorded the neuronal responses in the caudate nucleus (CN) and substantia nigra (SN) of anaesthetized cats to visual, auditory or somatosensory stimulation alone and also to their combinations, i.e. multisensory stimuli. The main goal of the study was to ascertain whether multisensory information provides more information to the neurons than do the individual sensory components. A majority of the investigated SN and CN multisensory units exhibited significant cross-modal interactions. The multisensory response enhancements were either additive or superadditive; multisensory response depressions were also detected. CN and SN cells with facilitatory and inhibitory interactions were found in each multisensory combination. The strengths of the multisensory interactions did not differ in the two structures. A significant inverse correlation was found between the strengths of the best unimodal responses and the magnitudes of the multisensory response enhancements, i.e. the neurons with the weakest net unimodal responses exhibited the strongest enhancement effects. The onset latencies of the responses of the integrative CN and SN neurons to the multisensory stimuli were significantly shorter than those to the unimodal stimuli. These results provide evidence that the multisensory CN and SN neurons, similarly to those in the superior colliculus and related structures, have the ability to integrate multisensory information. Multisensory integration may help in the effective processing of sensory events and the changes in the environment during motor actions controlled by the basal ganglia.
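The additive/superadditive classification described above is conventionally based on comparing the multisensory response with the strongest unisensory response and with the sum of the unisensory responses. The sketch below uses hypothetical firing rates only; in practice the classification is made with statistical tests on trial-by-trial responses rather than single numbers.

```python
# Minimal sketch (hypothetical rates, not the recorded data) of the standard
# indices used to classify cross-modal interactions.

def enhancement_index(multi, uni_a, uni_b):
    """Percent enhancement relative to the strongest unisensory response."""
    best_uni = max(uni_a, uni_b)
    return 100.0 * (multi - best_uni) / best_uni

def classify_interaction(multi, uni_a, uni_b):
    """Label the interaction as depression, additive, or superadditive."""
    best_uni = max(uni_a, uni_b)
    summed = uni_a + uni_b
    if multi < best_uni:
        return "response depression"
    if multi > summed:
        return "superadditive enhancement"
    return "additive (sub- to fully additive) enhancement"

# Example: a hypothetical caudate neuron firing at 8 and 5 spikes/s to visual
# and auditory stimuli alone, and 18 spikes/s to the combined stimulus.
print(enhancement_index(18, 8, 5))      # 125.0 (% enhancement)
print(classify_interaction(18, 8, 5))   # superadditive enhancement
```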
Anthro-Centric Multisensory Interface for Vision Augmentation/Substitution
2013-02-01
for human perception of the visual environment. [Figure captions recovered from the report layout: Figure 1 (left), photograph of the Argus™ I and II Retinal Prosthesis System: epiretinal…scleral band (a), the visual processing unit (b), spectacle-mounted miniature camera (c); Figure 3, colour photo of the Argus II epiretinal prosthesis.] …items in the environment. Alternatively, we have also implemented a touch screen mechanism that allows the user to feel the pixels under his or her
The Role of Auditory and Visual Speech in Word Learning at 18 Months and in Adulthood
ERIC Educational Resources Information Center
Havy, Mélanie; Foroud, Afra; Fais, Laurel; Werker, Janet F.
2017-01-01
Visual information influences speech perception in both infants and adults. It is still unknown whether lexical representations are multisensory. To address this question, we exposed 18-month-old infants (n = 32) and adults (n = 32) to new word-object pairings: Participants either heard the acoustic form of the words or saw the talking face in…
Fengler, Ineke; Delfau, Pia-Céline; Röder, Brigitte
2018-04-01
It is yet unclear whether congenitally deaf cochlear implant (CD CI) users' visual and multisensory emotion perception is influenced by their history in sign language acquisition. We hypothesized that early-signing CD CI users, relative to late-signing CD CI users and hearing, non-signing controls, show better facial expression recognition and rely more on the facial cues of audio-visual emotional stimuli. Two groups of young adult CD CI users-early signers (ES CI users; n = 11) and late signers (LS CI users; n = 10)-and a group of hearing, non-signing, age-matched controls (n = 12) performed an emotion recognition task with auditory, visual, and cross-modal emotionally congruent and incongruent speech stimuli. On different trials, participants categorized either the facial or the vocal expressions. The ES CI users more accurately recognized affective prosody than the LS CI users in the presence of congruent facial information. Furthermore, the ES CI users, but not the LS CI users, gained more than the controls from congruent visual stimuli when recognizing affective prosody. Both CI groups performed overall worse than the controls in recognizing affective prosody. These results suggest that early sign language experience affects multisensory emotion perception in CD CI users.
Russo, N; Mottron, L; Burack, J A; Jemel, B
2012-07-01
Individuals with autism spectrum disorders (ASD) report difficulty integrating simultaneously presented visual and auditory stimuli (Iarocci & McDonald, 2006), albeit showing enhanced perceptual processing of unisensory stimuli, as well as an enhanced role of perception in higher-order cognitive tasks (Enhanced Perceptual Functioning (EPF) model; Mottron, Dawson, Soulières, Hubert, & Burack, 2006). Individuals with an ASD also integrate auditory-visual inputs over longer periods of time than matched typically developing (TD) peers (Kwakye, Foss-Feig, Cascio, Stone & Wallace, 2011). To tease apart the dichotomy of both extended multisensory processing and enhanced perceptual processing, we used behavioral and electrophysiological measurements of audio-visual integration among persons with ASD. 13 TD and 14 autistics matched on IQ completed a forced choice multisensory semantic congruence task requiring speeded responses regarding the congruence or incongruence of animal sounds and pictures. Stimuli were presented simultaneously or sequentially at various stimulus onset asynchronies in both auditory first and visual first presentations. No group differences were noted in reaction time (RT) or accuracy. The latency at which congruent and incongruent waveforms diverged was the component of interest. In simultaneous presentations, congruent and incongruent waveforms diverged earlier (circa 150 ms) among persons with ASD than among TD individuals (around 350 ms). In sequential presentations, asymmetries in the timing of neuronal processing were noted in ASD which depended on stimulus order, but these were consistent with the nature of specific perceptual strengths in this group. These findings extend the Enhanced Perceptual Functioning Model to the multisensory domain, and provide a more nuanced context for interpreting ERP findings of impaired semantic processing in ASD. Copyright © 2012 Elsevier Ltd. All rights reserved.
Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity
Gibney, Kyla D.; Aligbe, Enimielen; Eggleston, Brady A.; Nunes, Sarah R.; Kerkhoff, Willa G.; Dean, Cassandra L.; Kwakye, Leslie D.
2017-01-01
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information. PMID:28163675
Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity.
Gibney, Kyla D; Aligbe, Enimielen; Eggleston, Brady A; Nunes, Sarah R; Kerkhoff, Willa G; Dean, Cassandra L; Kwakye, Leslie D
2017-01-01
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller's inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information.
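The race-model analysis mentioned in both versions of this abstract can be made concrete in a few lines of code. The sketch below uses simulated reaction times (all distributions are assumptions, not the study's data) to evaluate Miller's inequality: the multisensory RT CDF is compared with the sum of the unisensory CDFs, and the positive part of the difference is summarized as an area, in the spirit of the geometric measure described above.

```python
# Minimal sketch (simulated RTs) of a race-model (Miller's inequality) test.
import numpy as np

rng = np.random.default_rng(1)
rt_a = rng.normal(420, 60, 300)    # auditory-only RTs (ms), illustrative
rt_v = rng.normal(440, 60, 300)    # visual-only RTs (ms)
rt_av = rng.normal(360, 50, 300)   # audiovisual RTs (ms)

t_grid = np.arange(200, 701, 10)

def cdf(samples, t):
    """Empirical cumulative distribution evaluated on a time grid."""
    return np.mean(samples[:, None] <= t[None, :], axis=0)

# Race-model bound: P(RT_AV <= t) should not exceed P(RT_A <= t) + P(RT_V <= t).
race_bound = np.clip(cdf(rt_a, t_grid) + cdf(rt_v, t_grid), 0, 1)
violation = cdf(rt_av, t_grid) - race_bound

# Area of positive violation (rectangle-rule approximation over the grid).
area = np.clip(violation, 0, None).sum() * (t_grid[1] - t_grid[0])
print(f"max violation: {violation.max():.3f}, violation area: {area:.1f}")
```

Positive values of `violation` indicate responses faster than any race between independent unisensory processes could produce, which is taken as evidence of multisensory integration rather than mere statistical facilitation.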
Virtual workstation - A multimodal, stereoscopic display environment
NASA Astrophysics Data System (ADS)
Fisher, S. S.; McGreevy, M.; Humphries, J.; Robinett, W.
1987-01-01
A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use in a multipurpose interface environment. The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described.
Time perception impairs sensory-motor integration in Parkinson’s disease
2013-01-01
It is well known that perception and estimation of time are fundamental for the relationship between humans and their environment. However, this temporal information processing is inefficient in patients with Parkinson’s disease (PD), resulting in temporal judgment deficits. In general, the pathophysiology of PD has been described as a dysfunction in the basal ganglia, which is a multisensory integration station. Thus, a deficit in the sensorimotor integration process could explain many of the Parkinson symptoms, such as changes in time perception. This physiological distortion may be better understood if we analyze the neurobiological model of interval timing, expressed within the conceptual framework of a traditional information-processing model called “Scalar Expectancy Theory”. Therefore, in this review we discuss the pathophysiology and sensorimotor integration process in PD, the theories and basic neural mechanisms involved in temporal processing, and the main clinical findings about the impact of time perception in PD. PMID:24131660
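The Scalar Expectancy Theory framework referred to above can be illustrated with a pacemaker-accumulator toy model. In the sketch below, the pacemaker rate and memory-noise level are illustrative assumptions; pulses accumulated over an interval are stored with multiplicative memory noise, so the spread of reproduced intervals grows roughly in proportion to the interval's duration, the scalar property that clinical timing studies test.

```python
# Minimal sketch (assumed parameters) of a pacemaker-accumulator with
# multiplicative memory noise, the core of Scalar Expectancy Theory.
import numpy as np

rng = np.random.default_rng(2)
pacemaker_rate = 50.0        # pulses per second (assumed)
memory_noise_cv = 0.15       # multiplicative memory noise (assumed)

def reproduce(interval_s, n_trials=2000):
    """Simulate remembered durations for a given physical interval."""
    pulses = rng.poisson(pacemaker_rate * interval_s, n_trials)
    remembered = pulses * rng.normal(1.0, memory_noise_cv, n_trials)
    return remembered / pacemaker_rate   # convert pulse counts back to seconds

# The coefficient of variation stays roughly constant across intervals,
# i.e. timing error scales with the interval being timed.
for t in (1.0, 2.0, 4.0):
    rep = reproduce(t)
    print(f"{t:.0f} s: mean {rep.mean():.2f} s, CV {rep.std() / rep.mean():.2f}")
```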
Prediction and constraint in audiovisual speech perception
Peelle, Jonathan E.; Sommers, Mitchell S.
2015-01-01
During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. PMID:25890390
A simulator for surgery training: optimal sensory stimuli in a bone pinning simulation
NASA Astrophysics Data System (ADS)
Daenzer, Stefan; Fritzsche, Klaus
2008-03-01
Currently available low-cost haptic devices allow inexpensive surgical training with no risk to patients. Major drawbacks of lower-cost devices include limited maximum feedback force and the inability to render the moments (torques) that occur. The aim of this work was the design and implementation of a surgical simulator that allows the evaluation of multi-sensory stimuli in order to overcome these drawbacks. The simulator was built following a modular architecture to allow flexible combinations and thorough evaluation of different multi-sensory feedback modules. A Kirschner-Wire (K-Wire) tibial fracture fixation procedure was defined and implemented as a first test scenario. A set of computational metrics was derived from the clinical requirements of the task to objectively assess the trainee's performance during simulation. Sensory feedback modules for haptic and visual feedback have been developed, each in a basic and an enhanced form. First tests have shown that specific visual concepts can overcome some of the drawbacks associated with low-cost haptic devices. The simulator, the metrics and the surgery scenario together represent an important step towards a better understanding of the perception of multi-sensory feedback in complex surgical training tasks. Field studies built on this architecture can open the way to risk-free and inexpensive surgical simulations that can keep up with traditional surgical training.
Preston, Catherine; Ehrsson, H. Henrik
2014-01-01
Historically, body size overestimation has been linked to abnormal levels of body dissatisfaction found in eating disorders. However, recently this relationship has been called into question. Indeed, despite a link between how we perceive and how we feel about our body seeming intuitive, until now lack of an experimental method to manipulate body size has meant that a causal link, even in healthy participants, has remained elusive. Recent developments in body perception research demonstrate that the perceptual experience of the body can be readily manipulated using multisensory illusions. The current study exploits such illusions to modulate perceived body size in an attempt to influence body satisfaction. Participants were presented with stereoscopic video images of slimmer and wider mannequin bodies viewed through head-mounted displays from first person perspective. Illusory ownership was induced by synchronously stroking the seen mannequin body with the unseen real body. Pre and post-illusion affective and perceptual measures captured changes in perceived body size and body satisfaction. Illusory ownership of a slimmer body resulted in participants perceiving their actual body as slimmer and giving higher ratings of body satisfaction demonstrating a direct link between perceptual and affective body representations. Change in body satisfaction following illusory ownership of a wider body, however, was related to degree of (non-clinical) eating disorder psychopathology, which can be linked to fluctuating body representations found in clinical samples. The results suggest that body perception is linked to body satisfaction and may be of importance for eating disorder symptomology. PMID:24465698
Modeling the Perception of Audiovisual Distance: Bayesian Causal Inference and Other Models
2016-01-01
Studies of audiovisual perception of distance are rare. Here, visual and auditory cue interactions in distance are tested against several multisensory models, including a modified causal inference model. In this causal inference model, predictions of the estimate distributions are included. In our study, the audiovisual perception of distance was overall better explained by Bayesian causal inference than by other traditional models, such as sensory dominance and mandatory integration, and no interaction. Causal inference resolved with probability matching yielded the best fit to the data. Finally, we propose that sensory weights can also be estimated from causal inference. The analysis of the sensory weights allows us to obtain windows within which there is an interaction between the audiovisual stimuli. We find that the visual stimulus always contributes more than 80% to the perception of visual distance. The visual stimulus also contributes more than 50% to the perception of auditory distance, but only within a mobile window of interaction, which ranges from 1 to 4 m. PMID:27959919
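The causal inference model with probability matching that best fit these data follows the general structure of Bayesian causal inference for cue combination. The sketch below is a generic illustration of that structure, not the authors' fitted model; the sensory noise levels, prior over distance, and prior probability of a common cause are all assumptions.

```python
# Minimal sketch (assumed parameters) of Bayesian causal inference for
# audiovisual distance, resolved with probability matching.
import numpy as np

rng = np.random.default_rng(3)
sigma_v, sigma_a = 0.3, 1.0     # sensory noise in metres; vision more reliable (assumed)
sigma_p, mu_p = 3.0, 2.0        # Gaussian prior over distance (assumed)
p_common = 0.5                  # prior probability of a common cause (assumed)

def gauss(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def estimate_auditory_distance(x_v, x_a):
    # Likelihood of the two measurements under a common cause (C=1) and under
    # independent causes (C=2), with the true distance(s) integrated out.
    var_c1 = sigma_v**2 * sigma_a**2 + sigma_v**2 * sigma_p**2 + sigma_a**2 * sigma_p**2
    like_c1 = np.exp(-((x_v - x_a)**2 * sigma_p**2
                       + (x_v - mu_p)**2 * sigma_a**2
                       + (x_a - mu_p)**2 * sigma_v**2) / (2 * var_c1)) \
              / (2 * np.pi * np.sqrt(var_c1))
    like_c2 = gauss(x_v, mu_p, sigma_v**2 + sigma_p**2) \
            * gauss(x_a, mu_p, sigma_a**2 + sigma_p**2)
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Optimal estimates under each causal structure (precision-weighted means).
    w = np.array([1 / sigma_v**2, 1 / sigma_a**2, 1 / sigma_p**2])
    est_c1 = (w[0] * x_v + w[1] * x_a + w[2] * mu_p) / w.sum()
    est_c2 = (w[1] * x_a + w[2] * mu_p) / (w[1] + w[2])

    # Probability matching: commit to one causal structure with probability post_c1.
    return est_c1 if rng.random() < post_c1 else est_c2

print(estimate_auditory_distance(x_v=2.0, x_a=3.5))
```

Replacing the final probability-matching step with a posterior-weighted average of the two estimates gives the model-averaging variant against which probability matching is typically compared.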
A multisensory perspective of working memory
Quak, Michel; London, Raquel Elea; Talsma, Durk
2015-01-01
Although our sensory experience is mostly multisensory in nature, research on working memory representations has focused mainly on examining the senses in isolation. Results from the multisensory processing literature make it clear that the senses interact on a more intimate manner than previously assumed. These interactions raise questions regarding the manner in which multisensory information is maintained in working memory. We discuss the current status of research on multisensory processing and the implications of these findings on our theoretical understanding of working memory. To do so, we focus on reviewing working memory research conducted from a multisensory perspective, and discuss the relation between working memory, attention, and multisensory processing in the context of the predictive coding framework. We argue that a multisensory approach to the study of working memory is indispensable to achieve a realistic understanding of how working memory processes maintain and manipulate information. PMID:25954176
Multisensory Stimulation Can Induce an Illusion of Larger Belly Size in Immersive Virtual Reality
Normand, Jean-Marie; Giannopoulos, Elias; Spanlang, Bernhard; Slater, Mel
2011-01-01
Background Body change illusions have been of great interest in recent years for the understanding of how the brain represents the body. Appropriate multisensory stimulation can induce an illusion of ownership over a rubber or virtual arm, simple types of out-of-the-body experiences, and even ownership with respect to an alternate whole body. Here we use immersive virtual reality to investigate whether the illusion of a dramatic increase in belly size can be induced in males through (a) first person perspective position (b) synchronous visual-motor correlation between real and virtual arm movements, and (c) self-induced synchronous visual-tactile stimulation in the stomach area. Methodology Twenty two participants entered into a virtual reality (VR) delivered through a stereo head-tracked wide field-of-view head-mounted display. They saw from a first person perspective a virtual body substituting their own that had an inflated belly. For four minutes they repeatedly prodded their real belly with a rod that had a virtual counterpart that they saw in the VR. There was a synchronous condition where their prodding movements were synchronous with what they felt and saw and an asynchronous condition where this was not the case. The experiment was repeated twice for each participant in counter-balanced order. Responses were measured by questionnaire, and also a comparison of before and after self-estimates of belly size produced by direct visual manipulation of the virtual body seen from the first person perspective. Conclusions The results show that first person perspective of a virtual body that substitutes for the own body in virtual reality, together with synchronous multisensory stimulation can temporarily produce changes in body representation towards the larger belly size. This was demonstrated by (a) questionnaire results, (b) the difference between the self-estimated belly size, judged from a first person perspective, after and before the experimental manipulation, and (c) significant positive correlations between these two measures. We discuss this result in the general context of body ownership illusions, and suggest applications including treatment for body size distortion illnesses. PMID:21283823
Slow changing postural cues cancel visual field dependence on self-tilt detection.
Scotto Di Cesare, C; Macaluso, T; Mestre, D R; Bringoux, L
2015-01-01
Interindividual differences influence the multisensory integration process involved in spatial perception. Here, we assessed the effect of visual field dependence on self-tilt detection relative to upright, as a function of static vs. slow changing visual or postural cues. To that aim, we manipulated slow rotations (i.e., 0.05° s(-1)) of the body and/or the visual scene in pitch. Participants had to indicate whether they felt being tilted forward at successive angles. Results show that thresholds for self-tilt detection substantially differed between visual field dependent/independent subjects, when only the visual scene was rotated. This difference was no longer present when the body was actually rotated, whatever the visual scene condition (i.e., absent, static or rotated relative to the observer). These results suggest that the cancellation of visual field dependence by dynamic postural cues may rely on a multisensory reweighting process, where slow changing vestibular/somatosensory inputs may prevail over visual inputs. Copyright © 2014 Elsevier B.V. All rights reserved.
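The self-tilt detection thresholds compared across visual-field-dependent and -independent observers are the kind of quantity usually obtained by fitting a psychometric function to yes/no reports over successive tilt angles. A minimal sketch of such a fit is shown below; the angles and response proportions are made up for illustration and the fitted parameters are not the study's values.

```python
# Minimal sketch: cumulative-Gaussian psychometric fit to "tilted forward" reports.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

angles = np.array([0, 2, 4, 6, 8, 10, 12])                        # deg of forward tilt (assumed)
p_tilted = np.array([0.02, 0.08, 0.25, 0.55, 0.80, 0.95, 0.99])   # illustrative proportions

def psychometric(x, mu, sigma):
    """Cumulative Gaussian: mu is the 50% detection point, sigma the slope."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu_hat, sigma_hat), _ = curve_fit(psychometric, angles, p_tilted, p0=[6.0, 2.0])
print(f"detection threshold (50% point): {mu_hat:.1f} deg, slope sigma: {sigma_hat:.1f} deg")
```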
Walsh, E; Guilmette, D N; Longo, M R; Moore, J W; Oakley, D A; Halligan, P W; Mehta, M A; Deeley, Q
2015-01-01
Hypnotic suggestibility (HS) is the ability to respond automatically to suggestions and to experience alterations in perception and behavior. Hypnotically suggestible participants are also better able to focus and sustain their attention on an experimental stimulus. The present study explores the relation between HS and susceptibility to the rubber hand illusion (RHI). Based on previous research with visual illusions, it was predicted that higher HS would lead to a stronger RHI. Two behavioral output measures of the RHI, an implicit (proprioceptive drift) and an explicit (RHI questionnaire) measure, were correlated against HS scores. Hypnotic suggestibility correlated positively with the implicit RHI measure contributing to 30% of the variation. However, there was no relation between HS and the explicit RHI questionnaire measure, or with compliance control items. High hypnotic suggestibility may facilitate, via attentional mechanisms, the multisensory integration of visuoproprioceptive inputs that leads to greater perceptual mislocalization of a participant's hand. These results may provide insight into the multisensory brain mechanisms involved in our sense of embodiment.
Panichi, Roberto; Botti, Fabio Massimo; Ferraresi, Aldo; Faralli, Mario; Kyriakareli, Artemis; Schieppati, Marco; Pettorossi, Vito Enrico
2011-04-01
Self-motion perception and vestibulo-ocular reflex (VOR) were studied during whole body yaw rotation in the dark at different static head positions. Rotations consisted of four cycles of symmetric sinusoidal and asymmetric oscillations. Self-motion perception was evaluated by measuring the ability of subjects to manually track a static remembered target. VOR was recorded separately and the slow phase eye position (SPEP) was computed. Three different head static yaw deviations (active and passive) relative to the trunk (0°, 45° to right and 45° to left) were examined. Active head deviations had a significant effect during asymmetric oscillation: the movement perception was enhanced when the head was kept turned toward the side of body rotation and decreased in the opposite direction. Conversely, passive head deviations had no effect on movement perception. Further, vibration (100 Hz) of the neck muscles splenius capitis and sternocleidomastoideus remarkably influenced perceived rotation during asymmetric oscillation. On the other hand, SPEP of VOR was modulated by active head deviation, but was not influenced by neck muscle vibration. Through its effects on motion perception and reflex gain, head position improved gaze stability and enhanced self-motion perception in the direction of the head deviation. Copyright © 2010 Elsevier B.V. All rights reserved.
Self-motion Perception Training: Thresholds Improve in the Light but not in the Dark
Hartmann, Matthias; Furrer, Sarah; Herzog, Michael H.; Merfeld, Daniel M.; Mast, Fred W.
2014-01-01
We investigated perceptual learning in self-motion perception. Blindfolded participants were displaced leftward or rightward by means of a motion platform, and asked to indicate the direction of motion. A total of eleven participants underwent 3360 practice trials, distributed over twelve (Experiment 1) or six days (Experiment 2). We found no improvement in motion discrimination in both experiments. These results are surprising since perceptual learning has been demonstrated for visual, auditory, and somatosensory discrimination. Improvements in the same task were found when visual input was provided (Experiment 3). The multisensory nature of vestibular information is discussed as a possible explanation of the absence of perceptual learning in darkness. PMID:23392475
Cross-modal tactile-taste interactions in food evaluations
Slocombe, B. G.; Carmichael, D.A.; Simner, J.
2016-01-01
Detecting the taste components within a flavoured substance relies on exposing chemoreceptors within the mouth to the chemical components of ingested food. In our paper, we show that the evaluation of taste components can also be influenced by the tactile quality of the food. We first discuss how multisensory factors might influence taste, flavour and smell for both typical and atypical (synaesthetic) populations and we then present two empirical studies showing tactile-taste interactions in the general population. We asked a group of average adults to evaluate the taste components of flavoured food substances, whilst we presented simultaneous cross-sensory visuo-tactile cues within the eating environment. Specifically, we presented foodstuffs between subjects that were otherwise identical but had a rough versus smooth surface, or were served on a rough versus smooth serving-plate. We found no effect of the serving-plate, but we found the rough/smoothness of the foodstuff itself significantly influenced perception: food was rated as significantly more sour if it had a rough (vs. smooth) surface. In modifying taste perception via ostensibly unrelated dimensions, we demonstrate that the detection of tastes within flavours may be influenced by higher level cross-sensory cues. Finally, we suggest that the direction of our cross-sensory associations may speak to the types of hedonic mapping found both in normal multisensory integration, and in the unusual condition of synaesthesia. PMID:26169315
Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration.
Ikumi, Nara; Soto-Faraco, Salvador
2016-01-01
Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping rather than segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands.
Temporal factors affecting somatosensory–auditory interactions in speech processing
Ito, Takayuki; Gracco, Vincent L.; Ostry, David J.
2014-01-01
Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has been shown also to influence speech perceptual processing (Ito et al., 2009). In the present study, we addressed further the relationship between somatosensory information and speech perceptual processing by addressing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory–auditory interaction in speech perception. We examined the changes in event-related potentials (ERPs) in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the ERP was reliably different from the two unisensory potentials. More importantly, the magnitude of the ERP difference varied as a function of the relative timing of the somatosensory–auditory stimulation. Event-related activity change due to stimulus timing was seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory–auditory convergence and suggest the contribution of somatosensory information for speech processing process is dependent on the specific temporal order of sensory inputs in speech production. PMID:25452733
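The comparison underlying these ERP results, the multisensory response versus the sum of the unisensory responses, can be expressed compactly. The sketch below uses toy Gaussian-shaped waveforms rather than the recorded EEG; the sampling rate, component latencies, and amplitudes are assumptions chosen only to illustrate the additive difference wave and its summary in the 160-220 ms window reported above.

```python
# Minimal sketch (simulated waveforms) of the additive ERP comparison:
# multisensory response vs. the sum of the two unisensory responses.
import numpy as np

fs = 500                                    # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)            # time relative to stimulus onset (s)

def erp(peak_time, amp):
    """Toy ERP component: a Gaussian-shaped deflection."""
    return amp * np.exp(-(t - peak_time) ** 2 / (2 * 0.03 ** 2))

erp_somato = erp(0.17, 2.0)                 # somatosensory-only (illustrative)
erp_audio = erp(0.19, 3.0)                  # auditory-only (illustrative)
erp_multi = erp(0.18, 4.2)                  # combined response, sub-additive here

# Additive model: any nonzero difference reflects a multisensory interaction.
difference = erp_multi - (erp_somato + erp_audio)

# Summarize the interaction in the 160-220 ms window after stimulus onset.
win = (t >= 0.16) & (t <= 0.22)
print(f"mean interaction amplitude, 160-220 ms: {difference[win].mean():.2f} (arbitrary units)")
```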
Filling-in visual motion with sounds.
Väljamäe, A; Soto-Faraco, S
2008-10-01
Information about the motion of objects can be extracted by multiple sensory modalities, and, as a consequence, object motion perception typically involves the integration of multi-sensory information. Often, in naturalistic settings, the flow of such information can be rather discontinuous (e.g. a cat racing through the furniture in a cluttered room is partly seen and partly heard). This study addressed audio-visual interactions in the perception of time-sampled object motion by measuring adaptation after-effects. We found significant auditory after-effects following adaptation to unisensory auditory and visual motion in depth, sampled at 12.5 Hz. The visually induced (cross-modal) auditory motion after-effect was eliminated if visual adaptors flashed at half of the rate (6.25 Hz). Remarkably, the addition of the high-rate acoustic flutter (12.5 Hz) to this ineffective, sparsely time-sampled, visual adaptor restored the auditory after-effect to a level comparable to what was seen with high-rate bimodal adaptors (flashes and beeps). Our results suggest that this auditory-induced reinstatement of the motion after-effect from the poor visual signals resulted from the occurrence of sound-induced illusory flashes. This effect was found to be dependent both on the directional congruency between modalities and on the rate of auditory flutter. The auditory filling-in of time-sampled visual motion supports the feasibility of using reduced frame rate visual content in multisensory broadcasting and virtual reality applications.
Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration
Ikumi, Nara; Soto-Faraco, Salvador
2017-01-01
Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping than segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands. PMID:28154529
Incidental category learning and cognitive load in a multisensory environment across childhood.
Broadbent, H J; Osborne, T; Rea, M; Peng, A; Mareschal, D; Kirkham, N Z
2018-06-01
Multisensory information has been shown to facilitate learning (Bahrick & Lickliter, 2000; Broadbent, White, Mareschal, & Kirkham, 2017; Jordan & Baker, 2011; Shams & Seitz, 2008). However, although research has examined the modulating effect of unisensory and multisensory distractors on multisensory processing, the extent to which a concurrent unisensory or multisensory cognitive load task would interfere with or support multisensory learning remains unclear. This study examined the role of concurrent task modality on incidental category learning in 6- to 10-year-olds. Participants were engaged in a multisensory learning task while also performing either a unisensory (visual or auditory only) or multisensory (audiovisual) concurrent task (CT). We found that engaging in an auditory CT led to poorer performance on incidental category learning compared with an audiovisual or visual CT, across groups. In 6-year-olds, category test performance was at chance in the auditory-only CT condition, suggesting auditory concurrent tasks may interfere with learning in younger children, but the addition of visual information may serve to focus attention. These findings provide novel insight into the use of multisensory concurrent information on incidental learning. Implications for the deployment of multisensory learning tasks within education across development and developmental changes in modality dominance and ability to switch flexibly across modalities are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Virtual interface environment workstations
NASA Technical Reports Server (NTRS)
Fisher, S. S.; Wenzel, E. M.; Coler, C.; McGreevy, M. W.
1988-01-01
A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed at NASA's Ames Research Center for use as a multipurpose interface environment. This Virtual Interface Environment Workstation (VIEW) system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, research scenarios, and research directions are described.
Influence of visual path information on human heading perception during rotation.
Li, Li; Chen, Jing; Peng, Xiaozhe
2009-03-31
How does visual path information influence people's perception of their instantaneous direction of self-motion (heading)? We have previously shown that humans can perceive heading without direct access to visual path information. Here we vary two key parameters for estimating heading from optic flow, the field of view (FOV) and the depth range of environmental points, to investigate the conditions under which visual path information influences human heading perception. The display simulated an observer traveling on a circular path. Observers used a joystick to rotate their line of sight until deemed aligned with true heading. Four FOV sizes (110 x 94 degrees, 48 x 41 degrees, 16 x 14 degrees, 8 x 7 degrees) and depth ranges (6-50 m, 6-25 m, 6-12.5 m, 6-9 m) were tested. Consistent with our computational modeling results, heading bias increased with the reduction of FOV or depth range when the display provided a sequence of velocity fields but no direct path information. When the display provided path information, heading bias was not influenced as much by the reduction of FOV or depth range. We conclude that human heading and path perception involve separate visual processes. Path helps heading perception when the display does not contain enough optic-flow information for heading estimation during rotation.
Incidental Category Learning and Cognitive Load in a Multisensory Environment across Childhood
ERIC Educational Resources Information Center
Broadbent, H. J.; Osborne, T.; Rea, M.; Peng, A.; Mareschal, D.; Kirkham, N. Z.
2018-01-01
Multisensory information has been shown to facilitate learning (Bahrick & Lickliter, 2000; Broadbent, White, Mareschal, & Kirkham, 2017; Jordan & Baker, 2011; Shams & Seitz, 2008). However, although research has examined the modulating effect of unisensory and multisensory distractors on multisensory processing, the extent to which…
Smith, Nicholas A.; Folland, Nicholas A.; Martinez, Diana M.; Trainor, Laurel J.
2017-01-01
Infants learn to use auditory and visual information to organize the sensory world into identifiable objects with particular locations. Here we use a behavioural method to examine infants' use of harmonicity cues to auditory object perception in a multisensory context. Sounds emitted by different objects sum in the air and the auditory system must figure out which parts of the complex waveform belong to different sources (auditory objects). One important cue to this source separation is that complex tones with pitch typically contain a fundamental frequency and harmonics at integer multiples of the fundamental. Consequently, adults hear a mistuned harmonic in a complex sound as a distinct auditory object (Alain et al., 2003). Previous work by our group demonstrated that 4-month-old infants are also sensitive to this cue. They behaviourally discriminate a complex tone with a mistuned harmonic from the same complex with in-tune harmonics, and show an object-related event-related potential (ERP) electrophysiological (EEG) response to the stimulus with mistuned harmonics. In the present study we use an audiovisual procedure to investigate whether infants perceive a complex tone with an 8% mistuned harmonic as emanating from two objects, rather than merely detecting the mistuned cue. We paired in-tune and mistuned complex tones with visual displays that contained either one or two bouncing balls. Four-month-old infants showed surprise at the incongruous pairings, looking longer at the display of two balls when paired with the in-tune complex and at the display of one ball when paired with the mistuned harmonic complex. We conclude that infants use harmonicity as a cue for source separation when integrating auditory and visual information in object perception. PMID:28346869
Prediction and constraint in audiovisual speech perception.
Peelle, Jonathan E; Sommers, Mitchell S
2015-07-01
During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
Characteristic symptoms and associated features of exploding head syndrome in undergraduates.
Sharpless, Brian A
2018-03-01
Background: Exploding head syndrome (EHS) is characterized by loud noises or a sense of explosion in the head during sleep transitions. Though relatively common, little is known about its characteristic symptoms or associated features. Methods: A cross-sectional study of 49 undergraduates with EHS was performed. A clinical interview established diagnosis. Results: The most common accompanying symptoms were tachycardia, fear, and muscle jerks/twitches, with the most severe associated with respiration difficulties. Visual phenomena were more common than expected (27%). EHS episodes were perceived as having a random course, but were most likely to occur during wake-sleep transitions and when sleeping in a supine position. Only 11% reported EHS to a professional, and 8% of those with recurrent EHS attempted to prevent episodes. Conclusions: EHS episodes are complex (Mean (M) = 4.5 additional symptoms), often multisensorial, and usually associated with clinically-significant fear. They are rarely reported to professionals and treatment approaches are limited.
Convergence of multimodal sensory pathways to the mushroom body calyx in Drosophila melanogaster
Yagi, Ryosuke; Mabuchi, Yuta; Mizunami, Makoto; Tanaka, Nobuaki K.
2016-01-01
Detailed structural analyses of the mushroom body which plays critical roles in olfactory learning and memory revealed that it is directly connected with multiple primary sensory centers in Drosophila. Connectivity patterns between the mushroom body and primary sensory centers suggest that each mushroom body lobe processes information on different combinations of multiple sensory modalities. This finding provides a novel focus of research by Drosophila genetics for perception of the external world by integrating multisensory signals. PMID:27404960
Reduced orienting to audiovisual synchrony in infancy predicts autism diagnosis at 3 years of age.
Falck-Ytter, Terje; Nyström, Pär; Gredebäck, Gustaf; Gliga, Teodora; Bölte, Sven
2018-01-23
Effective multisensory processing develops in infancy and is thought to be important for the perception of unified and multimodal objects and events. Previous research suggests impaired multisensory processing in autism, but its role in the early development of the disorder is yet uncertain. Here, using a prospective longitudinal design, we tested whether reduced visual attention to audiovisual synchrony is an infant marker of later-emerging autism diagnosis. We studied 10-month-old siblings of children with autism using an eye tracking task previously used in studies of preschoolers. The task assessed the effect of manipulations of audiovisual synchrony on viewing patterns while the infants were observing point light displays of biological motion. We analyzed the gaze data recorded in infancy according to diagnostic status at 3 years of age (DSM-5). Ten-month-old infants who later received an autism diagnosis did not orient to audiovisual synchrony expressed within biological motion. In contrast, both infants at low-risk and high-risk siblings without autism at follow-up had a strong preference for this type of information. No group differences were observed in terms of orienting to upright biological motion. This study suggests that reduced orienting to audiovisual synchrony within biological motion is an early sign of autism. The findings support the view that poor multisensory processing could be an important antecedent marker of this neurodevelopmental condition. © 2018 Association for Child and Adolescent Mental Health.
Ownership of an artificial limb induced by electrical brain stimulation
Collins, Kelly L.; Cronin, Jeneva; Olson, Jared D.; Ehrsson, H. Henrik; Ojemann, Jeffrey G.
2017-01-01
Replacing the function of a missing or paralyzed limb with a prosthetic device that acts and feels like one’s own limb is a major goal in applied neuroscience. Recent studies in nonhuman primates have shown that motor control and sensory feedback can be achieved by connecting sensors in a robotic arm to electrodes implanted in the brain. However, it remains unknown whether electrical brain stimulation can be used to create a sense of ownership of an artificial limb. In this study on two human subjects, we show that ownership of an artificial hand can be induced via the electrical stimulation of the hand section of the somatosensory (SI) cortex in synchrony with touches applied to a rubber hand. Importantly, the illusion was not elicited when the electrical stimulation was delivered asynchronously or to a portion of the SI cortex representing a body part other than the hand, suggesting that multisensory integration according to basic spatial and temporal congruence rules is the underlying mechanism of the illusion. These findings show that the brain is capable of integrating “natural” visual input and direct cortical-somatosensory stimulation to create the multisensory perception that an artificial limb belongs to one’s own body. Thus, they serve as a proof of concept that electrical brain stimulation can be used to “bypass” the peripheral nervous system to induce multisensory illusions and ownership of artificial body parts, which has important implications for patients who lack peripheral sensory input due to spinal cord or nerve lesions. PMID:27994147
Saidi, Maryam; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Lari, Abdolaziz Azizi
2013-12-01
Humans perceive the surrounding world by integrating information from different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian and causal Bayesian inference for a single cause (source) and two causes (for two senses such as the visual and auditory systems), respectively. In this paper a new recurrent neural model is presented for the integration of visual and proprioceptive information. This model is based on population coding, which is able to mimic multisensory integration in neural centers of the human brain. The simulation results agree with those achieved by causal Bayesian inference. The model can also simulate the sensory training process of visual and proprioceptive information in humans. The training process in multisensory integration has received little attention in the literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effect of both modalities, i.e., visual and proprioceptive training, and compares them with each other through a set of new experiments. In these experiments, the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioceptive training and eight subjects with visual training. Results of the experiments show three important points: (1) the visual learning rate is significantly higher than that of proprioception; (2) means of visual and proprioceptive errors are decreased by training, but statistical analysis shows that this decrement is significant for proprioceptive error and non-significant for visual error; and (3) visual errors in the training phase, even at its beginning, are much smaller than errors in the main test stage because, in the main test, the subject has to focus on two senses. The results of the experiments in this paper are in agreement with the results of the neural model simulation.
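As a point of reference for the Bayesian benchmarks mentioned in the abstract above, the following is a minimal sketch, not the authors' recurrent population-coding model, of standard single-cause reliability-weighted fusion of a visual and a proprioceptive estimate of hand position; all numerical values are illustrative assumptions rather than data from the study.

import numpy as np

def fuse(visual_est, visual_sd, proprio_est, proprio_sd):
    """Reliability-weighted (inverse-variance) combination of two estimates."""
    w_vis = proprio_sd**2 / (visual_sd**2 + proprio_sd**2)   # weight on vision
    fused = w_vis * visual_est + (1.0 - w_vis) * proprio_est
    fused_sd = np.sqrt((visual_sd**2 * proprio_sd**2) /
                       (visual_sd**2 + proprio_sd**2))
    return fused, fused_sd

# Example: vision is more reliable, so the fused estimate lies closer to it
# and is less variable than either cue alone.
print(fuse(visual_est=10.0, visual_sd=1.0, proprio_est=14.0, proprio_sd=2.0))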
Creating Multisensory Environments: Practical Ideas for Teaching and Learning. David Fulton/Nasen
ERIC Educational Resources Information Center
Davies, Christopher
2011-01-01
Multi-sensory environments in the classroom provide a wealth of stimulating learning experiences for all young children whose senses are still under development. "Creating Multisensory Environments: Practical Ideas for Teaching and Learning" is a highly practical guide to low-cost, easy-to-assemble multi-sensory environments. With a…
Sensory dominance and multisensory integration as screening tools in aging.
Murray, Micah M; Eardley, Alison F; Edginton, Trudi; Oyekan, Rebecca; Smyth, Emily; Matusz, Pawel J
2018-06-11
Multisensory information typically confers neural and behavioural advantages over unisensory information. We used a simple audio-visual detection task to compare healthy young (HY), healthy older (HO) and mild-cognitive impairment (MCI) individuals. Neuropsychological tests assessed individuals' learning and memory impairments. First, we provide much-needed clarification regarding the presence of enhanced multisensory benefits in both healthily and abnormally aging individuals. The pattern of sensory dominance shifted with healthy and abnormal aging to favour a propensity of auditory-dominant behaviour (i.e., detecting sounds faster than flashes). Notably, multisensory benefits were larger only in healthy older than younger individuals who were also visually-dominant. Second, we demonstrate that the multisensory detection task offers benefits as a time- and resource-economic MCI screening tool. Receiver operating characteristic (ROC) analysis demonstrated that MCI diagnosis could be reliably achieved based on the combination of indices of multisensory integration together with indices of sensory dominance. Our findings showcase the importance of sensory profiles in determining multisensory benefits in healthy and abnormal aging. Crucially, our findings open an exciting possibility for multisensory detection tasks to be used as a cost-effective screening tool. These findings clarify relationships between multisensory and memory functions in aging, while offering new avenues for improved dementia diagnostics.
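The screening claim above rests on a receiver operating characteristic analysis. A small illustrative sketch of that step follows (made-up index values, not data from the study): the area under the ROC curve for separating MCI from healthy older participants is computed directly from the two score distributions via the Mann-Whitney relationship.

import numpy as np

def auc(scores_patients, scores_controls):
    """Probability that a randomly chosen patient scores higher than a control."""
    p = np.asarray(scores_patients, dtype=float)[:, None]
    c = np.asarray(scores_controls, dtype=float)[None, :]
    return float(np.mean(p > c) + 0.5 * np.mean(p == c))

mci = [0.8, 0.7, 0.9, 0.6, 0.75]          # hypothetical combined multisensory indices
healthy = [0.4, 0.65, 0.3, 0.72, 0.45]    # hypothetical healthy-older indices
print("AUC:", auc(mci, healthy))          # ~0.88 for these illustrative numbers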
The High School Department Head: Powerful or Powerless in Guiding Change?
ERIC Educational Resources Information Center
Hord, Shirley M.; Murphy, Sheila C.
This report, one of four studies on roles of participants in high school change, presents data about activities of department heads in 30 schools throughout the nation. The report analyzes background research on the subject as well as popular perceptions, perceptions of teachers and administrators, and perceptions of department heads themselves…
Emotional voice and emotional body postures influence each other independently of visual awareness.
Stienen, Bernard M C; Tanaka, Akihiro; de Gelder, Beatrice
2011-01-01
Multisensory integration may occur independently of visual attention as previously shown with compound face-voice stimuli. We investigated in two experiments whether the perception of whole body expressions and the perception of voices influence each other when observers are not aware of seeing the bodily expression. In the first experiment participants categorized masked happy and angry bodily expressions while ignoring congruent or incongruent emotional voices. The onset asynchrony between target and mask varied from -50 to +133 ms. Results show that the congruency between the emotion in the voice and the bodily expressions influences audiovisual perception independently of the visibility of the stimuli. In the second experiment participants categorized the emotional voices combined with masked bodily expressions as fearful or happy. This experiment showed that bodily expressions presented outside visual awareness still influence prosody perception. Our experiments show that audiovisual integration between bodily expressions and affective prosody can take place outside of, and independently of, visual awareness.
Competitive Dynamics in MSTd: A Mechanism for Robust Heading Perception Based on Optic Flow
Layton, Oliver W.; Fajen, Brett R.
2016-01-01
Human heading perception based on optic flow is not only accurate, it is also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning that is similar to the other models, except that the model includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model’s heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes to the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of perception of heading. PMID:27341686
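A toy soft winner-take-all sketch in the spirit of the recurrent dynamics described above (this is not the published V1-MT-MSTd model): heading-tuned units receive feedforward optic-flow drive plus a normalized recurrent bonus that favours whichever units are already most active, so the population estimate resists a transient perturbation such as flow from a moving object. Tuning widths, gains and time constants are arbitrary assumptions.

import numpy as np

prefs = np.linspace(-180, 175, 72)                 # preferred headings (deg)

def tuning(heading, width=30.0):
    d = np.abs(prefs - heading)
    d = np.minimum(d, 360 - d)                      # circular distance
    return np.exp(-d**2 / (2 * width**2))

def step(r, feedforward, dt=0.05, tau=0.5, gain=4.0):
    soft_wta = np.exp(gain * r) / np.exp(gain * r).sum()   # recurrent competition
    return r + (dt / tau) * (-r + feedforward + soft_wta)

r = np.zeros_like(prefs)
for t in range(400):
    ff = tuning(0.0)                                # self-motion toward 0 deg
    if 150 <= t < 200:                              # transient flow from a moving object
        ff = ff + 0.8 * tuning(40.0)
    r = step(r, ff)

print("decoded heading (deg):", prefs[np.argmax(r)])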
The influence of brewing water characteristic on sensory perception of pour-over local coffee
NASA Astrophysics Data System (ADS)
Fibrianto, K.; Ardianti, A. D.; Pradipta, K.; Sunarharum, W. B.
2018-01-01
Coffee quality can be characterized by its multisensory perception. The mineral composition and other dissolved substances in brewing water can affect the resulting brew; the water may influence extraction and flavor clarity. Ground Dampit coffee and two commercial instant coffees were brewed with the pour-over method in this study. Various types of commercial drinking water were used to brew the coffee. The results suggest that different brewing waters affect the intensity of sweet and chocolate aroma, as well as oily mouth-feel. Surprisingly, taste and flavor attributes were not affected by the pH of the brewing water within the range of 5.5 to 9.1.
Spatial heterogeneity of cortical receptive fields and its impact on multisensory interactions.
Carriere, Brian N; Royal, David W; Wallace, Mark T
2008-05-01
Investigations of multisensory processing at the level of the single neuron have illustrated the importance of the spatial and temporal relationship of the paired stimuli and their relative effectiveness in determining the product of the resultant interaction. Although these principles provide a good first-order description of the interactive process, they were derived by treating space, time, and effectiveness as independent factors. In the anterior ectosylvian sulcus (AES) of the cat, previous work hinted that the spatial receptive field (SRF) architecture of multisensory neurons might play an important role in multisensory processing due to differences in the vigor of responses to identical stimuli placed at different locations within the SRF. In this study the impact of SRF architecture on cortical multisensory processing was investigated using semichronic single-unit electrophysiological experiments targeting a multisensory domain of the cat AES. The visual and auditory SRFs of AES multisensory neurons exhibited striking response heterogeneity, with SRF architecture appearing to play a major role in the multisensory interactions. The deterministic role of SRF architecture was tightly coupled to the manner in which stimulus location modulated the responsiveness of the neuron. Thus multisensory stimulus combinations at weakly effective locations within the SRF resulted in large (often superadditive) response enhancements, whereas combinations at more effective spatial locations resulted in smaller (additive/subadditive) interactions. These results provide important insights into the spatial organization and processing capabilities of cortical multisensory neurons, features that may provide important clues as to the functional roles played by this area in spatially directed perceptual processes.
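For readers unfamiliar with the convention, the interactions described above are usually quantified with the multisensory enhancement index; the short sketch below uses made-up spike counts (not data from this study) to show how a weakly effective location can yield a large, superadditive enhancement while an effective location yields a smaller, roughly additive one.

def enhancement_index(visual, auditory, multisensory):
    # Percentage gain of the multisensory response over the best unisensory response.
    best_unisensory = max(visual, auditory)
    return 100.0 * (multisensory - best_unisensory) / best_unisensory

def is_superadditive(visual, auditory, multisensory):
    # Superadditive if the multisensory response exceeds the sum of the unisensory responses.
    return multisensory > visual + auditory

print(enhancement_index(2.0, 3.0, 9.0), is_superadditive(2.0, 3.0, 9.0))        # weakly effective site
print(enhancement_index(10.0, 12.0, 15.0), is_superadditive(10.0, 12.0, 15.0))  # highly effective site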
Jamali, Mohsen; Mitchell, Diana E; Dale, Alexis; Carriot, Jerome; Sadeghi, Soroush G; Cullen, Kathleen E
2014-04-01
The vestibular system is responsible for processing self-motion, allowing normal subjects to discriminate the direction of rotational movements as slow as 1-2 deg s⁻¹. After unilateral vestibular injury patients' direction-discrimination thresholds worsen to ∼20 deg s⁻¹, and despite some improvement thresholds remain substantially elevated following compensation. To date, however, the underlying neural mechanisms of this recovery have not been addressed. Here, we recorded from first-order central neurons in the macaque monkey that provide vestibular information to higher brain areas for self-motion perception. Immediately following unilateral labyrinthectomy, neuronal detection thresholds increased by more than two-fold (from 14 to 30 deg s⁻¹). While thresholds showed slight improvement by week 3 (25 deg s⁻¹), they never recovered to control values - a trend mirroring the time course of perceptual thresholds in patients. We further discovered that changes in neuronal response variability paralleled changes in sensitivity for vestibular stimulation during compensation, thereby causing detection thresholds to remain elevated over time. However, we found that in a subset of neurons, the emergence of neck proprioceptive responses combined with residual vestibular modulation during head-on-body motion led to better neuronal detection thresholds. Taken together, our results emphasize that increases in response variability to vestibular inputs ultimately constrain neural thresholds and provide evidence that sensory substitution with extravestibular (i.e. proprioceptive) inputs at the first central stage of vestibular processing is a neural substrate for improvements in self-motion perception following vestibular loss. Thus, our results provide a neural correlate for the patient benefits provided by rehabilitative strategies that take advantage of the convergence of these multisensory cues.
Jamali, Mohsen; Mitchell, Diana E; Dale, Alexis; Carriot, Jerome; Sadeghi, Soroush G; Cullen, Kathleen E
2014-01-01
The vestibular system is responsible for processing self-motion, allowing normal subjects to discriminate the direction of rotational movements as slow as 1–2 deg s−1. After unilateral vestibular injury patients’ direction–discrimination thresholds worsen to ∼20 deg s−1, and despite some improvement thresholds remain substantially elevated following compensation. To date, however, the underlying neural mechanisms of this recovery have not been addressed. Here, we recorded from first-order central neurons in the macaque monkey that provide vestibular information to higher brain areas for self-motion perception. Immediately following unilateral labyrinthectomy, neuronal detection thresholds increased by more than two-fold (from 14 to 30 deg s−1). While thresholds showed slight improvement by week 3 (25 deg s−1), they never recovered to control values – a trend mirroring the time course of perceptual thresholds in patients. We further discovered that changes in neuronal response variability paralleled changes in sensitivity for vestibular stimulation during compensation, thereby causing detection thresholds to remain elevated over time. However, we found that in a subset of neurons, the emergence of neck proprioceptive responses combined with residual vestibular modulation during head-on-body motion led to better neuronal detection thresholds. Taken together, our results emphasize that increases in response variability to vestibular inputs ultimately constrain neural thresholds and provide evidence that sensory substitution with extravestibular (i.e. proprioceptive) inputs at the first central stage of vestibular processing is a neural substrate for improvements in self-motion perception following vestibular loss. Thus, our results provide a neural correlate for the patient benefits provided by rehabilitative strategies that take advantage of the convergence of these multisensory cues. PMID:24366259
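The link asserted above between response variability, sensitivity and detection threshold follows from a standard signal-detection (neurometric) argument: the threshold is the velocity at which the mean response change equals one standard deviation of the response (d' = 1). The sketch below is illustrative only; the sensitivity and variability values are assumptions chosen to roughly reproduce the 14 and ~30 deg/s thresholds quoted in the abstract.

def detection_threshold(sensitivity, response_sd):
    # sensitivity in spikes/s per deg/s; response_sd in spikes/s; returns deg/s at d' = 1.
    return response_sd / sensitivity

print("control    :", detection_threshold(0.5, 7.0), "deg/s")    # 14 deg/s
print("post-lesion:", detection_threshold(0.25, 7.0), "deg/s")   # 28 deg/s (variability unchanged)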
Modality distribution of sensory neurons in the feline caudate nucleus and the substantia nigra.
Márkus, Zita; Eördegh, Gabriella; Paróczy, Zsuzsanna; Benedek, G; Nagy, A
2008-09-01
Despite extensive analysis of the motor functions of the basal ganglia and the fact that multisensory information processing appears critical for the execution of their behavioral action, little is known concerning the sensory functions of the caudate nucleus (CN) and the substantia nigra (SN). In the present study, we set out to describe the sensory modality distribution and to determine the proportions of multisensory units within the CN and the SN. The separate single sensory modality tests demonstrated that a majority of the neurons responded to only one modality, so that they seemed to be unimodal. In contrast with these findings, a large proportion of these neurons exhibited significant multisensory cross-modal interactions. Thus, these neurons should also be classified as multisensory. Our results suggest that a surprisingly high proportion of sensory neurons in the basal ganglia are multisensory, and demonstrate that an analysis without a consideration of multisensory cross-modal interactions may strongly underrepresent the number of multisensory units. We conclude that a majority of the sensory neurons in the CN and SN process multisensory information and only a minority of these units are clearly unimodal.
Ozker, Muge; Schepers, Inga M; Magnotti, John F; Yoshor, Daniel; Beauchamp, Michael S
2017-06-01
Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl's gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech.
Implicit multisensory associations influence voice recognition.
von Kriegstein, Katharina; Giraud, Anne-Lise
2006-10-01
Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, while, and after participants learned to associate either sensory redundant stimuli, i.e. voices and faces, or arbitrary multimodal combinations, i.e. voices and written names, ring tones, and cell phones or brand names of these cell phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and association of voices with faces resulted in an increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, associative representations become thereafter available for unimodal perception and facilitate object recognition. These data suggest that for natural objects effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules.
Stekelenburg, Jeroen J; Keetels, Mirjam; Vroomen, Jean
2018-05-01
Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect. Here, we examined whether written text, like visual speech, can induce an illusory change in the perception of speech sounds on both the behavioural and neural levels. In a sound categorization task, we found that both text and visual speech changed the identity of speech sounds from an /aba/-/ada/ continuum, but the size of this audiovisual effect was considerably smaller for text than visual speech. To examine at which level in the information processing hierarchy these multisensory interactions occur, we recorded electroencephalography in an audiovisual mismatch negativity (MMN, a component of the event-related potential reflecting preattentive auditory change detection) paradigm in which deviant text or visual speech was used to induce an illusory change in a sequence of ambiguous sounds halfway between /aba/ and /ada/. We found that only deviant visual speech induced an MMN, but not deviant text, which induced a late P3-like positive potential. These results demonstrate that text has much weaker effects on sound processing than visual speech does, possibly because text has different biological roots than visual speech. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Perceived Synchrony of Frog Multimodal Signal Components Is Influenced by Content and Order.
Taylor, Ryan C; Page, Rachel A; Klein, Barrett A; Ryan, Michael J; Hunter, Kimberly L
2017-10-01
Multimodal signaling is common in communication systems. Depending on the species, individual signal components may be produced synchronously as a result of physiological constraint (fixed) or each component may be produced independently (fluid) in time. For animals that rely on fixed signals, a basic prediction is that asynchrony between the components should degrade the perception of signal salience, reducing receiver response. Male túngara frogs, Physalaemus pustulosus, produce a fixed multisensory courtship signal by vocalizing with two call components (whines and chucks) and inflating a vocal sac (visual component). Using a robotic frog, we tested female responses to variation in the temporal arrangement between acoustic and visual components. When the visual component lagged a complex call (whine + chuck), females largely rejected this asynchronous multisensory signal in favor of the complex call absent the visual cue. When the chuck component was removed from one call, but the robofrog inflation lagged the complex call, females responded strongly to the asynchronous multimodal signal. When the chuck component was removed from both calls, females reversed preference and responded positively to the asynchronous multisensory signal. When the visual component preceded the call, females responded as often to the multimodal signal as to the call alone. These data show that asynchrony of a normally fixed signal does reduce receiver responsiveness. The magnitude and overall response, however, depend on specific temporal interactions between the acoustic and visual components. The sensitivity of túngara frogs to lagging visual cues, but not leading ones, and the influence of acoustic signal content on the perception of visual asynchrony is similar to those reported in human psychophysics literature. Virtually all acoustically communicating animals must conduct auditory scene analyses and identify the source of signals. Our data suggest that some basic audiovisual neural integration processes may be at work in the vertebrate brain. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology 2017. This work is written by US Government employees and is in the public domain in the US.
Developmental trends in the facilitation of multisensory objects with distractors
Downing, Harriet C.; Barutchu, Ayla; Crewther, Sheila G.
2015-01-01
Sensory integration and the ability to discriminate target objects from distractors are critical to survival, yet the developmental trajectories of these abilities are unknown. This study investigated developmental changes in 9- (n = 18) and 11-year-old (n = 20) children, adolescents (n = 19) and adults (n = 22) using an audiovisual object discrimination task with uni- and multisensory distractors. Reaction times (RTs) were slower with visual/audiovisual distractors, and although all groups demonstrated facilitation of multisensory RTs in these conditions, children's and adolescents' responses corresponded to fewer race model violations than adults', suggesting protracted maturation of multisensory processes. Multisensory facilitation could not be explained by changes in RT variability, suggesting that tests of race model violations may still have theoretical value at least for familiar multisensory stimuli. PMID:25653630
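The race-model comparison referred to above (Miller's inequality) bounds the multisensory reaction-time distribution by the sum of the two unisensory distributions; values above the bound count as violations and indicate genuine integration rather than statistical facilitation. The sketch below runs the test on simulated reaction times with assumed means and spreads, purely for illustration.

import numpy as np

def ecdf(rts, t_grid):
    rts = np.asarray(rts, dtype=float)
    return np.mean(rts[:, None] <= t_grid, axis=0)

def race_violation(rt_aud, rt_vis, rt_av, t_grid):
    bound = np.minimum(ecdf(rt_aud, t_grid) + ecdf(rt_vis, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound              # positive values = violations

rng = np.random.default_rng(0)
t_grid = np.linspace(150, 600, 46)                  # ms
rt_aud = rng.normal(380, 50, 200)                   # hypothetical auditory RTs
rt_vis = rng.normal(400, 50, 200)                   # hypothetical visual RTs
rt_av = rng.normal(310, 40, 200)                    # hypothetical audiovisual RTs
viol = race_violation(rt_aud, rt_vis, rt_av, t_grid)
print("max violation:", round(float(viol.max()), 3), "at", float(t_grid[np.argmax(viol)]), "ms")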
The perception of heading during eye movements
NASA Technical Reports Server (NTRS)
Royden, Constance S.; Banks, Martin S.; Crowell, James A.
1992-01-01
Warren and Hannon (1988, 1990), while studying the perception of heading during eye movements, concluded that people do not require extraretinal information to judge heading with eye/head movements present. Here, heading judgments are examined at higher, more typical eye movement velocities than the extremely slow tracking eye movements used by Warren and Hannon. It is found that people require extraretinal information about eye position to perceive heading accurately under many viewing conditions.
The temporal dynamics of heading perception in the presence of moving objects
Fajen, Brett R.
2015-01-01
Many forms of locomotion rely on the ability to accurately perceive one's direction of locomotion (i.e., heading) based on optic flow. Although accurate in rigid environments, heading judgments may be biased when independently moving objects are present. The aim of this study was to systematically investigate the conditions in which moving objects influence heading perception, with a focus on the temporal dynamics and the mechanisms underlying this bias. Subjects viewed stimuli simulating linear self-motion in the presence of a moving object and judged their direction of heading. Experiments 1 and 2 revealed that heading perception is biased when the object crosses or almost crosses the observer's future path toward the end of the trial, but not when the object crosses earlier in the trial. Nonetheless, heading perception is not based entirely on the instantaneous optic flow toward the end of the trial. This was demonstrated in Experiment 3 by varying the portion of the earlier part of the trial leading up to the last frame that was presented to subjects. When the stimulus duration was long enough to include the part of the trial before the moving object crossed the observer's path, heading judgments were less biased. The findings suggest that heading perception is affected by the temporal evolution of optic flow. The time course of dorsal medial superior temporal area (MSTd) neuron responses may play a crucial role in perceiving heading in the presence of moving objects, a property not captured by many existing models. PMID:26510765
An autism-associated serotonin transporter variant disrupts multisensory processing.
Siemann, J K; Muller, C L; Forsberg, C G; Blakely, R D; Veenstra-VanderWeele, J; Wallace, M T
2017-03-21
Altered sensory processing is observed in many children with autism spectrum disorder (ASD), with growing evidence that these impairments extend to the integration of information across the different senses (that is, multisensory function). The serotonin system has an important role in sensory development and function, and alterations of serotonergic signaling have been suggested to have a role in ASD. A gain-of-function coding variant in the serotonin transporter (SERT) associates with sensory aversion in humans, and when expressed in mice produces traits associated with ASD, including disruptions in social and communicative function and repetitive behaviors. The current study set out to test whether these mice also exhibit changes in multisensory function when compared with wild-type (WT) animals on the same genetic background. Mice were trained to respond to auditory and visual stimuli independently before being tested under visual, auditory and paired audiovisual (multisensory) conditions. WT mice exhibited significant gains in response accuracy under audiovisual conditions. In contrast, although the SERT mutant animals learned the auditory and visual tasks comparably to WT littermates, they failed to show behavioral gains under multisensory conditions. We believe these results provide the first behavioral evidence of multisensory deficits in a genetic mouse model related to ASD and implicate the serotonin system in multisensory processing and in the multisensory changes seen in ASD.
The Multisensory Nature of Verbal Discourse in Parent-Toddler Interactions.
Suanda, Sumarga H; Smith, Linda B; Yu, Chen
Toddlers learn object names in sensory-rich contexts. Many argue that this multisensory experience facilitates learning. Here, we examine how toddlers' multisensory experience is linked to another aspect of their experience associated with better learning: the temporally extended nature of verbal discourse. We observed parent-toddler dyads as they played with, and as parents talked about, a set of objects. Analyses revealed links between the multisensory and extended nature of speech, highlighting interconnections and redundancies in the environment. We discuss the implications of these results for our understanding of early discourse, multisensory communication, and how the learning environment shapes language development.
Wu, Jinglong; Yang, Jiajia; Yu, Yinghua; Li, Qi; Nakamura, Naoya; Shen, Yong; Ohta, Yasuyuki; Yu, Shengyuan; Abe, Koji
2012-01-01
The human brain can anatomically combine task-relevant information from different sensory pathways to form a unified perception; this process is called multisensory integration. The aim of the present study was to test whether the multisensory integration abilities of patients with mild cognitive impairment (MCI) and Alzheimer's disease (AD) differed from those of normal aged controls (NC). A total of 64 subjects were divided into three groups: NC individuals (n = 24), MCI patients (n = 19), and probable AD patients (n = 21). All of the subjects were asked to perform three separate audiovisual integration tasks and were instructed to press the response key associated with the auditory, visual, or audiovisual stimuli in the three tasks. The accuracy and response time (RT) of each task were measured, and the RTs were analyzed using cumulative distribution functions to observe the audiovisual integration. Our results suggest that the mean RT of patients with AD was significantly longer than those of patients with MCI and NC individuals. Interestingly, we found that patients with both MCI and AD exhibited adequate audiovisual integration, and a greater peak (time bin with the highest percentage of benefit) and broader temporal window (time duration of benefit) of multisensory enhancement were observed. However, the onset time and peak benefit of audiovisual integration in MCI and AD patients occurred significantly later than did those of the NC. This finding indicates that the cognitive functional deficits of patients with MCI and AD contribute to the differences in performance enhancements of audiovisual integration compared with NC.
Wu, Jinglong; Yang, Jiajia; Yu, Yinghua; Li, Qi; Nakamura, Naoya; Shen, Yong; Ohta, Yasuyuki; Yu, Shengyuan; Abe, Koji
2013-01-01
The human brain can anatomically combine task-relevant information from different sensory pathways to form a unified perception; this process is called multisensory integration. The aim of the present study was to test whether the multisensory integration abilities of patients with mild cognitive impairment (MCI) and Alzheimer’s disease (AD) differed from those of normal aged controls (NC). A total of 64 subjects were divided into three groups: NC individuals (n = 24), MCI patients (n = 19), and probable AD patients (n = 21). All of the subjects were asked to perform three separate audiovisual integration tasks and were instructed to press the response key associated with the auditory, visual, or audiovisual stimuli in the three tasks. The accuracy and response time (RT) of each task were measured, and the RTs were analyzed using cumulative distribution functions to observe the audiovisual integration. Our results suggest that the mean RT of patients with AD was significantly longer than those of patients with MCI and NC individuals. Interestingly, we found that patients with both MCI and AD exhibited adequate audiovisual integration, and a greater peak (time bin with the highest percentage of benefit) and broader temporal window (time duration of benefit) of multisensory enhancement were observed. However, the onset time and peak benefit of audiovisual integration in MCI and AD patients occurred significantly later than did those of the NC. This finding indicates that the cognitive functional deficits of patients with MCI and AD contribute to the differences in performance enhancements of audiovisual integration compared with NC. PMID:22810093
Representation of vestibular and visual cues to self-motion in ventral intraparietal (VIP) cortex
Chen, Aihua; Deangelis, Gregory C.; Angelaki, Dora E.
2011-01-01
Convergence of vestibular and visual motion information is important for self-motion perception. One cortical area that combines vestibular and optic flow signals is the ventral intraparietal area (VIP). We characterized unisensory and multisensory responses of macaque VIP neurons to translations and rotations in three dimensions. Approximately half of VIP cells show significant directional selectivity in response to optic flow, half show tuning to vestibular stimuli, and one-third show multisensory responses. Visual and vestibular direction preferences of multisensory VIP neurons could be congruent or opposite. When visual and vestibular stimuli were combined, VIP responses could be dominated by either input, unlike medial superior temporal area (MSTd) where optic flow tuning typically dominates or the visual posterior sylvian area (VPS) where vestibular tuning dominates. Optic flow selectivity in VIP was weaker than in MSTd but stronger than in VPS. In contrast, vestibular tuning for translation was strongest in VPS, intermediate in VIP, and weakest in MSTd. To characterize response dynamics, direction-time data were fit with a spatiotemporal model in which temporal responses were modeled as weighted sums of velocity, acceleration, and position components. Vestibular responses in VIP reflected balanced contributions of velocity and acceleration, whereas visual responses were dominated by velocity. Timing of vestibular responses in VIP was significantly faster than in MSTd, whereas timing of optic flow responses did not differ significantly among areas. These findings suggest that VIP may be proximal to MSTd in terms of vestibular processing but hierarchically similar to MSTd in terms of optic flow processing. PMID:21849564
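The temporal-dynamics fit mentioned above can be pictured with a much-simplified regression: a neuron's response profile is modeled as a weighted sum of the stimulus velocity, acceleration and position traces, and the recovered weights indicate whether the response is velocity- or acceleration-dominated. The sketch below uses a synthetic response with assumed weights; the actual study fit full direction-time response data with a richer spatiotemporal model.

import numpy as np

dt = 0.01
t = np.arange(0, 2, dt)                                   # 2 s translation
velocity = np.exp(-((t - 1.0) ** 2) / (2 * 0.3 ** 2))     # Gaussian velocity profile
acceleration = np.gradient(velocity, dt)
position = np.cumsum(velocity) * dt

rng = np.random.default_rng(1)
response = 0.8 * velocity + 0.2 * acceleration + rng.normal(0, 0.02, t.size)  # synthetic neuron

X = np.column_stack([velocity, acceleration, position])
weights, *_ = np.linalg.lstsq(X, response, rcond=None)
print("velocity, acceleration, position weights:", np.round(weights, 2))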
Neural representation of orientation relative to gravity in the macaque cerebellum
Laurens, Jean; Meng, Hui; Angelaki, Dora E.
2013-01-01
A fundamental challenge for maintaining spatial orientation and interacting with the world is knowledge of our orientation relative to gravity, i.e. tilt. Sensing gravity is complicated because of Einstein’s equivalence principle, where gravitational and translational accelerations are physically indistinguishable. Theory has proposed that this ambiguity is solved by tracking head tilt through multisensory integration. Here we identify a group of Purkinje cells in the caudal cerebellar vermis with responses that reflect an estimate of head tilt. These tilt-selective cells are complementary to translation-selective Purkinje cells, such that their population activities sum to the net gravito-inertial acceleration encoded by the otolith organs, as predicted by theory. These findings reflect the remarkable ability of the cerebellum for neural computation and provide novel quantitative evidence for a neural representation of gravity, whose calculation relies on long-postulated theoretical concepts such as internal models and Bayesian priors. PMID:24360549
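The summation claim above can be pictured with a toy one-dimensional example, using one common sign convention in which the otoliths encode the net gravito-inertial acceleration GIA = g_lateral - a_lateral; an idealized tilt estimate plus an idealized translation estimate then reconstructs the otolith signal exactly. All numbers are assumptions for illustration.

import numpy as np

g = 9.81
tilt_deg = 10.0                                    # assumed roll tilt of the head
a_lateral = 1.2                                    # assumed lateral translation (m/s^2)

g_lateral = g * np.sin(np.radians(tilt_deg))       # gravity component along the interaural axis
gia = g_lateral - a_lateral                        # what the otolith organs actually encode

tilt_population = g_lateral                        # idealized tilt-selective output
translation_population = -a_lateral                # idealized translation-selective output
print(bool(np.isclose(tilt_population + translation_population, gia)))   # True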
ERIC Educational Resources Information Center
Lumeng, Julie C.; Kaplan-Sanoff, Margot; Shuman, Steve; Kannan, Srimathi
2008-01-01
Objective: To describe Head Start teachers' perceptions of mealtime, feeding, and overweight risk in Head Start students. Design: Qualitative focus group study. Setting: Five Head Starts in a greater metropolitan area in the Northeast. Participants: Thirty-five teachers in 5 focus groups. Intervention: Two experienced focus group facilitators…
Knight, Margaret; Adkison, Lesley; Kovach, Joan Stack
2010-01-01
Sensory rooms and the use of multisensory interventions are becoming popular in inpatient psychiatry. The empirical data supporting their use are limited, and there is only anecdotal evidence indicating effectiveness in psychiatric populations. The specific aims of this observational pilot study were to determine whether multisensory-based therapies were effective in managing psychiatric symptoms and to evaluate how these interventions compared to traditional ones used in the milieu. The study found that multisensory interventions were as effective as traditional ones in managing symptoms, and participants' Brief Psychiatric Rating Scale scores significantly improved following both kinds of intervention. Medication administration did not affect symptom reduction. This article explores how multisensory interventions offer choice in symptom management. Education regarding multisensory strategies should become integral to inpatient and outpatient group programs, in that additional symptom management strategies can only be an asset.
Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier
2013-10-28
The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, we surprisingly found that despite a strong decrease in auditory and visual unisensory localization abilities in periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimuli eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. Altogether, these results evidence that the external spatial coordinates of multisensory events relative to an observer's body (e.g., eyes' or head's position) influence how this information is merged, and therefore determine the perceptual outcome.
Evaluating the operations underlying multisensory integration in the cat superior colliculus.
Stanford, Terrence R; Quessy, Stephan; Stein, Barry E
2005-07-13
It is well established that superior colliculus (SC) multisensory neurons integrate cues from different senses; however, the mechanisms responsible for producing multisensory responses are poorly understood. Previous studies have shown that spatially congruent cues from different modalities (e.g., auditory and visual) yield enhanced responses and that the greatest relative enhancements occur for combinations of the least effective modality-specific stimuli. Although these phenomena are well documented, little is known about the mechanisms that underlie them, because no study has systematically examined the operation that multisensory neurons perform on their modality-specific inputs. The goal of this study was to evaluate the computations that multisensory neurons perform in combining the influences of stimuli from two modalities. The extracellular activities of single neurons in the SC of the cat were recorded in response to visual, auditory, and bimodal visual-auditory stimulation. Each neuron was tested across a range of stimulus intensities and multisensory responses evaluated against the null hypothesis of simple summation of unisensory influences. We found that the multisensory response could be superadditive, additive, or subadditive but that the computation was strongly dictated by the efficacies of the modality-specific stimulus components. Superadditivity was most common within a restricted range of near-threshold stimulus efficacies, whereas for the majority of stimuli, response magnitudes were consistent with the linear summation of modality-specific influences. In addition to providing a constraint for developing models of multisensory integration, the relationship between response mode and stimulus efficacy emphasizes the importance of considering stimulus parameters when inducing or interpreting multisensory phenomena.
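A minimal way to express the comparison described above is the ratio of the multisensory response to the sum of the unisensory responses; the spike counts below are hypothetical, not data from the study.

```python
def additivity(resp_vis, resp_aud, resp_multi):
    """Multisensory response relative to the additive prediction (sum of unisensory responses)."""
    return resp_multi / (resp_vis + resp_aud)

print(additivity(2, 3, 9))     # 1.8  -> superadditive, as seen near threshold
print(additivity(10, 12, 22))  # 1.0  -> additive, the most common outcome
print(additivity(20, 25, 35))  # ~0.8 -> subadditive
```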
A model of the temporal dynamics of multisensory enhancement
Rowland, Benjamin A.; Stein, Barry E.
2014-01-01
The senses transduce different forms of environmental energy, and the brain synthesizes information across them to enhance responses to salient biological events. We hypothesize that the potency of multisensory integration is attributable to the convergence of independent and temporally aligned signals derived from cross-modal stimulus configurations onto multisensory neurons. The temporal profile of multisensory integration in neurons of the deep superior colliculus (SC) is consistent with this hypothesis. The responses of these neurons to visual, auditory, and combinations of visual–auditory stimuli reveal that multisensory integration takes place in real-time; that is, the input signals are integrated as soon as they arrive at the target neuron. Interactions between cross-modal signals may appear to reflect linear or nonlinear computations on a moment-by-moment basis, the aggregate of which determines the net product of multisensory integration. Modeling observations presented here suggest that the early nonlinear components of the temporal profile of multisensory integration can be explained with a simple spiking neuron model, and do not require more sophisticated assumptions about the underlying biology. A transition from nonlinear “super-additive” computation to linear, additive computation can be accomplished via scaled inhibition. The findings provide a set of design constraints for artificial implementations seeking to exploit the basic principles and potency of biological multisensory integration in contexts of sensory substitution or augmentation. PMID:24374382
Affective and physiological correlates of the perception of unimodal and bimodal emotional stimuli.
Rosa, Pedro J; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Gamito, Pedro
2017-08-01
Despite the multisensory nature of perception, previous research on emotions has focused on unimodal emotional cues, typically visual stimuli. To the best of our knowledge, there is no evidence on the extent to which incongruent emotional cues from visual and auditory sensory channels affect pupil size. Our aims were to investigate the effects of audiovisual emotional perception on physiological and affective responses, and to determine the impact of mismatched emotional cues on these physiological indexes. Pupil size, electrodermal activity, and affective subjective responses were recorded while 30 participants were exposed to visual and auditory stimuli with varied emotional content in three different experimental conditions: pictures and sounds presented alone (unimodal), emotionally matched audio-visual stimuli (bimodal congruent), and emotionally mismatched audio-visual stimuli (bimodal incongruent). The data revealed no effect of emotional incongruence on physiological and affective responses. On the other hand, pupil size covaried with skin conductance response (SCR), but the subjective experience was partially dissociated from autonomic responses. Emotional stimuli are able to trigger physiological responses regardless of valence, sensory modality, or level of emotional congruence.
Bolaños, Alfredo D; Coffman, Brian A; Candelaria-Cook, Felicha T; Kodituwakku, Piyadasa; Stephen, Julia M
2017-12-01
Children with fetal alcohol spectrum disorder (FASD), who were exposed to alcohol in utero, display a broad range of sensory, cognitive, and behavioral deficits, which are broadly theorized to be rooted in altered brain function and structure. Based on the role of neural oscillations in multisensory integration from past studies, we hypothesized that adolescents with FASD would show a decrease in oscillatory power during event-related gamma oscillatory activity (30 to 100 Hz), when compared to typically developing healthy controls (HC), and that such decrease in oscillatory power would predict behavioral performance. We measured sensory neurophysiology using magnetoencephalography (MEG) during passive auditory, somatosensory, and multisensory (synchronous) stimulation in 19 adolescents (12 to 21 years) with FASD and 23 age- and gender-matched HC. We employed a cross-hemisphere multisensory paradigm to assess interhemispheric connectivity deficits in children with FASD. Time-frequency analysis of MEG data revealed a significant decrease in gamma oscillatory power for both unisensory and multisensory conditions in the FASD group relative to HC, based on permutation testing of significant group differences. Greater beta oscillatory power (15 to 30 Hz) was also noted in the FASD group compared to HC in both unisensory and multisensory conditions. Regression analysis revealed greater predictive power of multisensory oscillations from unisensory oscillations in the FASD group compared to the HC group. Furthermore, multisensory oscillatory power, for both groups, predicted performance on the Intra-Extradimensional Set Shift Task and the Cambridge Gambling Task. Altered oscillatory power in the FASD group may reflect a restricted ability to process somatosensory and multisensory stimuli during day-to-day interactions. These alterations in neural oscillations may be associated with the neurobehavioral deficits experienced by adolescents with FASD and may carry over to adulthood. Copyright © 2017 by the Research Society on Alcoholism.
Implicit Multisensory Associations Influence Voice Recognition
von Kriegstein, Katharina; Giraud, Anne-Lise
2006-01-01
Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, while, and after participants learned to associate either sensory redundant stimuli, i.e. voices and faces, or arbitrary multimodal combinations, i.e. voices and written names, ring tones, and cell phones or brand names of these cell phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and association of voices with faces resulted in an increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, associative representations become thereafter available for unimodal perception and facilitate object recognition. These data suggest that for natural objects effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules. PMID:17002519
Ozker, Muge; Schepers, Inga M.; Magnotti, John F.; Yoshor, Daniel; Beauchamp, Michael S.
2017-01-01
Human speech can be comprehended using only auditory information from the talker’s voice. However, comprehension is improved if the talker’s face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl’s gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech. PMID:28253074
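The Bayesian prediction tested here has a simple algebraic core: fusing two independent estimates yields a variance below that of either estimate alone. The values below are illustrative, not measurements from the study.

```python
sigma_a2, sigma_v2 = 4.0, 9.0                      # illustrative unisensory variances
sigma_av2 = (sigma_a2 * sigma_v2) / (sigma_a2 + sigma_v2)
print(sigma_av2)                                   # ~2.77, smaller than min(4.0, 9.0)
```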
Su, Yi-Huang
2014-01-01
Both lower-level stimulus factors (e.g., temporal proximity) and higher-level cognitive factors (e.g., content congruency) are known to influence multisensory integration. The former can direct attention in a converging manner, and the latter can indicate whether information from the two modalities belongs together. The present research investigated whether and how these two factors interacted in the perception of rhythmic, audiovisual (AV) streams derived from a human movement scenario. Congruency here was based on sensorimotor correspondence pertaining to rhythm perception. Participants attended to bimodal stimuli consisting of a humanlike figure moving regularly to a sequence of auditory beat, and detected a possible auditory temporal deviant. The figure moved either downwards (congruently) or upwards (incongruently) to the downbeat, while in both situations the movement was either synchronous with the beat, or lagging behind it. Greater cross-modal binding was expected to hinder deviant detection. Results revealed poorer detection for congruent than for incongruent streams, suggesting stronger integration in the former. False alarms increased in asynchronous stimuli only for congruent streams, indicating greater tendency for deviant report due to visual capture of asynchronous auditory events. In addition, a greater increase in perceived synchrony was associated with a greater reduction in false alarms for congruent streams, while the pattern was reversed for incongruent ones. These results demonstrate that content congruency as a top-down factor not only promotes integration, but also modulates bottom-up effects of synchrony. Results are also discussed regarding how theories of integration and attentional entrainment may be combined in the context of rhythmic multisensory stimuli.
Neuronal Plasticity and Multisensory Integration in Filial Imprinting
Town, Stephen Michael; McCabe, Brian John
2011-01-01
Many organisms sample their environment through multiple sensory systems and the integration of multisensory information enhances learning. However, the mechanisms underlying multisensory memory formation and their similarity to unisensory mechanisms remain unclear. Filial imprinting is one example in which experience is multisensory, and the mechanisms of unisensory neuronal plasticity are well established. We investigated the storage of audiovisual information through experience by comparing the activity of neurons in the intermediate and medial mesopallium of imprinted and naïve domestic chicks (Gallus gallus domesticus) in response to an audiovisual imprinting stimulus and novel object and their auditory and visual components. We find that imprinting enhanced the mean response magnitude of neurons to unisensory but not multisensory stimuli. Furthermore, imprinting enhanced responses to incongruent audiovisual stimuli comprised of mismatched auditory and visual components. Our results suggest that the effects of imprinting on the unisensory and multisensory responsiveness of IMM neurons differ and that IMM neurons may function to detect unexpected deviations from the audiovisual imprinting stimulus. PMID:21423770
Auditory object perception: A neurobiological model and prospective review.
Brefczynski-Lewis, Julie A; Lewis, James W
2017-10-01
Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory inputs enter the cortex with their own set of unique qualities and support oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, recovers after injury, and how it may have transitioned in its functions over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects". This review presents a simple fundamental neurobiological model of hearing perception at a category level that incorporates principles of bottom-up signal processing together with top-down constraints of grounded cognition theories of knowledge representation. Though mostly derived from the human neuroimaging literature, this theoretical framework highlights rudimentary principles of real-world sound processing that may apply to most if not all mammalian species with hearing and acoustic communication abilities. The model encompasses three basic categories of sound-source: (1) action sounds (non-vocalizations) produced by 'living things', with human (conspecific) and non-human animal sources representing two subcategories; (2) action sounds produced by 'non-living things', including environmental sources and human-made machinery; and (3) vocalizations ('living things'), with human versus non-human animals as two subcategories therein. The model is presented in the context of cognitive architectures relating to multisensory, sensory-motor, and spoken language organizations. The model's predictive value is further discussed in the context of anthropological theories of oral communication evolution and the neurodevelopment of spoken language proto-networks in infants/toddlers. These phylogenetic and ontogenetic frameworks both entail cortical network maturations that are proposed to be organized, at least in part, around a number of universal acoustic-semantic signal attributes of natural sounds, which are addressed herein. Copyright © 2017. Published by Elsevier Ltd.
Illusions of having small or large invisible bodies influence visual perception of object size
van der Hoort, Björn; Ehrsson, H. Henrik
2016-01-01
The size of our body influences the perceived size of the world so that objects appear larger to children than to adults. The mechanisms underlying this effect remain unclear. It has been difficult to dissociate visual rescaling of the external environment based on an individual’s visible body from visual rescaling based on a central multisensory body representation. To differentiate these potential causal mechanisms, we manipulated body representation without a visible body by taking advantage of recent developments in body representation research. Participants experienced the illusion of having a small or large invisible body while object-size perception was tested. Our findings show that the perceived size of test-objects was determined by the size of the invisible body (inverse relation), and by the strength of the invisible body illusion. These findings demonstrate how central body representation directly influences visual size perception, without the need for a visible body, by rescaling the spatial representation of the environment. PMID:27708344
Multi-Sensory Intervention Observational Research
ERIC Educational Resources Information Center
Thompson, Carla J.
2011-01-01
An observational research study based on sensory integration theory was conducted to examine the observed impact of student selected multi-sensory experiences within a multi-sensory intervention center relative to the sustained focus levels of students with special needs. A stratified random sample of 50 students with severe developmental…
Using Multisensory Phonics to Foster Reading Skills of Adolescent Delinquents
ERIC Educational Resources Information Center
Warnick, Kristan; Caldarella, Paul
2016-01-01
This study examined the effectiveness of a multisensory phonics-based reading remediation program for adolescent delinquents classified as poor readers living at a residential treatment center. We used a pretest--posttest control group design with random assignment. The treatment group participated in a 30-hr multisensory phonics reading…
Incidental Learning in a Multisensory Environment across Childhood
ERIC Educational Resources Information Center
Broadbent, Hannah J.; White, Hayley; Mareschal, Denis; Kirkham, Natasha Z.
2018-01-01
Multisensory information has been shown to modulate attention in infants and facilitate learning in adults, by enhancing the amodal properties of a stimulus. However, it remains unclear whether this translates to learning in a multisensory environment across middle childhood, and particularly in the case of incidental learning. One hundred and…
Influence of Motor Therapy on Children with Multisensory Disabilities: A Preliminary Study.
ERIC Educational Resources Information Center
Rider, Robert A.; Candeletti, Glenn
1982-01-01
Effects of a program of motor therapy on the motor ability levels of eight multisensory handicapped children were examined. Participation improved performance for all subjects. The gain scores from pretest to posttest indicated that children with multisensory disabilities may benefit from such a program. (Author)
A Rational Analysis of the Acquisition of Multisensory Representations
ERIC Educational Resources Information Center
Yildirim, Ilker; Jacobs, Robert A.
2012-01-01
How do people learn multisensory, or amodal, representations, and what consequences do these representations have for perceptual performance? We address this question by performing a rational analysis of the problem of learning multisensory representations. This analysis makes use of a Bayesian nonparametric model that acquires latent multisensory…
Multisensory Modalities for Blending and Segmenting among Early Readers
ERIC Educational Resources Information Center
Lee, Lay Wah
2016-01-01
With the advent of touch-screen interfaces on the tablet computer, multisensory elements in reading instruction have taken on a new dimension. This computer assisted language learning research aimed to determine whether specific technology features of a tablet computer can add to the functionality of multisensory instruction in early reading…
The LD Teacher's Language Arts Companion[TM]: A Multisensory Approach.
ERIC Educational Resources Information Center
Wadlington, Elizabeth M.; Currie, Paula S.
This book presents a multisensory approach for teaching language arts skills to students in grades 3-10 with learning disabilities. It is intended for teachers, parents, speech-language pathologists, and other professionals who work with students with learning disabilities. An introduction discusses multisensory instruction and the benefits of…
Visual illusion of tool use recalibrates tactile perception
Miller, Luke E.; Longo, Matthew R.; Saygin, Ayse P.
2018-01-01
Brief use of a tool recalibrates multisensory representations of the user’s body, a phenomenon called tool embodiment. Despite two decades of research, little is known about its boundary conditions. It has been widely argued that embodiment requires active tool use, suggesting a critical role for somatosensory and motor feedback. The present study used a visual illusion to cast doubt on this view. We used a mirror-based setup to induce a visual experience of tool use with an arm that was in fact stationary. Following illusory tool use, tactile perception was recalibrated on this stationary arm, and with equal magnitude as physical use. Recalibration was not found following illusory passive tool holding, and could not be accounted for by sensory conflict or general interhemispheric plasticity. These results suggest visual tool-use signals play a critical role in driving tool embodiment. PMID:28196765
Illusory ownership of an invisible body reduces autonomic and subjective social anxiety responses.
Guterstam, Arvid; Abdulkarim, Zakaryah; Ehrsson, H Henrik
2015-04-23
What is it like to be invisible? This question has long fascinated man and has been the central theme of many classic literary works. Recent advances in materials science suggest that invisibility cloaking of the human body may be possible in the not-so-distant future. However, it remains unknown how invisibility affects body perception and embodied cognition. To address these questions, we developed a perceptual illusion of having an entire invisible body. Through a series of experiments, we characterized the multisensory rules that govern the elicitation of the illusion and show that the experience of having an invisible body reduces the social anxiety response to standing in front of an audience. This study provides an experimental model of what it is like to be invisible and shows that this experience affects bodily self-perception and social cognition.
Cecere, Roberto; Gross, Joachim; Thut, Gregor
2016-06-01
The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
The multi-sensory approach as a geoeducational strategy
NASA Astrophysics Data System (ADS)
Musacchio, Gemma; Piangiamore, Giovanna Lucia; Pino, Nicola Alessandro
2014-05-01
Geoscience knowledge has a strong impact on modern society because it relates to natural hazards, sustainability, and environmental issues. The general public demands an understanding of crucial geoscientific topics that is only partly satisfied by science communication strategies and by outreach or school programs. Proper knowledge of the phenomena can help trigger the right questions when approaching the mitigation of geo-hazards and the use of geo-resources, provide the right tools for interpreting news and ideas circulating on the web and other media, and, in short, make communication more efficient. Nonetheless, available educational resources seem inadequate to meet this goal, and research institutions face the challenge of testing new communication strategies and non-conventional ways of learning that allow crucial scientific content to be understood. We suggest the multi-sensory approach as a successful non-conventional way of learning for children and as a different perspective on learning for older students and adults. Stimulation of the sense organs is perceived and processed to build knowledge of the surroundings, including all sorts of hazards. By relying so heavily on sight, humans have largely lost the ability to perceive the environment deeply through the other senses. Since hazards involve emotions, we argue that new approaches to learning might work precisely through emotions, which can be engaged by tactile, auditory, or olfactory stimulation. To test and support this idea, we are building a package of learning activities and exhibits based on a multi-sensory experience in which sight is excluded.
Sensory-Challenge Balance Exercises Improve Multisensory Reweighting in Fall-Prone Older Adults.
Allison, Leslie K; Kiemel, Tim; Jeka, John J
2018-04-01
Multisensory reweighting (MSR) deficits in older adults contribute to fall risk. Sensory-challenge balance exercises may have value for addressing the MSR deficits in fall-prone older adults. The purpose of this study was to examine the effect of sensory-challenge balance exercises on MSR and clinical balance measures in fall-prone older adults. We used a quasi-experimental, repeated-measures, within-subjects design. Older adults with a history of falls underwent an 8-week baseline (control) period. This was followed by an 8-week intervention period that included 16 sensory-challenge balance exercise sessions performed with computerized balance training equipment. Measurements, taken twice before and once after intervention, included laboratory measures of MSR (center of mass gain and phase, position, and velocity variability) and clinical tests (Activities-specific Balance Confidence Scale, Berg Balance Scale, Sensory Organization Test, Limits of Stability test, and lower extremity strength and range of motion). Twenty adults 70 years of age and older with a history of falls completed all 16 sessions. Significant improvements were observed in laboratory-based MSR measures of touch gain (P = 0.006) and phase (P = 0.05), Berg Balance Scale (P = 0.002), Sensory Organization Test (P = 0.002), Limits of Stability Test (P = 0.001), and lower extremity strength scores (P = 0.005). Mean values of vision gain increased more than those for touch gain, but did not reach significance. A balance exercise program specifically targeting multisensory integration mechanisms improved MSR, balance, and lower extremity strength in this mechanistic study. These valuable findings provide the scientific rationale for sensory-challenge balance exercise to improve perception of body position and motion in space and potential reduction in fall risk.
Panagiotidi, Maria; Overton, Paul G; Stafford, Tom
2017-11-01
Abnormalities in multimodal processing have been found in many developmental disorders such as autism and dyslexia. However, surprisingly little empirical work has been conducted to test the integrity of multisensory integration in Attention Deficit Hyperactivity Disorder (ADHD). The main aim of the present study was to examine links between symptoms of ADHD (as measured using a self-report scale in a healthy adult population) and the temporal aspects of multisensory processing. More specifically, a Simultaneity Judgement (SJ) and a Temporal Order Judgement (TOJ) task were used in participants with low and high levels of ADHD-like traits to measure the temporal integration window and Just-Noticeable Difference (JND) (respectively) between the timing of an auditory beep and a visual pattern presented over a broad range of stimulus onset asynchronies. The Point of Subjective Simultaneity (PSS) was also measured in both cases. In the SJ task, participants with high levels of ADHD-like traits considered significantly fewer stimuli to be simultaneous than participants with low levels of ADHD-like traits, and the former were found to have significantly smaller temporal windows of integration (although no difference was found in the PSS in the SJ or TOJ tasks, or the JND in the latter). This is the first study to identify an abnormal temporal integration window in individuals with ADHD-like traits. Perceived temporal misalignment of two or more modalities can lead to distractibility (e.g., when the stimulus components from different modalities are separated by too large a temporal gap). Hence, an abnormality in the perception of simultaneity could lead to the increased distractibility seen in ADHD. Copyright © 2017 Elsevier B.V. All rights reserved.
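For readers unfamiliar with how a simultaneity-judgement dataset is reduced to a PSS and a temporal window, the sketch below fits a Gaussian to made-up proportions of "simultaneous" responses across stimulus onset asynchronies; this mirrors common practice but is not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

soa = np.array([-300, -200, -100, 0, 100, 200, 300])           # audio-visual asynchrony, ms
p_sim = np.array([0.05, 0.25, 0.70, 0.95, 0.80, 0.40, 0.10])   # proportion judged simultaneous

def gaussian(x, amplitude, pss, width):
    return amplitude * np.exp(-((x - pss) ** 2) / (2 * width ** 2))

(amplitude, pss, width), _ = curve_fit(gaussian, soa, p_sim, p0=[1.0, 0.0, 100.0])
print(f"PSS ~ {pss:.0f} ms, temporal window (SD) ~ {width:.0f} ms")
```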
Keeping time in the brain: Autism spectrum disorder and audiovisual temporal processing.
Stevenson, Ryan A; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Camarata, Stephen; Wallace, Mark T
2016-07-01
A growing area of interest and relevance in the study of autism spectrum disorder (ASD) focuses on the relationship between multisensory temporal function and the behavioral, perceptual, and cognitive impairments observed in ASD. Atypical sensory processing is becoming increasingly recognized as a core component of autism, with evidence of atypical processing across a number of sensory modalities. These deviations from typical processing underscore the value of interpreting ASD within a multisensory framework. Furthermore, converging evidence illustrates that these differences in audiovisual processing may be specifically related to temporal processing. This review seeks to bridge the connection between temporal processing and audiovisual perception, and to elaborate on emerging data showing differences in audiovisual temporal function in autism. We also discuss the consequence of such changes, the specific impact on the processing of different classes of audiovisual stimuli (e.g. speech vs. nonspeech, etc.), and the presumptive brain processes and networks underlying audiovisual temporal integration. Finally, possible downstream behavioral implications, and possible remediation strategies are outlined. Autism Res 2016, 9: 720-738. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
Effect of odour on multisensory environmental evaluations of road traffic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Like, E-mail: jianglike@yahoo.com; Masullo, Massimiliano, E-mail: Massimiliano.MASULLO@unina2.it; Maffei, Luigi, E-mail: luigi.maffei@unina2.it
This study investigated the effect of odour on multisensory environmental evaluations of road traffic. The study aimed to answer: (1) Does odour have any effect on evaluations of noise, landscape, and the overall environment? (2) How different are participants' responses to odour stimuli, and do these differences influence the evaluations? Experimental scenarios varying in three Traffic levels, three Tree screening conditions, and two Odour presence conditions were designed and presented to participants in virtual reality. Perceived Loudness, Noise Annoyance, Landscape Quality, and Overall Pleasantness of each scenario were evaluated and the results analysed. Odour presence did not have a significant main effect on any of the evaluations, but it had significant interactions with Traffic level on Noise Annoyance and with Tree screening on Landscape Quality, indicating the potential of odour to modulate noise and visual landscape perception in specific environmental contexts. Large differences were found in participants' responses to odour stimuli; however, these differences did not appear to influence the environmental evaluations. Larger participant samples may be needed to obtain more conclusive results on the odour effect.
Bayesian-based integration of multisensory naturalistic perithreshold stimuli.
Regenbogen, Christina; Johansson, Emilia; Andersson, Patrik; Olsson, Mats J; Lundström, Johan N
2016-07-29
Most studies exploring multisensory integration have used clearly perceivable stimuli. According to the principle of inverse effectiveness, the added neural and behavioral benefit of integrating clear stimuli is reduced in comparison to stimuli with degraded and less salient unisensory information. Traditionally, speed and accuracy measures have been analyzed separately with few studies merging these to gain an understanding of speed-accuracy trade-offs in multisensory integration. In two separate experiments, we assessed multisensory integration of naturalistic audio-visual objects consisting of individually-tailored perithreshold dynamic visual and auditory stimuli, presented within a multiple-choice task, using a Bayesian Hierarchical Drift Diffusion Model that combines response time and accuracy. For both experiments, unisensory stimuli were degraded to reach a 75% identification accuracy level for all individuals and stimuli to promote multisensory binding. In Experiment 1, we subsequently presented uni- and their respective bimodal stimuli followed by a 5-alternative-forced-choice task. In Experiment 2, we controlled for low-level integration and attentional differences. Both experiments demonstrated significant superadditive multisensory integration of bimodal perithreshold dynamic information. We present evidence that the use of degraded sensory stimuli may provide a link between previous findings of inverse effectiveness on a single neuron level and overt behavior. We further suggest that a combined measure of accuracy and reaction time may be a more valid and holistic approach of studying multisensory integration and propose the application of drift diffusion models for studying behavioral correlates as well as brain-behavior relationships of multisensory integration. Copyright © 2015 Elsevier Ltd. All rights reserved.
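To make the modelling approach concrete, here is a stripped-down two-choice drift-diffusion trial; the study itself used a hierarchical Bayesian fit to a five-alternative task, and the parameters below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def ddm_trial(drift=0.8, boundary=1.0, noise=1.0, dt=0.001, max_t=3.0):
    """Accumulate noisy evidence until an upper (+) or lower (-) decision boundary is reached."""
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x >= boundary, t              # (correct choice, decision time in seconds)

trials = [ddm_trial() for _ in range(2000)]
print(f"accuracy ~ {np.mean([c for c, _ in trials]):.2f}, "
      f"mean decision time ~ {np.mean([t for _, t in trials]):.2f} s")
```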
The Race that Precedes Coactivation: Development of Multisensory Facilitation in Children
ERIC Educational Resources Information Center
Barutchu, Ayla; Crewther, David P.; Crewther, Sheila G.
2009-01-01
Rationale: The facilitating effect of multisensory integration on motor responses in adults is much larger than predicted by race-models and is in accordance with the idea of coactivation. However, the development of multisensory facilitation of endogenously driven motor processes and its relationship to the development of complex cognitive skills…
ERIC Educational Resources Information Center
Mount, Helen; Cavet, Judith
1995-01-01
This article addresses the controversy concerning multisensory environments for children and adults with profound and multiple learning difficulties, from a British perspective. The need for critical evaluation of such multisensory interventions as the "snoezelen" approach and the paucity of relevant, rigorous research on educational…
Multisensory Teaching of Basic Language Skills Activity Book. Revised Edition
ERIC Educational Resources Information Center
Carreker, Suzanne; Birsh, Judith R.
2011-01-01
With the new edition of this activity book--the companion to Judith Birsh's bestselling text, "Multisensory Teaching of Basic Language Skills"--students and practitioners will get the practice they need to use multisensory teaching effectively with students who have dyslexia and other learning disabilities. Ideal for both pre-service teacher…
Tilt perception during dynamic linear acceleration.
Seidman, S H; Telford, L; Paige, G D
1998-04-01
Head tilt is a rotation of the head relative to gravity, as exemplified by head roll or pitch from the natural upright orientation. Tilt stimulates both the otolith organs, owing to shifts in gravitational orientation, and the semicircular canals in response to head rotation, which in turn drive a variety of behavioral and perceptual responses. Studies of tilt perception typically have not adequately isolated otolith and canal inputs or their dynamic contributions. True tilt cannot readily dissociate otolith from canal influences. Alternatively, centrifugation generates centripetal accelerations that simulate tilt, but still entails a rotatory (canal) stimulus during important periods of the stimulus profiles. We reevaluated the perception of head tilt in humans, but limited the stimulus to linear forces alone, thus isolating the influence of otolith inputs. This was accomplished by employing a centrifugation technique with a variable-radius spinning sled. This allowed us to accelerate the sled to a constant angular velocity (128 degrees/s), with the subject centered, and then apply dynamic centripetal accelerations after all rotatory perceptions were extinguished. These stimuli were presented in the subjects' naso-occipital axis by translating the subjects 50 cm eccentrically either forward or backward. Centripetal accelerations were thus induced (0.25 g), which combined with gravity to yield a dynamically shifting gravitoinertial force simulating pitch-tilt, but without actually rotating the head. A magnitude-estimation task was employed to characterize the dynamic perception of pitch-tilt. Tilt perception responded sluggishly to linear acceleration, typically reaching a peak after 10-30 s. Tilt perception also displayed an adaptation phenomenon. Adaptation was manifested as a per-stimulus decline in perceived tilt during prolonged stimulation and a reversal aftereffect upon return to zero acceleration (i.e., recentering the subject). We conclude that otolith inputs can produce tilt perception in the absence of canal stimulation, and that this perception is subject to an adaptation phenomenon and low-pass filtering of its otolith input.
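The stimulus arithmetic reported above can be checked directly: a 128 deg/s rotation combined with a 50 cm eccentric displacement gives a centripetal acceleration of roughly 0.25 g, as stated.

```python
import numpy as np

omega = np.radians(128)           # constant angular velocity, rad/s
radius = 0.50                     # eccentric displacement of the subject, m
a_centripetal = omega ** 2 * radius
print(a_centripetal / 9.81)       # ~0.25 g
```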
Stone, David B.; Coffman, Brian A.; Bustillo, Juan R.; Aine, Cheryl J.; Stephen, Julia M.
2014-01-01
Deficits in auditory and visual unisensory responses are well documented in patients with schizophrenia; however, potential abnormalities elicited from multisensory audio-visual stimuli are less understood. Further, schizophrenia patients have shown abnormal patterns in task-related and task-independent oscillatory brain activity, particularly in the gamma frequency band. We examined oscillatory responses to basic unisensory and multisensory stimuli in schizophrenia patients (N = 46) and healthy controls (N = 57) using magnetoencephalography (MEG). Time-frequency decomposition was performed to determine regions of significant changes in gamma band power by group in response to unisensory and multisensory stimuli relative to baseline levels. Results showed significant behavioral differences between groups in response to unisensory and multisensory stimuli. In addition, time-frequency analysis revealed significant decreases and increases in gamma-band power in schizophrenia patients relative to healthy controls, which emerged both early and late over both sensory and frontal regions in response to unisensory and multisensory stimuli. Unisensory gamma-band power predicted multisensory gamma-band power differently by group. Furthermore, gamma-band power in these regions predicted performance in select measures of the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) test battery differently by group. These results reveal a unique pattern of task-related gamma-band power in schizophrenia patients relative to controls that may indicate reduced inhibition in combination with impaired oscillatory mechanisms in patients with schizophrenia. PMID:25414652
Noel, Jean-Paul; Blanke, Olaf; Serino, Andrea
2018-06-06
Integrating information across sensory systems is a critical step toward building a cohesive representation of the environment and one's body, and, as illustrated by numerous illusions, scaffolds subjective experience of the world and self. In recent years, classic principles of multisensory integration elucidated in the subcortex have been translated into the language of statistical inference understood by the neocortical mantle. Most importantly, a mechanistic systems-level description of multisensory computations via probabilistic population coding and divisive normalization is actively being put forward. In parallel, by describing and understanding bodily illusions, researchers have suggested multisensory integration of bodily inputs within the peripersonal space as a key mechanism in bodily self-consciousness. Importantly, certain aspects of bodily self-consciousness, although still very much a minority, have recently been cast in the light of modern computational understandings of multisensory integration. In doing so, we argue, the field of bodily self-consciousness may borrow mechanistic descriptions regarding the neural implementation of inference computations outlined by the multisensory field. This computational approach, leveraging the general understanding of multisensory processes, promises to advance scientific comprehension regarding one of the most mysterious questions puzzling humankind, that is, how our brain creates the experience of a self in interaction with the environment. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.
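Divisive normalization, mentioned above as a candidate systems-level operation, can be sketched in a few lines. This is one common textbook formulation with an expansive input nonlinearity, not the review's own model, and all values are illustrative; it shows how the same rule yields superadditive combination for weak inputs and subadditive combination for strong ones.

```python
import numpy as np

def multisensory_response(d_vis, d_aud, sigma=1.0, n=2.0):
    """Combined drive, expansively transformed, divided by a normalization pool."""
    drive = (d_vis + d_aud) ** n
    return drive / (sigma ** n + drive)

for d in (0.2, 2.0, 20.0):
    uni = multisensory_response(d, 0.0)
    multi = multisensory_response(d, d)
    print(f"input drive {d}: multisensory / sum of unisensory = {multi / (2 * uni):.2f}")
```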
Multisensory control of a straight locomotor trajectory.
Hanna, Maxim; Fung, Joyce; Lamontagne, Anouk
2017-01-01
Locomotor steering is contingent upon orienting oneself spatially in the environment. When the head is turned while walking, the optic flow projected onto the retina is a complex pattern comprising a translational and a rotational component. We have created a unique paradigm to simulate different optic flows in a virtual environment. We hypothesized that non-visual (vestibular and somatosensory) cues are required for proper control of a straight trajectory while walking. This study included 9 healthy young subjects walking in a large physical space (40 × 25 m²) while the virtual environment was viewed in a helmet-mounted display. They were instructed to walk straight in the physical world while being exposed to three conditions: (1) self-initiated active head turns (AHT: 40° right, left, or none); (2) visually simulated head turns (SHT); and (3) visually simulated head turns with no target element (SHT_NT). Conditions 1 and 2 involved an eye-level target which subjects were instructed to fixate, whereas condition 3 was similar to condition 2 but with no target. Identical retinal flow patterns were present in the AHT and SHT conditions, whereas non-visual cues differed in that a head rotation was sensed only in AHT but not in SHT. Body motions were captured by a 12-camera Vicon system. Horizontal orientations of the head and body segments, as well as the trajectory of the body's centre of mass, were analyzed. SHT and SHT_NT yielded similar results. Heading and body segment orientations changed in the direction opposite to the head turns in the SHT conditions. Heading remained unchanged across head turn directions in AHT. Results suggest that non-visual information is used in the control of heading while being exposed to changing rotational optic flows. The small magnitude of the changes in the SHT conditions suggests that the CNS can re-weight relevant sources of information to minimize heading errors in the presence of sensory conflicts.
Perceiving circular heading in noncanonical flow fields.
Kim, N G; Fajen, B R; Turvey, M T
2000-02-01
Five experiments examined circular heading perception with optical flows that departed from the canonical form. Noncanonicity was achieved through nonrigidity of the environment (Experiments 1 and 2), oscillations of the point of observation (Experiment 3), and the bending of light (Experiments 4 and 5). In Experiments 1 and 2, perception was impaired more by nonrigidity of the ground plane than by nonrigidity of the medium. In Experiment 3, perception was unimpaired by noncanonical flows induced by the bounce and sway of observer locomotion. In Experiments 4 and 5, perception was not impaired when light paths were distorted by a spherical projection, but perception was impaired when they were distorted by a sine function. Results are discussed in relation to the hypothesis that the information for perceiving heading is the ordinal pattern of optical flow.
ERIC Educational Resources Information Center
Adeniyi, Folakemi O.; Lawal, R. Adebayo
2012-01-01
The purpose of this study was to find out the relative effects of three instructional Approaches i.e. Multisensory, Metacognitive, and a combination of Multisensory and Metacognitive Instructional Approaches on the Vocabulary achievement of underachieving Secondary School Students. The study adopted the quasi-experimental design in which a…
ERIC Educational Resources Information Center
Taljaard, Johann
2016-01-01
This article reviews the literature on multi-sensory technology and, in particular, looks at answering the question: "What multi-sensory technologies are available to use in a science, technology, engineering, arts and mathematics (STEAM) classroom, and do they affect student engagement and learning outcomes?" Here engagement is defined…
Use your head! Perception of action possibilities by means of an object attached to the head.
Wagman, Jeffrey B; Hajnal, Alen
2016-03-01
Perceiving any environmental property requires spontaneously assembling a smart perceptual instrument: a task-specific measurement device assembled across potentially independent anatomical units. Previous research has shown that to a large degree, perception of a given environmental property is anatomically independent. We attempted to provide stronger evidence for this proposal by investigating perception by an organization of anatomical and inert components that likely requires the spontaneous assembly of a novel smart perceptual instrument: a rod attached to the head. Specifically, we compared cephalic and manual perception of whether an inclined surface affords standing on. In both conditions, perception reflected the action capabilities of the perceiver and not the appendage used to wield the rod. Such results provide stronger evidence for anatomical independence of perception within a given perceptual system and highlight that flexible task-specific detection units can be assembled across units that span the body and inert objects.
Ghose, Dipanwita; Wallace, Mark T.
2013-01-01
Multisensory integration has been widely studied in neurons of the mammalian superior colliculus (SC). This has led to the description of various determinants of multisensory integration, including those based on stimulus- and neuron-specific factors. The most widely characterized of these illustrate the importance of the spatial and temporal relationships of the paired stimuli as well as their relative effectiveness in eliciting a response in determining the final integrated output. Although these stimulus-specific factors have generally been considered in isolation (i.e., manipulating stimulus location while holding all other factors constant), they have an intrinsic interdependency that has yet to be fully elucidated. For example, changes in stimulus location will likely also impact both the temporal profile of response and the effectiveness of the stimulus. The importance of better describing this interdependency is further reinforced by the fact that SC neurons have large receptive fields, and that responses at different locations within these receptive fields are far from equivalent. To address these issues, the current study was designed to examine the interdependency between the stimulus factors of space and effectiveness in dictating the multisensory responses of SC neurons. The results show that neuronal responsiveness changes dramatically with changes in stimulus location – highlighting a marked heterogeneity in the spatial receptive fields of SC neurons. More importantly, this receptive field heterogeneity played a major role in the integrative product exhibited by stimulus pairings, such that pairings at weakly responsive locations of the receptive fields resulted in the largest multisensory interactions. Together these results provide greater insight into the interrelationship of the factors underlying multisensory integration in SC neurons, and may have important mechanistic implications for multisensory integration and the role it plays in shaping SC mediated behaviors. PMID:24183964
Rubber Hands Feel Touch, but Not in Blind Individuals
Petkova, Valeria I.; Zetterberg, Hedvig; Ehrsson, H. Henrik
2012-01-01
Psychology and neuroscience have a long-standing tradition of studying blind individuals to investigate how visual experience shapes perception of the external world. Here, we study how blind people experience their own body by exposing them to a multisensory body illusion: the somatic rubber hand illusion. In this illusion, healthy blindfolded participants experience that they are touching their own right hand with their left index finger, when in fact they are touching a rubber hand with their left index finger while the experimenter touches their right hand in a synchronized manner (Ehrsson et al. 2005). We compared the strength of this illusion in a group of blind individuals (n = 10), all of whom had experienced severe visual impairment or complete blindness from birth, and a group of age-matched blindfolded sighted participants (n = 12). The illusion was quantified subjectively using questionnaires and behaviorally by asking participants to point to the felt location of the right hand. The results showed that the sighted participants experienced a strong illusion, whereas the blind participants experienced no illusion at all, a difference that was evident in both tests employed. A further experiment testing the participants' basic ability to localize the right hand in space without vision (proprioception) revealed no difference between the two groups. Taken together, these results suggest that blind individuals with impaired visual development have a more veridical percept of self-touch and a less flexible and dynamic representation of their own body in space compared to sighted individuals. We speculate that the multisensory brain systems that re-map somatosensory signals onto external reference frames are less developed in blind individuals and therefore do not allow efficient fusion of tactile and proprioceptive signals from the two upper limbs into a single illusory experience of self-touch as in sighted individuals. PMID:22558268
Guterstam, Arvid; Zeberg, Hugo; Özçiftci, Vedat Menderes; Ehrsson, H Henrik
2016-10-01
To accurately localize our limbs and guide movements toward external objects, the brain must represent the body and its surrounding (peripersonal) visual space. Specific multisensory neurons encode peripersonal space in the monkey brain, and neurobehavioral studies have suggested the existence of a similar representation in humans. However, because peripersonal space lacks a distinct perceptual correlate, its involvement in spatial and bodily perception remains unclear. Here, we show that applying brushstrokes in mid-air at some distance above a rubber hand, without touching it, in synchrony with brushstrokes applied to a participant's hidden real hand results in the illusory sensation of a "magnetic force" between the brush and the rubber hand, which strongly correlates with the perception of the rubber hand as one's own. In eight experiments, we characterized this "magnetic touch illusion" by using quantitative subjective reports, motion tracking, and behavioral data consisting of pointing errors toward the rubber hand in an intermanual pointing task. We found that the illusion depends on visuo-tactile synchrony and exhibits similarities with the visuo-tactile receptive field properties of peripersonal space neurons, featuring a non-linear decay at 40 cm that is independent of gaze direction and follows changes in the rubber hand position. Moreover, the "magnetic force" does not penetrate physical barriers, thus further linking this phenomenon to body-specific visuo-tactile integration processes. These findings provide strong support for the notion that multisensory integration within peripersonal space underlies bodily self-attribution. Furthermore, we propose that the magnetic touch illusion constitutes a perceptual correlate of visuo-tactile integration in peripersonal space. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
Are We Ready for Real-world Neuroscience?
Matusz, Pawel J; Dikker, Suzanne; Huth, Alexander G; Perrodin, Catherine
2018-06-19
Real-world environments are typically dynamic, complex, and multisensory in nature and require the support of top-down attention and memory mechanisms for us to be able to drive a car, make a shopping list, or pour a cup of coffee. Fundamental principles of perception and functional brain organization have been established by research utilizing well-controlled but simplified paradigms with basic stimuli. The last 30 years ushered in a revolution in computational power, brain mapping, and signal processing techniques. Drawing on those theoretical and methodological advances, research has departed more and more from traditional, rigorous, and well-understood paradigms to directly investigate cognitive functions and their underlying brain mechanisms in real-world environments. These investigations typically address the role of one or, more recently, multiple attributes of real-world environments. Fundamental assumptions about perception, attention, or brain functional organization have been challenged by studies adapting the traditional paradigms to emulate, for example, the multisensory nature or varying relevance of stimulation, or dynamically changing task demands. Here, we present the state of the field within the emerging heterogeneous domain of real-world neuroscience. The aim of this Special Focus is to bring together a variety of the emerging "real-world neuroscientific" approaches. These approaches differ in their principal aims, assumptions, or even definitions of "real-world neuroscience" research. Here, we showcase the commonalities and distinctive features of the different "real-world neuroscience" approaches. To do so, four early-career researchers and the speakers of the Cognitive Neuroscience Society 2017 Meeting symposium under the same title answer questions pertaining to the added value of such approaches in bringing us closer to accurate models of functional brain organization and cognitive functions.
Rubber hands feel touch, but not in blind individuals.
Petkova, Valeria I; Zetterberg, Hedvig; Ehrsson, H Henrik
2012-01-01
Psychology and neuroscience have a long-standing tradition of studying blind individuals to investigate how visual experience shapes perception of the external world. Here, we study how blind people experience their own body by exposing them to a multisensory body illusion: the somatic rubber hand illusion. In this illusion, healthy blindfolded participants experience that they are touching their own right hand with their left index finger, when in fact they are touching a rubber hand with their left index finger while the experimenter touches their right hand in a synchronized manner (Ehrsson et al. 2005). We compared the strength of this illusion in a group of blind individuals (n = 10), all of whom had experienced severe visual impairment or complete blindness from birth, and a group of age-matched blindfolded sighted participants (n = 12). The illusion was quantified subjectively using questionnaires and behaviorally by asking participants to point to the felt location of the right hand. The results showed that the sighted participants experienced a strong illusion, whereas the blind participants experienced no illusion at all, a difference that was evident in both tests employed. A further experiment testing the participants' basic ability to localize the right hand in space without vision (proprioception) revealed no difference between the two groups. Taken together, these results suggest that blind individuals with impaired visual development have a more veridical percept of self-touch and a less flexible and dynamic representation of their own body in space compared to sighted individuals. We speculate that the multisensory brain systems that re-map somatosensory signals onto external reference frames are less developed in blind individuals and therefore do not allow efficient fusion of tactile and proprioceptive signals from the two upper limbs into a single illusory experience of self-touch as in sighted individuals.
Binding of Sights and Sounds: Age-Related Changes in Multisensory Temporal Processing
ERIC Educational Resources Information Center
Hillock, Andrea R.; Powers, Albert R.; Wallace, Mark T.
2011-01-01
We live in a multisensory world and one of the challenges the brain is faced with is deciding what information belongs together. Our ability to make assumptions about the relatedness of multisensory stimuli is partly based on their temporal and spatial relationships. Stimuli that are proximal in time and space are likely to be bound together by…
The multisensory brain and its ability to learn music.
Zimmerman, Emily; Lahav, Amir
2012-04-01
Playing a musical instrument requires a complex skill set that depends on the brain's ability to quickly integrate information from multiple senses. It has been well documented that intensive musical training alters brain structure and function within and across multisensory brain regions, supporting the experience-dependent plasticity model. Here, we argue that this experience-dependent plasticity occurs because of the multisensory nature of the brain and may be an important contributing factor to musical learning. This review highlights key multisensory regions within the brain and discusses their role in the context of music learning and rehabilitation. © 2012 New York Academy of Sciences.
The fMRI BOLD response to unisensory and multisensory smoking cues in nicotine-dependent adults
Cortese, Bernadette M.; Uhde, Thomas W.; Brady, Kathleen T.; McClernon, F. Joseph; Yang, Qing X.; Collins, Heather R.; LeMatty, Todd; Hartwell, Karen J.
2015-01-01
Given that the vast majority of functional magnetic resonance imaging (fMRI) studies of drug cue reactivity use unisensory visual cues, but that multisensory cues may elicit greater craving-related brain responses, the current study sought to compare the fMRI BOLD response to unisensory visual and multisensory, visual plus odor, smoking cues in 17 nicotine-dependent adult cigarette smokers. Brain activation to smoking-related, compared to neutral, pictures was assessed under cigarette smoke and odorless odor conditions. While smoking pictures elicited a pattern of activation consistent with the addiction literature, the multisensory (odor + picture) smoking cues elicited significantly greater and more widespread activation in mainly frontal and temporal regions. BOLD signal elicited by the multi-sensory, but not unisensory cues, was significantly related to participants’ level of control over craving as well. Results demonstrated that the co-presentation of cigarette smoke odor with smoking-related visual cues, compared to the visual cues alone, elicited greater levels of craving-related brain activation in key regions implicated in reward. These preliminary findings support future research aimed at a better understanding of multisensory integration of drug cues and craving. PMID:26475784
The fMRI BOLD response to unisensory and multisensory smoking cues in nicotine-dependent adults.
Cortese, Bernadette M; Uhde, Thomas W; Brady, Kathleen T; McClernon, F Joseph; Yang, Qing X; Collins, Heather R; LeMatty, Todd; Hartwell, Karen J
2015-12-30
Given that the vast majority of functional magnetic resonance imaging (fMRI) studies of drug cue reactivity use unisensory visual cues, but that multisensory cues may elicit greater craving-related brain responses, the current study sought to compare the fMRI BOLD response to unisensory visual and multisensory, visual plus odor, smoking cues in 17 nicotine-dependent adult cigarette smokers. Brain activation to smoking-related, compared to neutral, pictures was assessed under cigarette smoke and odorless odor conditions. While smoking pictures elicited a pattern of activation consistent with the addiction literature, the multisensory (odor+picture) smoking cues elicited significantly greater and more widespread activation in mainly frontal and temporal regions. BOLD signal elicited by the multisensory, but not unisensory cues, was significantly related to participants' level of control over craving as well. Results demonstrated that the co-presentation of cigarette smoke odor with smoking-related visual cues, compared to the visual cues alone, elicited greater levels of craving-related brain activation in key regions implicated in reward. These preliminary findings support future research aimed at a better understanding of multisensory integration of drug cues and craving. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Dynamics of the G-excess illusion
NASA Technical Reports Server (NTRS)
Baylor, K. A.; Reschke, M.; Guedry, F. E.; Mcgrath, B. J.; Rupert, A. H.
1992-01-01
The G-excess illusion is increasingly recognized as a cause of aviation mishaps especially when pilots perform high-speed, steeply banked turns at low altitudes. Centrifuge studies of this illusion have examined the perception of subject orientation and/or target displacement during maintained hypergravity with the subject's head held stationary. The transient illusory perceptions produced by moving the head in hypergravity are difficult to study onboard centrifuges because the high angular velocity ensures the presence of strong Coriolis cross-coupled semicircular canal effects that mask immediate transient otolith-organ effects. The present study reports perceptions following head movements in hypergravity produced by high-speed aircraft maintaining a banked attitude with low angular velocity to minimize cross-coupled effects. Methods: Fourteen subjects flew on the NASA KC-135 and were exposed to resultant gravity forces of 1.3, 1.5, and 1.8 G for 3-minute periods. On command, seated subjects made controlled head movements in roll, pitch, and yaw at 30-second intervals both in the dark and with faint targets at a distance of 5 feet. Results: Head movement produced transient perception of target displacement and velocity at levels as low as 1.3 G. Reports of target velocity without appropriate corresponding displacement were common. At 1.8 G when yaw head movements were made from a face down position, 4 subjects reported oscillatory rotational target displacement with fast and slow alternating components suggestive of torsional nystagmus. Head movements evoked symptoms of nausea in most subjects, with 2 subjects and 1 observer vomiting. Conclusions: The transient percepts presented conflicting signals, which introduced confusion in target and subject orientation. Repeated head movements in hypergravity generate nausea by mechanisms distinct from cross-coupled Coriolis effects.
Dalton, Brian H; Rasman, Brandon G; Inglis, J Timothy; Blouin, Jean-Sébastien
2017-04-15
We tested perceived head-on-feet orientation and the direction of vestibular-evoked balance responses in passively and actively held head-turned postures. The direction of vestibular-evoked balance responses was not aligned with perceived head-on-feet orientation while maintaining prolonged passively held head-turned postures. Furthermore, static visual cues of head-on-feet orientation did not update the estimate of head posture for the balance controller. A prolonged actively held head-turned posture did not elicit a rotation in the direction of the vestibular-evoked balance response despite a significant rotation in perceived angular head posture. It is proposed that conscious perception of head posture and the transformation of vestibular signals for standing balance relying on this head posture are not dependent on the same internal representation. Rather, the balance system may operate under its own sensorimotor principles, which are partly independent from perception. Vestibular signals used for balance control must be integrated with other sensorimotor cues to allow transformation of descending signals according to an internal representation of body configuration. We explored two alternative models of sensorimotor integration that propose (1) a single internal representation of head-on-feet orientation is responsible for perceived postural orientation and standing balance or (2) conscious perception and balance control are driven by separate internal representations. During three experiments, participants stood quietly while passively or actively maintaining a prolonged head-turned posture (>10 min). Throughout the trials, participants intermittently reported their perceived head angular position, and subsequently electrical vestibular stimuli were delivered to elicit whole-body balance responses. Visual recalibration of head-on-feet posture was used to determine whether static visual cues are used to update the internal representation of body configuration for perceived orientation and standing balance. All three experiments involved situations in which the vestibular-evoked balance response was not orthogonal to perceived head-on-feet orientation, regardless of the visual information provided. For prolonged head-turned postures, balance responses consistent with actual head-on-feet posture occurred only during the active condition. Our results indicate that conscious perception of head-on-feet posture and vestibular control of balance do not rely on the same internal representation, but instead treat sensorimotor cues in parallel and may arrive at different conclusions regarding head-on-feet posture. The balance system appears to bypass static visual cues of postural orientation and mainly use other sensorimotor signals of head-on-feet position to transform vestibular signals of head motion, a mechanism appropriate for most daily activities. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
Autoscopic phenomena and one's own body representation in dreams.
Occhionero, Miranda; Cicogna, Piera Carla
2011-12-01
Autoscopic phenomena (AP) are complex experiences that include the visual illusory reduplication of one's own body. From a phenomenological point of view, we can distinguish three conditions: autoscopic hallucinations, heautoscopy, and out-of-body experiences. The dysfunctional pattern involves multisensory disintegration of personal and extrapersonal space perception. The etiology, generally either neurological or psychiatric, is different. Also, the hallucination of Self and own body image is present during dreams and differs according to sleep stage. Specifically, the representation of the Self in REM dreams is frequently similar to the perception of Self in wakefulness, whereas in NREM dreams, a greater polymorphism of Self and own body representation is observed. The parallels between autoscopic phenomena in pathological cases and the Self-hallucination in dreams will be discussed to further the understanding of the particular states of self awareness, especially the complex integration of different memory sources in Self and body representation. Copyright © 2011 Elsevier Inc. All rights reserved.
A neuroscientific perspective on music therapy.
Koelsch, Stefan
2009-07-01
In recent years, a number of studies have demonstrated that music listening (and even more so music production) activates a multitude of brain structures involved in cognitive, sensorimotor, and emotional processing. For example, music engages sensory processes, attention, memory-related processes, perception-action mediation ("mirror neuron system" activity), multisensory integration, activity changes in core areas of emotional processing, processing of musical syntax and musical meaning, and social cognition. It is likely that the engagement of these processes by music can have beneficial effects on the psychological and physiological health of individuals, although the mechanisms underlying such effects are currently not well understood. This article gives a brief overview of factors contributing to the effects of music-therapeutic work. Then, neuroscientific studies using music to investigate emotion, perception-action mediation ("mirror function"), and social cognition are reviewed, including illustrations of the relevance of these domains for music therapy.
Mental Imagery Induces Cross-Modal Sensory Plasticity and Changes Future Auditory Perception.
Berger, Christopher C; Ehrsson, H Henrik
2018-04-01
Can what we imagine in our minds change how we perceive the world in the future? A continuous process of multisensory integration and recalibration is responsible for maintaining a correspondence between the senses (e.g., vision, touch, audition) and, ultimately, a stable and coherent perception of our environment. This process depends on the plasticity of our sensory systems. The so-called ventriloquism aftereffect-a shift in the perceived localization of sounds presented alone after repeated exposure to spatially mismatched auditory and visual stimuli-is a clear example of this type of plasticity in the audiovisual domain. In a series of six studies with 24 participants each, we investigated an imagery-induced ventriloquism aftereffect in which imagining a visual stimulus elicits the same frequency-specific auditory aftereffect as actually seeing one. These results demonstrate that mental imagery can recalibrate the senses and induce the same cross-modal sensory plasticity as real sensory stimuli.
Rapid adaptation of multisensory integration in vestibular pathways
Carriot, Jerome; Jamali, Mohsen; Cullen, Kathleen E.
2015-01-01
Sensing gravity is vital for our perception of spatial orientation, the control of upright posture, and generation of our everyday activities. When an astronaut transitions to microgravity or returns to earth, the vestibular input arising from self-motion will not match the brain's expectation. Our recent neurophysiological studies have provided insight into how the nervous system rapidly reorganizes when vestibular input becomes unreliable by both (1) updating its internal model of the sensory consequences of motion and (2) up-weighting more reliable extra-vestibular information. These neural strategies, in turn, are linked to improvements in sensorimotor performance (e.g., gaze and postural stability, locomotion, orienting) and perception characterized by similar time courses. We suggest that furthering our understanding of the neural mechanisms that underlie sensorimotor adaptation will have important implications for optimizing training programs for astronauts before and after space exploration missions and for the design of goal-oriented rehabilitation for patients. PMID:25932009
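The re-weighting of more reliable extra-vestibular information described above is commonly modeled as inverse-variance (reliability-weighted) cue combination. The minimal Python sketch below illustrates that general idea only; the function name, example estimates, and variance values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_cues(estimates, variances):
    """Inverse-variance (reliability-weighted) combination of sensory cues.

    estimates : per-cue estimates (e.g., tilt or heading, in degrees)
    variances : per-cue noise variances; lower variance = more reliable cue
    Returns the fused estimate and its variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)  # weights sum to 1
    fused = np.sum(weights * estimates)
    fused_var = 1.0 / np.sum(1.0 / variances)
    return fused, fused_var

# Example: when the vestibular cue becomes unreliable (larger variance),
# the fused estimate is pulled toward the extra-vestibular cue.
print(fuse_cues([10.0, 2.0], [4.0, 1.0]))  # noisy vestibular cue -> pulled toward 2.0
print(fuse_cues([10.0, 2.0], [1.0, 4.0]))  # reliable vestibular cue -> pulled toward 10.0
```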
Illusory ownership of an invisible body reduces autonomic and subjective social anxiety responses
Guterstam, Arvid; Abdulkarim, Zakaryah; Ehrsson, H. Henrik
2015-01-01
What is it like to be invisible? This question has long fascinated man and has been the central theme of many classic literary works. Recent advances in materials science suggest that invisibility cloaking of the human body may be possible in the not-so-distant future. However, it remains unknown how invisibility affects body perception and embodied cognition. To address these questions, we developed a perceptual illusion of having an entire invisible body. Through a series of experiments, we characterized the multisensory rules that govern the elicitation of the illusion and show that the experience of having an invisible body reduces the social anxiety response to standing in front of an audience. This study provides an experimental model of what it is like to be invisible and shows that this experience affects bodily self-perception and social cognition. PMID:25906330
A General Audiovisual Temporal Processing Deficit in Adult Readers With Dyslexia.
Francisco, Ana A; Jesse, Alexandra; Groen, Margriet A; McQueen, James M
2017-01-01
Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories. Adult readers with dyslexia showed less sensitivity to audiovisual simultaneity than typical readers for both speech and nonspeech events. We found no differences between readers with dyslexia and typical readers in the temporal window of integration for audiovisual speech or in the audiovisual perception of phonetic categories. The results suggest an audiovisual temporal deficit in dyslexia that is not specific to speech-related events. But the differences found for audiovisual temporal sensitivity did not translate into a deficit in audiovisual speech perception. Hence, there seems to be a hiatus between simultaneity judgment and perception, suggesting a multisensory system that uses different mechanisms across tasks. Alternatively, it is possible that the audiovisual deficit in dyslexia is only observable when explicit judgments about audiovisual simultaneity are required.
To what extent do Gestalt grouping principles influence tactile perception?
Gallace, Alberto; Spence, Charles
2011-07-01
Since their formulation by the Gestalt movement more than a century ago, the principles of perceptual grouping have primarily been investigated in the visual modality and, to a lesser extent, in the auditory modality. The present review addresses the question of whether the same grouping principles also affect the perception of tactile stimuli. Although, to date, only a few studies have explicitly investigated the existence of Gestalt grouping principles in the tactile modality, we argue that many more studies have indirectly provided evidence relevant to this topic. Reviewing this body of research, we argue that similar principles to those reported previously in visual and auditory studies also govern the perceptual grouping of tactile stimuli. In particular, we highlight evidence showing that the principles of proximity, similarity, common fate, good continuation, and closure affect tactile perception in both unimodal and crossmodal settings. We also highlight that the grouping of tactile stimuli is often affected by visual and auditory information that happen to be presented simultaneously. Finally, we discuss the theoretical and applied benefits that might pertain to the further study of Gestalt principles operating in both unisensory and multisensory tactile perception.
Out of touch with reality? Social perception in first-episode schizophrenia
Salone, Anatolia; Ferri, Francesca; De Berardis, Domenico; Romani, Gian Luca; Ferro, Filippo M.; Gallese, Vittorio
2013-01-01
Social dysfunction has been recognized as an elementary feature of schizophrenia, but it remains a crucial issue whether social deficits in schizophrenia concern the inter-subjective domain or primarily have their roots in disturbances of self-experience. Social perception comprises vicarious processes grounding an experiential inter-relationship with others as well as self-regulation processes allowing to maintain a coherent sense of self. The present study investigated whether the functional neural basis underlying these processes is altered in first-episode schizophrenia (FES). Twenty-four FES patients and 22 healthy control participants underwent functional magnetic resonance imaging during a social perception task requiring them to watch videos depicting other individuals' inanimate and animate/social tactile stimulations, and a tactile localizer condition. Activation in ventral premotor cortex for observed bodily tactile stimulations was reduced in the FES group and negatively correlated with self-experience disturbances. Moreover, FES patients showed aberrant differential activation in posterior insula for first-person tactile experiences and observed affective tactile stimulations. These findings suggest that social perception in FES at a pre-reflective level is characterized by disturbances of self-experience, including impaired multisensory representations and self-other distinction. However, the results also show that social perception in FES involves more complex alterations of neural activation at multiple processing levels. PMID:22275166
Looking for myself: current multisensory input alters self-face recognition.
Tsakiris, Manos
2008-01-01
How do I know the person I see in the mirror is really me? Is it because I know the person simply looks like me, or is it because the mirror reflection moves when I move, and I see it being touched when I feel touch myself? Studies of face-recognition suggest that visual recognition of stored visual features informs self-face recognition. In contrast, body-recognition studies conclude that multisensory integration is the main cue to selfhood. The present study investigates for the first time the specific contribution of current multisensory input for self-face recognition. Participants were stroked on their face while they were looking at a morphed face being touched in synchrony or asynchrony. Before and after the visuo-tactile stimulation participants performed a self-recognition task. The results show that multisensory signals have a significant effect on self-face recognition. Synchronous tactile stimulation while watching another person's face being similarly touched produced a bias in recognizing one's own face, in the direction of the other person included in the representation of one's own face. Multisensory integration can update cognitive representations of one's body, such as the sense of ownership. The present study extends this converging evidence by showing that the correlation of synchronous multisensory signals also updates the representation of one's face. The face is a key feature of our identity, but at the same time is a source of rich multisensory experiences used to maintain or update self-representations.
Heading perception in patients with advanced retinitis pigmentosa
NASA Technical Reports Server (NTRS)
Li, Li; Peli, Eli; Warren, William H.
2002-01-01
PURPOSE: We investigated whether retinitis pigmentosa (RP) patients with residual visual field of < 100 degrees could perceive heading from optic flow. METHODS: Four RP patients and four age-matched normally sighted control subjects viewed displays simulating an observer walking over a ground. In experiment 1, subjects viewed either the entire display with free fixation (full-field condition) or through an aperture with a fixation point at the center (aperture condition). In experiment 2, patients viewed displays of different durations. RESULTS: RP patients' performance was comparable to that of the age-matched control subjects: heading judgment was better in the full-field condition than in the aperture condition. Increasing display duration from 0.5 s to 1 s improved patients' heading performance, but giving them more time (3 s) to gather more visual information did not consistently further improve their performance. CONCLUSIONS: RP patients use active scanning eye movements to compensate for their visual field loss in heading perception; they might be able to gather sufficient optic flow information for heading perception in about 1 s.
Heading perception in patients with advanced retinitis pigmentosa.
Li, Li; Peli, Eli; Warren, William H
2002-09-01
We investigated whether retinitis pigmentosa (RP) patients with residual visual field of < 100 degrees could perceive heading from optic flow. Four RP patients and four age-matched normally sighted control subjects viewed displays simulating an observer walking over a ground. In experiment 1, subjects viewed either the entire display with free fixation (full-field condition) or through an aperture with a fixation point at the center (aperture condition). In experiment 2, patients viewed displays of different durations. RP patients' performance was comparable to that of the age-matched control subjects: heading judgment was better in the full-field condition than in the aperture condition. Increasing display duration from 0.5 s to 1 s improved patients' heading performance, but giving them more time (3 s) to gather more visual information did not consistently further improve their performance. RP patients use active scanning eye movements to compensate for their visual field loss in heading perception; they might be able to gather sufficient optic flow information for heading perception in about 1 s.
Multisensory integration: flexible use of general operations
van Atteveldt, Nienke; Murray, Micah M.; Thut, Gregor; Schroeder, Charles
2014-01-01
Research into the anatomical substrates and “principles” for integrating inputs from separate sensory surfaces has yielded divergent findings. This suggests that multisensory integration is flexible and context-dependent, and underlines the need for dynamically adaptive neuronal integration mechanisms. We propose that flexible multisensory integration can be explained by a combination of canonical, population-level integrative operations, such as oscillatory phase-resetting and divisive normalization. These canonical operations subsume multisensory integration into a fundamental set of principles as to how the brain integrates all sorts of information, and they are being used proactively and adaptively. We illustrate this proposition by unifying recent findings from different research themes such as timing, behavioral goal and experience-related differences in integration. PMID:24656248
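Divisive normalization, one of the canonical operations named above, can be pictured with a one-neuron simplification in which the combined drive is divided by a saturating pool term. The sketch below is only a schematic of that idea; the parameter values and function names are assumptions, and it serves merely to show how the familiar inverse-effectiveness pattern (proportionally larger enhancement for weak bimodal inputs) can fall out of such an operation.

```python
import numpy as np

def normalized_response(visual_drive, auditory_drive, sigma=1.0, n=2.0):
    """One-neuron simplification of divisive normalization.

    The linear drive is the sum of visual and auditory inputs; the response is
    this drive raised to an exponent and divided by a saturating pool term.
    sigma (semi-saturation constant) and n (exponent) are illustrative values.
    """
    drive = visual_drive + auditory_drive
    return drive ** n / (sigma ** n + drive ** n)

def enhancement(v, a):
    """Multisensory enhancement: combined response relative to the best unisensory one."""
    best_unisensory = max(normalized_response(v, 0.0), normalized_response(0.0, a))
    return normalized_response(v, a) / best_unisensory

# Inverse effectiveness: weak inputs are enhanced proportionally more than
# strong, near-saturating inputs.
print(enhancement(0.3, 0.3))  # weak inputs   -> enhancement well above 1
print(enhancement(3.0, 3.0))  # strong inputs -> enhancement close to 1
```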
Cecere, Roberto; Gross, Joachim; Willis, Ashleigh; Thut, Gregor
2017-05-24
In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory-visual (AV) or visual-auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV-VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV-VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV-VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AV maps = VA maps versus AV maps ≠ VA maps. The tRSA results favored the AV maps ≠ VA maps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems. SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA). Copyright © 2017 Cecere et al.
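The core comparison in the tRSA described above is whether AV and VA scalp topographies resemble each other at each time point. The simplified sketch below computes a per-timepoint spatial correlation between two ERP map series; it stands in for, but does not reproduce, the full cross-correlation-matrix and model-matrix procedure, and the array shapes, names, and toy data are assumptions.

```python
import numpy as np

def map_similarity_over_time(av_erp, va_erp):
    """Spatial correlation between AV and VA scalp maps at each time point.

    av_erp, va_erp : arrays of shape (n_channels, n_times) holding the
    multisensory ERP components (sum of unisensory responses already removed).
    High correlations would favor an 'AV maps = VA maps' account; values near
    zero favor 'AV maps != VA maps'.
    """
    n_times = av_erp.shape[1]
    return np.array([np.corrcoef(av_erp[:, t], va_erp[:, t])[0, 1]
                     for t in range(n_times)])

# Toy example: 64 channels, 250 samples (roughly 500 ms at 500 Hz)
rng = np.random.default_rng(0)
av = rng.standard_normal((64, 250))
va = rng.standard_normal((64, 250))   # unrelated topographies -> correlations near 0
print(map_similarity_over_time(av, va).mean())
```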
Perception of the Body in Space: Mechanisms
NASA Technical Reports Server (NTRS)
Young, Laurence R.
1991-01-01
The principal topic is the perception of body orientation and motion in space and the extent to which these perceptual abstractions can be related directly to the knowledge of sensory mechanisms, particularly for the vestibular apparatus. Spatial orientation is firmly based on the underlying sensory mechanisms and their central integration. For some of the simplest situations, like rotation about a vertical axis in darkness, the dynamic response of the semicircular canals furnishes almost enough information to explain the sensations of turning and stopping. For more complex conditions involving multiple sensory systems and possible conflicts among their messages, a mechanistic response requires significant speculative assumptions. The models that exist for multisensory spatial orientation are still largely of the non-rational parameter variety. They are capable of predicting relationships among input motions and output perceptions of motion, but they involve computational functions that do not now and perhaps never will have their counterpart in central nervous system machinery. The challenge continues to be in the iterative process of testing models by experiment, correcting them where necessary, and testing them again.
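The claim that semicircular canal dynamics almost suffice to explain the sensations of turning and stopping is often illustrated with a first-order high-pass approximation of canal signalling. The sketch below implements that textbook approximation under assumed parameter values (the time constant in particular is only indicative); it shows the signalled velocity decaying during constant rotation in darkness and reversing when the rotation suddenly stops.

```python
import numpy as np

def canal_response(omega, dt=0.01, tau=16.0):
    """First-order high-pass approximation of semicircular canal signalling.

    omega : head angular velocity over time (deg/s), sampled every dt seconds
    tau   : time constant in seconds (values of roughly 6-20 s are commonly
            cited; the figure used here is an assumption for illustration)
    Returns the canal-signalled velocity, which decays during constant-velocity
    rotation and reverses sign at a sudden stop.
    """
    y = np.zeros_like(omega, dtype=float)
    for i in range(1, len(omega)):
        y[i] = y[i - 1] + (omega[i] - omega[i - 1]) - (dt / tau) * y[i - 1]
    return y

# 60 s of constant rotation at 90 deg/s in darkness, followed by a sudden stop
dt = 0.01
t = np.arange(0.0, 120.0, dt)
omega = np.where(t < 60.0, 90.0, 0.0)
signal = canal_response(omega, dt=dt)
idx = lambda sec: int(round(sec / dt))
print(signal[idx(5)], signal[idx(59)], signal[idx(61)])  # decays toward 0, then reverses
```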
NASA Astrophysics Data System (ADS)
Nomura, Shusaku; Sasaki, Shuntaro; Hirakawa, Masato; Hiwaki, Osamu
2010-11-01
We investigated brain potentials related to the recognition of the Müller-Lyer (ML) figure, a famous optical illusion. Although the ML effect is frequently attributed to the geometrical construction of the figure, the same length misestimation also occurs in the sense of touch (haptic illusion) and is observed in blindfolded and congenitally blind people. This suggests that recognition of the ML figure involves higher-level processing in the brain beyond purely visual mechanisms, yet few brain studies have demonstrated this. We therefore recorded brain waves while subjects perceived the ML figure. A marked difference in brain potentials between the ML and control conditions was observed around the midline of the parietal cortex, a region thought to associate multimodal perceptual information. This result implies that anticipatory processing during the perception of ML figures may arise from the integration of multisensory information.
Visual adaptation enhances action sound discrimination.
Barraclough, Nick E; Page, Steve A; Keefe, Bruce D
2017-01-01
Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between 2 subsequently presented hand action sounds. Discrimination was most enhanced when the visual action "matched" the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.
"Atypical touch perception in MTS may derive from an abnormally plastic self-representation".
Bufalari, Ilaria; Porciello, Giuseppina; Aglioti, Salvatore Maria
2015-01-01
Mirror Touch Synesthetes (MTSs) feel touch while they observe others being touched. According to the authors, two complementary theoretical frameworks, the Threshold Theory and the Self-Other Theory, explain Mirror Touch Synesthesia (MTS). Based on the behavioral evidence that in MTSs the mere observation of touch is sufficient to elicit self-other merging (i.e., self-representation changes), a condition that in non-MTSs just elicits self-other sharing (i.e., mirroring activity without self-other blurring), and on the rTPJ anatomical alterations in MTS, we argue that MTS may derive from an abnormally plastic self-representation and atypical multisensory integrative mechanisms.
The priming function of in-car audio instruction.
Keyes, Helen; Whitmore, Antony; Naneva, Stanislava; McDermott, Daragh
2018-05-01
Studies to date have focused on the priming power of visual road signs, but not the priming potential of audio road scene instruction. Here, the relative priming power of visual, audio, and multisensory road scene instructions was assessed. In a lab-based study, participants responded to target road scene turns following visual, audio, or multisensory road turn primes which were congruent or incongruent to the primes in direction, or control primes. All types of instruction (visual, audio, and multisensory) were successful in priming responses to a road scene. Responses to multisensory-primed targets (both audio and visual) were faster than responses to either audio or visual primes alone. Incongruent audio primes did not affect performance negatively in the manner of incongruent visual or multisensory primes. Results suggest that audio instructions have the potential to prime drivers to respond quickly and safely to their road environment. Peak performance will be observed if audio and visual road instruction primes can be timed to co-occur.
Liu, Pan; Rigoulot, Simon; Pell, Marc D
2017-12-01
To explore how cultural immersion modulates emotion processing, this study examined how Chinese immigrants to Canada process multisensory emotional expressions, which were compared to existing data from two groups, Chinese and North Americans. Stroop and Oddball paradigms were employed to examine different stages of emotion processing. The Stroop task presented face-voice pairs expressing congruent/incongruent emotions and participants actively judged the emotion of one modality while ignoring the other. A significant effect of cultural immersion was observed in the immigrants' behavioral performance, which showed greater interference from to-be-ignored faces, comparable with what was observed in North Americans. However, this effect was absent in their N400 data, which retained the same pattern as the Chinese. In the Oddball task, where immigrants passively viewed facial expressions with/without simultaneous vocal emotions, they exhibited a larger visual MMN for faces accompanied by voices, again mirroring patterns observed in Chinese. Correlation analyses indicated that the immigrants' living duration in Canada was associated with neural patterns (N400 and visual mismatch negativity) more closely resembling North Americans. Our data suggest that in multisensory emotion processing, adapting to a new culture first leads to behavioral accommodation followed by alterations in brain activities, providing new evidence on human neurocognitive plasticity in communication.
Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.
2012-01-01
We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models for quantifying human performance in simulated environments, which have emerged from research in multisensory perception, provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: Detection and categorisation of auditory and kinematic motion cues; Experiment 2: Performance evaluation in a target-tracking task; Experiment 3: Transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in Experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (Experiment 2). In Experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068
Li, Qi; Yu, Hongtao; Wu, Yan; Gao, Ning
2016-08-26
The integration of multiple sensory inputs is essential for perception of the external world. The spatial factor is a fundamental property of multisensory audiovisual integration. Previous studies of the spatial constraints on bimodal audiovisual integration have mainly focused on the spatial congruity of audiovisual information. However, the effect of spatial reliability within audiovisual information on bimodal audiovisual integration remains unclear. In this study, we used event-related potentials (ERPs) to examine the effect of spatial reliability of task-irrelevant sounds on audiovisual integration. Three relevant ERP components emerged: the first at 140-200 ms over a wide central area, the second at 280-320 ms over the fronto-central area, and a third at 380-440 ms over the parieto-occipital area. Our results demonstrate that ERP amplitudes elicited by audiovisual stimuli with reliable spatial relationships are larger than those elicited by stimuli with inconsistent spatial relationships. In addition, we hypothesized that spatial reliability within an audiovisual stimulus enhances feedback projections to the primary visual cortex from multisensory integration regions. Overall, our findings suggest that the spatial linking of visual and auditory information depends on spatial reliability within an audiovisual stimulus and occurs at a relatively late stage of processing. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Monkeys and Humans Share a Common Computation for Face/Voice Integration
Chandrasekaran, Chandramouli; Lemus, Luis; Trubanova, Andrea; Gondan, Matthias; Ghazanfar, Asif A.
2011-01-01
Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a “race” model failed to account for their behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates. PMID:21998576
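Race-model accounts of the kind the abstract rules out are conventionally tested by comparing redundant-target reaction-time distributions against the probability-summation bound. The sketch below shows that standard bound test on synthetic data; it is not the authors' analysis, and the simulated reaction times and quantile grid are purely illustrative assumptions.

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, quantiles=np.linspace(0.05, 0.95, 19)):
    """Test redundant-target reaction times against the race-model bound.

    rt_av, rt_a, rt_v : reaction times (s) for audiovisual, auditory-only and
    visual-only trials. An independent race predicts, at every time t,
        F_AV(t) <= F_A(t) + F_V(t)
    where F is the cumulative response-time distribution. Positive return
    values mark violations, i.e. facilitation beyond probability summation.
    """
    t = np.quantile(rt_av, quantiles)                   # evaluate at AV quantiles
    cdf = lambda rts, x: np.mean(rts[:, None] <= x[None, :], axis=0)
    bound = np.minimum(cdf(rt_a, t) + cdf(rt_v, t), 1.0)
    return cdf(rt_av, t) - bound

# Synthetic data: AV responses faster than either unisensory condition
rng = np.random.default_rng(1)
rt_a = rng.normal(0.40, 0.05, 200)
rt_v = rng.normal(0.42, 0.05, 200)
rt_av = rng.normal(0.33, 0.04, 200)
print(race_model_violation(rt_av, rt_a, rt_v).max())    # > 0: bound violated
```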
Noel, Jean-Paul; Kurela, LeAnne; Baum, Sarah H; Yu, Hong; Neimat, Joseph S; Gallagher, Martin J; Wallace, Mark
2017-05-01
Cognitive and perceptual comorbidities frequently accompany epilepsy and psychogenic nonepileptic events (PNEE). However, and despite the fact that perceptual function is built upon a multisensory foundation, little knowledge exists concerning multisensory function in these populations. Here, we characterized facets of multisensory processing abilities in patients with epilepsy and PNEE, and probed the relationship between individual resting-state EEG complexity and these psychophysical measures in each patient. We prospectively studied a cohort of patients with epilepsy (N=18) and patients with PNEE (N=20) who were admitted to Vanderbilt's Epilepsy Monitoring Unit (EMU) and weaned off of anticonvulsant drugs. Unaffected age-matched persons staying with the patients in the EMU (N=15) were also recruited as controls. All participants performed two tests of multisensory function: an audio-visual simultaneity judgment and an audio-visual redundant target task. Further, in the cohort of patients with epilepsy and PNEE we quantified resting state EEG gamma power and complexity. Compared with both patients with epilepsy and control subjects, patients with PNEE exhibited significantly poorer acuity in audiovisual temporal function as evidenced in significantly larger temporal binding windows (i.e., they perceived larger stimulus asynchronies as being presented simultaneously). These differences appeared to be specific for temporal function, as there was no difference among the three groups in a non-temporally based measure of multisensory function, the redundant target task. Further, patients with PNEE exhibited more complex resting state EEG patterns as compared to the patients with epilepsy, and EEG complexity correlated with multisensory temporal performance in a subject-by-subject manner. Taken together, findings seem to indicate that patients with PNEE bind information from audition and vision over larger temporal intervals when compared with control subjects as well as patients with epilepsy. This difference in multisensory function appears to be specific to the temporal domain, and may be a contributing factor to the behavioral and perceptual alterations seen in this population. Published by Elsevier Inc.
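The abstract does not specify how resting-state EEG complexity was computed; one measure commonly used for this purpose is Lempel-Ziv complexity of a median-binarized signal, sketched below purely as an illustration. The parsing variant, the normalization, and the synthetic test signals are assumptions, not the study's method.

```python
import numpy as np

def lempel_ziv_complexity(signal):
    """Dictionary-based Lempel-Ziv complexity of a median-binarized 1-D signal.

    The signal is binarized at its median and parsed into previously unseen
    phrases; the phrase count is normalized so that random sequences score
    near (or above) 1 and highly regular sequences score much lower.
    """
    med = np.median(signal)
    s = ''.join('1' if x > med else '0' for x in signal)
    phrases, i, n = set(), 0, len(s)
    while i < n:
        j = i + 1
        while s[i:j] in phrases and j <= n:
            j += 1
        phrases.add(s[i:j])
        i = j
    return len(phrases) * np.log2(n) / n

rng = np.random.default_rng(2)
print(lempel_ziv_complexity(rng.standard_normal(5000)))       # noise   -> higher value
print(lempel_ziv_complexity(np.sin(np.arange(5000) * 0.05)))  # regular -> lower value
```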
Ghose, D; Wallace, M T
2014-01-03
Multisensory integration has been widely studied in neurons of the mammalian superior colliculus (SC). This has led to the description of various determinants of multisensory integration, including those based on stimulus- and neuron-specific factors. The most widely characterized of these illustrate the importance of the spatial and temporal relationships of the paired stimuli as well as their relative effectiveness in eliciting a response in determining the final integrated output. Although these stimulus-specific factors have generally been considered in isolation (i.e., manipulating stimulus location while holding all other factors constant), they have an intrinsic interdependency that has yet to be fully elucidated. For example, changes in stimulus location will likely also impact both the temporal profile of response and the effectiveness of the stimulus. The importance of better describing this interdependency is further reinforced by the fact that SC neurons have large receptive fields, and that responses at different locations within these receptive fields are far from equivalent. To address these issues, the current study was designed to examine the interdependency between the stimulus factors of space and effectiveness in dictating the multisensory responses of SC neurons. The results show that neuronal responsiveness changes dramatically with changes in stimulus location - highlighting a marked heterogeneity in the spatial receptive fields of SC neurons. More importantly, this receptive field heterogeneity played a major role in the integrative product exhibited by stimulus pairings, such that pairings at weakly responsive locations of the receptive fields resulted in the largest multisensory interactions. Together these results provide greater insight into the interrelationship of the factors underlying multisensory integration in SC neurons, and may have important mechanistic implications for multisensory integration and the role it plays in shaping SC-mediated behaviors. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
Virtually-augmented interfaces for tactical aircraft.
Haas, M W
1995-05-01
The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and non-virtual concepts and devices across the visual, auditory and haptic sensory modalities. A fusion interface is a multi-sensory virtually-augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion-interface concepts. One of the virtual concepts to be investigated in the Fusion Interfaces for Tactical Environments facility (FITE) is the application of EEG and other physiological measures for virtual control of functions within the flight environment. FITE is a specialized flight simulator which allows efficient concept development through the use of rapid prototyping followed by direct experience of new fusion concepts. The FITE facility also supports evaluation of fusion concepts by operational fighter pilots in a high fidelity simulated air combat environment. The facility was utilized by a multi-disciplinary team composed of operational pilots, human-factors engineers, electronics engineers, computer scientists, and experimental psychologists to prototype and evaluate the first multi-sensory, virtually-augmented cockpit. The cockpit employed LCD-based head-down displays, a helmet-mounted display, three-dimensionally localized audio displays, and a haptic display. This paper will endeavor to describe the FITE facility architecture, some of the characteristics of the FITE virtual display and control devices, and the potential application of EEG and other physiological measures within the FITE facility.
Multisensory Integration in the Virtual Hand Illusion with Active Movement
Satoh, Satoru; Hachimura, Kozaburo
2016-01-01
Improving the sense of immersion is one of the core issues in virtual reality. Perceptual illusions of ownership can be perceived over a virtual body in a multisensory virtual reality environment. Rubber Hand and Virtual Hand Illusions showed that body ownership can be manipulated by applying suitable visual and tactile stimulation. In this study, we investigate the effects of multisensory integration in the Virtual Hand Illusion with active movement. A virtual xylophone playing system which can interactively provide synchronous visual, tactile, and auditory stimulation was constructed. We conducted two experiments regarding different movement conditions and different sensory stimulations. Our results demonstrate that multisensory integration with free active movement can improve the sense of immersion in virtual reality. PMID:27847822
Movement Sonification: Effects on Motor Learning beyond Rhythmic Adjustments.
Effenberg, Alfred O; Fehse, Ursula; Schmitz, Gerd; Krueger, Bjoern; Mechling, Heinz
2016-01-01
Motor learning is based on motor perception and emergent perceptual-motor representations. Much behavioral research has addressed single perceptual modalities, but over the last two decades the contribution of multimodal perception to motor behavior has received increasing attention. A growing number of studies indicates an enhanced impact of multimodal stimuli on motor perception, motor control and motor learning in terms of better precision and higher reliability of the related actions. Behavioral research is supported by neurophysiological data, revealing that multisensory integration supports motor control and learning. But the overwhelming part of both research lines is dedicated to basic research. Besides research in the domains of music, dance and motor rehabilitation, there is almost no evidence for enhanced effectiveness of multisensory information on learning of gross motor skills. To reduce this gap, movement sonification is used here in applied research on motor learning in sports. Based on the current knowledge on the multimodal organization of the perceptual system, we generate additional real-time movement information being suitable for integration with perceptual feedback streams of visual and proprioceptive modality. With ongoing training, synchronously processed auditory information should be initially integrated into the emerging internal models, enhancing the efficacy of motor learning. This is achieved by a direct mapping of kinematic and dynamic motion parameters to electronic sounds, resulting in continuous auditory and convergent audiovisual or audio-proprioceptive stimulus arrays. In sharp contrast to other approaches using acoustic information as error-feedback in motor learning settings, we try to generate additional movement information suitable for acceleration and enhancement of adequate sensorimotor representations and processible below the level of consciousness. In the experimental setting, participants were asked to learn a closed motor skill (technique acquisition of indoor rowing). One group was treated with visual information and two groups with audiovisual information (sonification vs. natural sounds). For all three groups learning became evident and remained stable. Participants treated with additional movement sonification showed better performance compared to both other groups. Results indicate that movement sonification enhances motor learning of a complex gross motor skill, even exceeding the usually expected acoustic rhythmic effects on motor learning.
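The "direct mapping of kinematic and dynamic motion parameters to electronic sounds" can be pictured with a minimal offline parameter-mapping sketch in which movement speed drives both the pitch and the loudness of a continuous tone. The frequency range, sampling choices, and toy stroke profile below are assumptions for illustration, not the sonification actually used in the study.

```python
import numpy as np

def sonify_velocity(velocity, dt=0.01, sr=44100, f_lo=220.0, f_hi=880.0):
    """Map a movement-velocity trace to a continuous tone (direct parameter mapping).

    velocity : 1-D array of speeds sampled every dt seconds (e.g., handle speed
    on a rowing ergometer). Faster movement -> higher pitch and louder output.
    Returns an audio waveform sampled at rate sr.
    """
    v = np.abs(np.asarray(velocity, dtype=float))
    v = v / (v.max() + 1e-9)                      # normalize speed to 0..1
    t_move = np.arange(len(v)) * dt
    t_audio = np.arange(0, t_move[-1], 1.0 / sr)
    v_audio = np.interp(t_audio, t_move, v)       # resample to audio rate
    freq = f_lo + (f_hi - f_lo) * v_audio         # velocity -> pitch
    phase = 2 * np.pi * np.cumsum(freq) / sr      # integrate frequency over time
    return v_audio * np.sin(phase)                # velocity -> loudness

# Toy stroke profile: accelerate, hold, decelerate (2 s of movement data)
vel = np.concatenate([np.linspace(0, 2, 50), np.full(100, 2.0), np.linspace(2, 0, 50)])
audio = sonify_velocity(vel)
print(audio.shape, audio.min(), audio.max())
```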
Movement Sonification: Effects on Motor Learning beyond Rhythmic Adjustments
Effenberg, Alfred O.; Fehse, Ursula; Schmitz, Gerd; Krueger, Bjoern; Mechling, Heinz
2016-01-01
Motor learning is based on motor perception and emergent perceptual-motor representations. Much behavioral research has addressed single perceptual modalities, but over the last two decades the contribution of multimodal perception to motor behavior has received increasing attention. A growing number of studies indicates an enhanced impact of multimodal stimuli on motor perception, motor control and motor learning in terms of better precision and higher reliability of the related actions. Behavioral research is supported by neurophysiological data, revealing that multisensory integration supports motor control and learning. But the overwhelming part of both research lines is dedicated to basic research. Besides research in the domains of music, dance and motor rehabilitation, there is almost no evidence for enhanced effectiveness of multisensory information on learning of gross motor skills. To reduce this gap, movement sonification is used here in applied research on motor learning in sports. Based on the current knowledge on the multimodal organization of the perceptual system, we generate additional real-time movement information being suitable for integration with perceptual feedback streams of visual and proprioceptive modality. With ongoing training, synchronously processed auditory information should be initially integrated into the emerging internal models, enhancing the efficacy of motor learning. This is achieved by a direct mapping of kinematic and dynamic motion parameters to electronic sounds, resulting in continuous auditory and convergent audiovisual or audio-proprioceptive stimulus arrays. In sharp contrast to other approaches using acoustic information as error-feedback in motor learning settings, we try to generate additional movement information suitable for acceleration and enhancement of adequate sensorimotor representations and processible below the level of consciousness. In the experimental setting, participants were asked to learn a closed motor skill (technique acquisition of indoor rowing). One group was treated with visual information and two groups with audiovisual information (sonification vs. natural sounds). For all three groups learning became evident and remained stable. Participants treated with additional movement sonification showed better performance compared to both other groups. Results indicate that movement sonification enhances motor learning of a complex gross motor skill, even exceeding the usually expected acoustic rhythmic effects on motor learning. PMID:27303255
DOT National Transportation Integrated Search
2004-03-20
A means of quantifying the cluttering effects of symbols is needed to evaluate the impact of displaying an increasing volume of information on aviation displays such as head-up displays. Human visual perception has been successfully modeled by algori...
Audiovisual integration in depth: multisensory binding and gain as a function of distance.
Noel, Jean-Paul; Modi, Kahan; Wallace, Mark T; Van der Stoep, Nathan
2018-07-01
The integration of information across sensory modalities is dependent on the spatiotemporal characteristics of the stimuli that are paired. Despite large variation in the distance over which events occur in our environment, relatively little is known regarding how stimulus-observer distance affects multisensory integration. Prior work has suggested that exteroceptive stimuli are integrated over larger temporal intervals in near relative to far space, and that larger multisensory facilitations are evident in far relative to near space. Here, we sought to examine the interrelationship between these previously established distance-related features of multisensory processing. Participants performed an audiovisual simultaneity judgment and redundant target task in near and far space, while audiovisual stimuli were presented at a range of temporal delays (i.e., stimulus onset asynchronies). In line with the previous findings, temporal acuity was poorer in near relative to far space. Furthermore, reaction time to asynchronously presented audiovisual targets suggested a temporal window for fast detection, a range of stimulus asynchronies that was also larger in near as compared to far space. However, the range of reaction times over which multisensory response enhancement was observed was limited to a restricted range of relatively small (i.e., 150 ms) asynchronies, and did not differ significantly between near and far space. Furthermore, for synchronous presentations, these distance-related (i.e., near vs. far) modulations in temporal acuity and multisensory gain correlated negatively at an individual subject level. Thus, the findings support the conclusion that multisensory temporal binding and gain are asymmetrically modulated as a function of distance from the observer, and indicate that this relationship is specific to temporally synchronous audiovisual stimulus presentations.
van Atteveldt, Nienke M; Blau, Vera C; Blomert, Leo; Goebel, Rainer
2010-02-02
Efficient multisensory integration is of vital importance for adequate interaction with the environment. In addition to basic binding cues like temporal and spatial coherence, meaningful multisensory information is also bound together by content-based associations. Many functional Magnetic Resonance Imaging (fMRI) studies propose the (posterior) superior temporal cortex (STC) as the key structure for integrating meaningful multisensory information. However, a still unanswered question is how superior temporal cortex encodes content-based associations, especially in light of inconsistent results from studies comparing brain activation to semantically matching (congruent) versus nonmatching (incongruent) multisensory inputs. Here, we used fMR-adaptation (fMR-A) in order to circumvent potential problems with standard fMRI approaches, including spatial averaging and amplitude saturation confounds. We presented repetitions of audiovisual stimuli (letter-speech sound pairs) and manipulated the associative relation between the auditory and visual inputs (congruent/incongruent pairs). We predicted that if multisensory neuronal populations exist in STC and encode audiovisual content relatedness, adaptation should be affected by the manipulated audiovisual relation. The results revealed an occipital-temporal network that adapted independently of the audiovisual relation. Interestingly, several smaller clusters distributed over superior temporal cortex within that network adapted more strongly to congruent than to incongruent audiovisual repetitions, indicating sensitivity to content congruency. These results suggest that the revealed clusters contain multisensory neuronal populations that encode content relatedness by selectively responding to congruent audiovisual inputs, since unisensory neuronal populations are assumed to be insensitive to the audiovisual relation. These findings extend our previously revealed mechanism for the integration of letters and speech sounds and demonstrate that fMR-A is sensitive to multisensory congruency effects that may not be revealed in BOLD amplitude per se.
Yildirim, Ilker; Jacobs, Robert A
2015-06-01
If a person is trained to recognize or categorize objects or events using one sensory modality, the person can often recognize or categorize those same (or similar) objects and events via a novel modality. This phenomenon is an instance of cross-modal transfer of knowledge. Here, we study the Multisensory Hypothesis which states that people extract the intrinsic, modality-independent properties of objects and events, and represent these properties in multisensory representations. These representations underlie cross-modal transfer of knowledge. We conducted an experiment evaluating whether people transfer sequence category knowledge across auditory and visual domains. Our experimental data clearly indicate that we do. We also developed a computational model accounting for our experimental results. Consistent with the probabilistic language of thought approach to cognitive modeling, our model formalizes multisensory representations as symbolic "computer programs" and uses Bayesian inference to learn these representations. Because the model demonstrates how the acquisition and use of amodal, multisensory representations can underlie cross-modal transfer of knowledge, and because the model accounts for subjects' experimental performances, our work lends credence to the Multisensory Hypothesis. Overall, our work suggests that people automatically extract and represent objects' and events' intrinsic properties, and use these properties to process and understand the same (and similar) objects and events when they are perceived through novel sensory modalities.
Foxe, John J.; Molholm, Sophie; Del Bene, Victor A.; Frey, Hans-Peter; Russo, Natalie N.; Blanco, Daniella; Saint-Amour, Dave; Ross, Lars A.
2015-01-01
Under noisy listening conditions, visualizing a speaker's articulations substantially improves speech intelligibility. This multisensory speech integration ability is crucial to effective communication, and the appropriate development of this capacity greatly impacts a child's ability to successfully navigate educational and social settings. Research shows that multisensory integration abilities continue developing late into childhood. The primary aim here was to track the development of these abilities in children with autism, since multisensory deficits are increasingly recognized as a component of the autism spectrum disorder (ASD) phenotype. The abilities of high-functioning ASD children (n = 84) to integrate seen and heard speech were assessed cross-sectionally while environmental noise levels were systematically manipulated, and were compared with those of age-matched neurotypical children (n = 142). Severe integration deficits were uncovered in ASD, which were increasingly pronounced as background noise increased. These deficits were evident in school-aged ASD children (5–12 year olds), but were fully ameliorated in ASD children entering adolescence (13–15 year olds). The severity of multisensory deficits uncovered has important implications for educators and clinicians working in ASD. We consider the observation that the multisensory speech system recovers substantially in adolescence as an indication that it is likely amenable to intervention during earlier childhood, with potentially profound implications for the development of social communication abilities in ASD children. PMID:23985136
Multisensory environments for leisure: promoting well-being in nursing home residents with dementia.
Cox, Helen; Burns, Ian; Savage, Sally
2004-02-01
Multisensory environments such as Snoezelen rooms are becoming increasingly popular in health care facilities for older individuals. There is limited reliable evidence of the benefits of such innovations, and the effect they have on residents, caregivers, and visitors in these facilities. This two-stage project examined how effective two types of multisensory environments were in improving the well-being of older individuals with dementia. The two multisensory environments were a Snoezelen room and a landscaped garden. These environments were compared to the experience of the normal living environment. The observed response of 24 residents with dementia in a nursing home was measured during time spent in the Snoezelen room, in the garden, and in the living room. In the second part of the project, face-to-face interviews were conducted with six caregivers and six visitors to obtain their responses to the multisensory environments. These interviews identified the components of the environments most used and enjoyed by residents and the ways in which they could be improved to maximize well-being.
Coordinates of Human Visual and Inertial Heading Perception
Crane, Benjamin Thomas
2015-01-01
Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile these stimuli are sensed relative to different reference frames and it remains unclear if a perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position were examined. The observations were fit using a two degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and coordinate system offset. For visual stimuli gaze shifts caused shifts in perceived head estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased towards retina coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results. PMID:26267865
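The two degree-of-freedom decoder described above can be illustrated with a short sketch. For a uniform population of cosine-tuned units the population vector reduces to the stimulus direction vector, so the sketch below keeps only that reduced geometry: one parameter scales sensitivity to the lateral component of motion and a second rotates the stimulus into the decoder's reference frame. The gain and offset values are illustrative, not the fitted parameters reported in the study.

    import numpy as np

    def perceived_heading(true_heading_deg, lateral_gain=0.7, offset_deg=0.0):
        # Rotate the stimulus into the decoder's reference frame (offset_deg),
        # then decode heading from the fore-aft and down-weighted lateral components.
        theta = np.deg2rad(true_heading_deg + offset_deg)
        forward, lateral = np.cos(theta), np.sin(theta)
        return np.rad2deg(np.arctan2(lateral_gain * lateral, forward))

    # Illustrative values only: a 30 deg heading with a 25 deg gaze-related offset
    print(perceived_heading(30.0, lateral_gain=0.7, offset_deg=25.0))

With a lateral gain below one, decoded headings compress toward the fore-aft axis, and the offset shifts them by a fixed angle, which are the two effects the model is meant to separate.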
NASA Astrophysics Data System (ADS)
Hyde, Jerald R.
2004-05-01
It is clear to those who "listen" to concert halls and evaluate their degree of acoustical success that it is quite difficult to separate the acoustical response at a given seat from the multi-modal perception of the whole event. Objective concert hall data have been collected for the purpose of finding a link with their related subjective evaluation and ultimately with the architectural correlates which produce the sound field. This exercise, while important, tends to miss the point that a concert or opera event utilizes all the senses, of which the sound field and visual stimuli are both major contributors to the experience. Objective acoustical factors point to visual input as being significant in the perception of "acoustical intimacy" and in the perception of loudness versus distance in large halls. This paper will review the evidence of visual input as a factor in what we "hear" and introduce concepts of perceptual constancy, distance perception, static and dynamic visual stimuli, and the general process of the psychology of the integrated experience. A survey of acousticians on their opinions about the auditory-visual aspects of the concert hall experience will be presented. [Work supported in part by the Veneklasen Research Foundation and Veneklasen Associates.]
Electrophysiological evidence for a self-processing advantage during audiovisual speech integration.
Treille, Avril; Vilain, Coriandre; Kandel, Sonia; Sato, Marc
2017-09-01
Previous electrophysiological studies have provided strong evidence for early multisensory integrative mechanisms during audiovisual speech perception. From these studies, one unanswered issue is whether hearing our own voice and seeing our own articulatory gestures facilitate speech perception, possibly through a better processing and integration of sensory inputs with our own sensory-motor knowledge. The present EEG study examined the impact of self-knowledge during the perception of auditory (A), visual (V) and audiovisual (AV) speech stimuli that were previously recorded from the participant or from a speaker he/she had never met. Audiovisual interactions were estimated by comparing N1 and P2 auditory evoked potentials during the bimodal condition (AV) with the sum of those observed in the unimodal conditions (A + V). In line with previous EEG studies, our results revealed an amplitude decrease of P2 auditory evoked potentials in AV compared to A + V conditions. Crucially, a temporal facilitation of N1 responses was observed during the visual perception of self speech movements compared to those of another speaker. This facilitation was negatively correlated with the saliency of visual stimuli. These results provide evidence for a temporal facilitation of the integration of auditory and visual speech signals when the visual situation involves our own speech gestures.
ERIC Educational Resources Information Center
Jambunathan, Saigeetha
2012-01-01
The present project studied the relationship between the use of developmentally appropriate practices and children's perception of self-competence in Head Start classrooms. Self-competence is defined as children's confidence in succeeding in certain tasks. Developmentally appropriate practices (DAP) as proposed by the National Association for the…
Peterson, Amanda D; Goodell, L Suzanne; Hegde, Archana; Stage, Virginia C
2017-05-01
Objective: To develop a theory that explains the process of how teachers' perception of multilevel policies may influence nutrition education (NE) teaching strategies in Head Start preschools. Design: Semistructured telephone interviews. Setting: North Carolina Head Start preschools. Participants: Thirty-two Head Start teachers. Analysis: All interviews were transcribed verbatim. Following a grounded theory approach, authors coded interview data for emergent themes. Results: Two primary themes emerged during analysis, including teachers' policy perceptions and teacher-perceived influence of policy on NE. A theoretical model was developed to explain how teachers' perceptions of policies influenced NE (eg, teaching strategies) in the classroom. Teachers discussed multiple policy areas governing their classrooms and limiting their ability to provide meaningful and consistent NE. How teachers perceived the level of regulation in the classroom (ie, high or low) influenced the frequency with which they used specific teaching strategies. Conclusions and Implications: Despite federal policies supporting the provision of NE, teachers face competing priorities in the classroom (eg, school readiness vs NE) and policies may conflict with standardized NE curricula. To understand how Head Start centers develop local policies, additional research should investigate how administrators interpret federal and state policies. Copyright © 2017 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Combined mirror visual and auditory feedback therapy for upper limb phantom pain: a case report
2011-01-01
Introduction: Phantom limb sensation and phantom limb pain are very common after amputations. In recent years, accumulating data have implicated 'mirror visual feedback' or 'mirror therapy' as helpful in the treatment of phantom limb sensation and phantom limb pain. Case presentation: We present the case of a 24-year-old Caucasian man, a left upper limb amputee, treated with mirror visual feedback combined with auditory feedback, with improved pain relief. Conclusion: This case may suggest that auditory feedback might enhance the effectiveness of mirror visual feedback and serve as a valuable addition to the complex multi-sensory processing of body perception in patients who are amputees. PMID:21272334
Sensorimotor Adaptation Following Exposure to Ambiguous Inertial Motion Cues
NASA Technical Reports Server (NTRS)
Wood, S. J.; Clement, G. R.; Harm, D. L.; Rupert, A. H.; Guedry, F. E.; Reschke, M. F.
2005-01-01
The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive accurate spatial orientation awareness. Our general hypothesis is that the central nervous system utilizes both multi-sensory integration and frequency segregation as neural strategies to resolve the ambiguity of tilt and translation stimuli. Movement in an altered gravity environment, such as weightlessness without a stable gravity reference, results in new patterns of sensory cues. For example, the semicircular canals, vision and neck proprioception provide information about head tilt on orbit without the normal otolith head-tilt position that is omnipresent on Earth. Adaptive changes in how inertial cues from the otolith system are integrated with other sensory information lead to perceptual and postural disturbances upon return to Earth's gravity. The primary goals of this ground-based research investigation are to explore physiological mechanisms and operational implications of disorientation and tilt-translation disturbances reported by crewmembers during and following re-entry, and to evaluate a tactile prosthesis as a countermeasure for improving control of whole-body orientation during tilt and translation motion.
Effect of gravito-inertial cues on the coding of orientation in pre-attentive vision.
Stivalet, P; Marendaz, C; Barraclough, L; Mourareau, C
1995-01-01
To see if the spatial reference frame used by pre-attentive vision is specified in a retino-centered frame or in a reference frame integrating visual and nonvisual information (vestibular and somatosensory), subjects were centrifuged in a non-pendular cabin and were asked to search for a target distinguishable from distractors by difference in orientation (Treisman's "pop-out" paradigm [1]). In a control condition, in which subjects were sitting immobilized but not centrifuged, this task gave an asymmetric search pattern: Search was rapid and pre-attentional except when the target was aligned with the horizontal retinal/head axis, in which case search was slow and attentional (2). Results using a centrifuge showed that slow/serial search patterns were obtained when the target was aligned with the subjective horizontal axis (and not with the horizontal retinal/head axis). These data suggest that a multisensory reference frame is used in pre-attentive vision. The results are interpreted in terms of Riccio and Stoffregen's "ecological theory" of orientation in which the vertical and horizontal axes constitute independent reference frames (3).
Multisensory information boosts numerical matching abilities in young children.
Jordan, Kerry E; Baker, Joseph
2011-03-01
This study presents the first evidence that preschool children perform more accurately in a numerical matching task when given multisensory rather than unisensory information about number. Three- to 5-year-old children learned to play a numerical matching game on a touchscreen computer, which asked them to match a sample numerosity with a numerically equivalent choice numerosity. Samples consisted of a series of visual squares on some trials, a series of auditory tones on other trials, and synchronized squares and tones on still other trials. Children performed at chance on this matching task when provided with either type of unisensory sample, but improved significantly when provided with multisensory samples. There was no speed–accuracy tradeoff between unisensory and multisensory trial types. Thus, these findings suggest that intersensory redundancy may improve young children’s abilities to match numerosities.
Heyn, Patricia
2003-01-01
A multisensory exercise approach that evokes the stimulation and use of various senses, such as combining physical and cognitive stimuli, can assist in the management of persons with Alzheimer's disease (AD). The objective of this study was to evaluate the outcomes of a multisensory exercise program on cognitive function (engagement), behavior (mood), and physiological indices (blood pressure, resting heart rate, and weight) in 13 nursing home residents diagnosed with moderate to severe AD. A one-group pretest/post-test, quasi-experimental design was used. The program combined a variety of sensory stimulations, integrating storytelling and imaging strategies. Results showed an improvement in resting heart rate, overall mood, and in engagement of physical activity. The findings suggest that a multisensory exercise approach can be beneficial for individuals with AD.
Tinga, Angelica Maria; Visser-Meily, Johanna Maria Augusta; van der Smagt, Maarten Jeroen; Van der Stigchel, Stefan; van Ee, Raymond; Nijboer, Tanja Cornelia Wilhelmina
2016-03-01
The aim of this systematic review was to integrate and assess evidence for the effectiveness of multisensory stimulation (i.e., stimulating at least two of the following sensory systems: visual, auditory, and somatosensory) as a possible rehabilitation method after stroke. Evidence was considered with a focus on low-level, perceptual (visual, auditory and somatosensory deficits), as well as higher-level, cognitive, sensory deficits. We searched the electronic databases Scopus and PubMed for articles published before May 2015. Studies were included which evaluated the effects of multisensory stimulation on patients with low- or higher-level sensory deficits caused by stroke. Twenty-one studies were included in this review and the quality of these studies was assessed (based on eight elements: randomization, inclusion of control patient group, blinding of participants, blinding of researchers, follow-up, group size, reporting effect sizes, and reporting time post-stroke). Twenty of the twenty-one included studies demonstrated beneficial effects on low- and/or higher-level sensory deficits after stroke. Notwithstanding these beneficial effects, the quality of the studies is insufficient for a valid conclusion that multisensory stimulation can be successfully applied as an effective intervention. A valuable and necessary next step would be to set up well-designed randomized controlled trials to examine the effectiveness of multisensory stimulation as an intervention for low- and/or higher-level sensory deficits after stroke. Finally, we consider the potential mechanisms of multisensory stimulation for rehabilitation to guide this future research.
The effect of face eccentricity on the perception of gaze direction.
Todorović, Dejan
2009-01-01
The perception of a looker's gaze direction depends not only on iris eccentricity (the position of the looker's irises within the sclera) but also on the orientation of the lookers' head. One among several potential cues of head orientation is face eccentricity, the position of the inner features of the face (eyes, nose, mouth) within the head contour, as viewed by the observer. For natural faces this cue is confounded with many other head-orientation cues, but in schematic faces it can be studied in isolation. Salient novel illustrations of the effectiveness of face eccentricity are 'Necker faces', which involve equal iris eccentricities but multiple perceived gaze directions. In four experiments, iris and face eccentricity in schematic faces were manipulated, revealing strong and consistent effects of face eccentricity on perceived gaze direction, with different types of tasks. An additional experiment confirmed the 'Mona Lisa' effect with this type of stimuli. Face eccentricity most likely acted as a simple but robust cue of head turn. A simple computational account of combined effects of cues of eye and head turn on perceived gaze direction is presented, including a formal condition for the perception of direct gaze. An account of the 'Mona Lisa' effect is presented.
Brake, Maria K; Jain, Lauren; Hart, Robert D; Trites, Jonathan R B; Rigby, Matthew; Taylor, S Mark
2014-08-01
Objective: Patients who have undergone treatment for head and neck cancer are at risk for neck lymphedema, which can severely affect quality of life. Liposuction has been used successfully for cancer patients who suffer from posttreatment limb lymphedema. The purpose of our study was to review the outcomes of head and neck cancer patients at our center who have undergone submental liposuction for posttreatment lymphedema. Study design: Prospective cohort study. Setting: Oncology center in tertiary hospital setting. Subjects and methods: Head and neck cancer patients who underwent submental liposuction for posttreatment lymphedema were included. Nine patients met the study criteria. Patients completed 2 surveys (Modified Blepharoplasty Outcome Evaluation and the validated Derriford Appearance Scale) pre- and postoperatively to assess satisfaction. Patients' pre- and postoperative photos were graded by independent observers to assess outcomes objectively. Results: Our study demonstrated a statistically significant improvement in patients' self-perception of appearance and statistically significant objective scoring of appearance following submental liposuction. Conclusion: Submental liposuction improves the appearance and quality of life for head and neck cancer patients suffering from posttreatment lymphedema by way of improving their self-perception and self-confidence. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2014.
Sensory convergence in the parieto-insular vestibular cortex
Shinder, Michael E.
2014-01-01
Vestibular signals are pervasive throughout the central nervous system, including the cortex, where they likely play different roles than they do in the better studied brainstem. Little is known about the parieto-insular vestibular cortex (PIVC), an area of the cortex with prominent vestibular inputs. Neural activity was recorded in the PIVC of rhesus macaques during combinations of head, body, and visual target rotations. Activity of many PIVC neurons was correlated with the motion of the head in space (vestibular), the twist of the neck (proprioceptive), and the motion of a visual target, but was not associated with eye movement. PIVC neurons responded most commonly to more than one stimulus, and responses to combined movements could often be approximated by a combination of the individual sensitivities to head, neck, and target motion. The pattern of visual, vestibular, and somatic sensitivities on PIVC neurons displayed a continuous range, with some cells strongly responding to one or two of the stimulus modalities while other cells responded to any type of motion equivalently. The PIVC contains multisensory convergence of self-motion cues with external visual object motion information, such that neurons do not represent a specific transformation of any one sensory input. Instead, the PIVC neuron population may define the movement of head, body, and external visual objects in space and relative to one another. This comparison of self and external movement is consistent with insular cortex functions related to monitoring and explains many disparate findings of previous studies. PMID:24671533
Bayesian networks and information theory for audio-visual perception modeling.
Besson, Patricia; Richiardi, Jonas; Bourdin, Christophe; Bringoux, Lionel; Mestre, Daniel R; Vercher, Jean-Louis
2010-09-01
Through their different senses, human observers acquire multiple streams of information from their environment. Complex cross-modal interactions occur during this perceptual process. This article proposes a framework to analyze and model these interactions through a rigorous and systematic data-driven process. This requires considering the general relationships between the physical events or factors involved in the process, not only in quantitative terms, but also in terms of the influence of one factor on another. We use tools from information theory and probabilistic reasoning to derive relationships between the random variables of interest, where the central notion is that of conditional independence. Using mutual information analysis to guide the model elicitation process, a probabilistic causal model encoded as a Bayesian network is obtained. We exemplify the method by using data collected in an audio-visual localization task for human subjects, and we show that it yields a well-motivated model with good predictive ability. The model elicitation process offers new prospects for the investigation of the cognitive mechanisms of multisensory perception.
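As a rough illustration of the mutual-information screening step mentioned above, the sketch below computes a plug-in estimate of mutual information between two discrete variables and applies it to a toy audio-visual localization-like dataset. The variable names and the 0.8 coupling strength are hypothetical; this is not the authors' implementation or data.

    import numpy as np

    def mutual_information(x, y):
        # Plug-in estimate of mutual information (in bits) between two discrete sequences.
        x, y = np.asarray(x), np.asarray(y)
        mi = 0.0
        for xv in np.unique(x):
            for yv in np.unique(y):
                pxy = np.mean((x == xv) & (y == yv))
                px, py = np.mean(x == xv), np.mean(y == yv)
                if pxy > 0:
                    mi += pxy * np.log2(pxy / (px * py))
        return mi

    rng = np.random.default_rng(0)
    audio = rng.integers(0, 3, size=1000)                 # hypothetical factor (e.g., sound location)
    follow = rng.random(1000) < 0.8                        # response follows the factor 80% of the time
    response = np.where(follow, audio, rng.integers(0, 3, size=1000))
    print(mutual_information(audio, response))             # clearly above zero: dependency detected
    print(mutual_information(audio, rng.integers(0, 3, size=1000)))  # near zero: no dependency

Variable pairs with negligible estimated mutual information can then be treated as (conditionally) independent when eliciting the Bayesian network structure.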
Kinesthesis can make an invisible hand visible
Dieter, Kevin C.; Hu, Bo; Knill, David C.; Blake, Randolph; Tadin, Duje
2014-01-01
Self-generated body movements have reliable visual consequences. This predictive association between vision and action likely underlies modulatory effects of action on visual processing. However, it is unknown if our own actions can have generative effects on visual perception. We asked whether, in total darkness, self-generated body movements are sufficient to evoke normally concomitant visual perceptions. Using a deceptive experimental design, we discovered that waving one’s own hand in front of one’s covered eyes can cause visual sensations of motion. Conjecturing that these visual sensations arise from multisensory connectivity, we showed that individuals with synesthesia experience substantially stronger kinesthesis-induced visual sensations. Finally, we found that the perceived vividness of kinesthesis-induced visual sensations predicted participants’ ability to smoothly eye-track self-generated hand movements in darkness, indicating that these sensations function like typical retinally-driven visual sensations. Evidently, even in the complete absence of external visual input, our brains predict visual consequences of our actions. PMID:24171930
A simple and efficient method to enhance audiovisual binding tendencies
Wozny, David R.; Shams, Ladan
2017-01-01
Individuals vary in their tendency to bind signals from multiple senses. For the same set of sights and sounds, one individual may frequently integrate multisensory signals and experience a unified percept, whereas another individual may rarely bind them and often experience two distinct sensations. Thus, while this binding/integration tendency is specific to each individual, it is not clear how plastic this tendency is in adulthood, and how sensory experiences may cause it to change. Here, we conducted an exploratory investigation which provides evidence that (1) the brain’s tendency to bind in spatial perception is plastic, (2) that it can change following brief exposure to simple audiovisual stimuli, and (3) that exposure to temporally synchronous, spatially discrepant stimuli provides the most effective method to modify it. These results can inform current theories about how the brain updates its internal model of the surrounding sensory world, as well as future investigations seeking to increase integration tendencies. PMID:28462016
A smart room for hospitalised elderly people: essay of modelling and first steps of an experiment.
Rialle, V; Lauvernay, N; Franco, A; Piquard, J F; Couturier, P
1999-01-01
We present a modelling study and the first experimental steps of a smart room for hospitalised elderly people. The system aims at detecting falls and sicknesses, and implements four main functions: perception of patient and environment through sensors, reasoning from perceived events and patient clinical findings, action by way of alarm triggering and message passing to medical staff, and adaptation to various patient profiles, sensor layouts, house fixtures and architecture. It includes a physical multisensory device located in the patient's room, and a multi-agent system for fall detection and alarm triggering. This system encompasses a perception agent and a reasoning agent. The latter has two complementary capacities implemented by sub-agents: deduction of the type of alarm from incoming events, and knowledge induction from recorded events. The system has been tested with a few patients in a real clinical situation, and this first experiment provides encouraging results, which are described in detail.
NASA Astrophysics Data System (ADS)
Wang, He; Zhang, Wen-Hao; Wong, K. Y. Michael; Wu, Si
Extensive studies suggest that the brain integrates multisensory signals in a Bayesian optimal way. However, it remains largely unknown how the sensory reliability and the prior information shape the neural architecture. In this work, we propose a biologically plausible neural field model, which can perform optimal multisensory integration and encode the whole profile of the posterior. Our model is composed of two modules, each for one modality. Crosstalk between the two modules is carried out through feedforward cross-links and reciprocal connections. We found that the reciprocal couplings are crucial to optimal multisensory integration in that the reciprocal coupling pattern is shaped by the correlation in the joint prior distribution of the sensory stimuli. A perturbative approach is developed to illustrate the relation between the prior information and features in coupling patterns quantitatively. Our results show that a decentralized architecture based on reciprocal connections is able to accommodate complex correlation structures across modalities and utilize this prior information in optimal multisensory integration. This work is supported by the Research Grants Council of Hong Kong (N_HKUST606/12 and 605813) and National Basic Research Program of China (2014CB846101) and the Natural Science Foundation of China (31261160495).
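For reference, the Bayesian-optimal benchmark that such models are typically compared against is the reliability-weighted fusion of two independent Gaussian cues under a flat prior. The sketch below shows only that standard rule; it does not reproduce the neural field model, the reciprocal-coupling architecture, or the correlated-prior case discussed above, and the cue values are illustrative.

    import numpy as np

    def integrate_gaussian_cues(mu1, sigma1, mu2, sigma2):
        # Reliability-weighted fusion of two independent Gaussian cues (flat prior).
        # Each cue is weighted by its reliability (inverse variance).
        w1 = sigma2**2 / (sigma1**2 + sigma2**2)
        w2 = 1.0 - w1
        mu = w1 * mu1 + w2 * mu2
        sigma = np.sqrt((sigma1**2 * sigma2**2) / (sigma1**2 + sigma2**2))
        return mu, sigma

    # Example: visual cue at 10 deg (sd 2 deg), vestibular cue at 4 deg (sd 6 deg).
    # The fused estimate is pulled toward the more reliable (visual) cue and its
    # standard deviation is smaller than either single-cue standard deviation.
    print(integrate_gaussian_cues(10.0, 2.0, 4.0, 6.0))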
Intracranial Cortical Responses during Visual–Tactile Integration in Humans
Quinn, Brian T.; Carlson, Chad; Doyle, Werner; Cash, Sydney S.; Devinsky, Orrin; Spence, Charles; Halgren, Eric
2014-01-01
Sensory integration of touch and sight is crucial to perceiving and navigating the environment. While recent evidence from other sensory modality combinations suggests that low-level sensory areas integrate multisensory information at early processing stages, little is known about how the brain combines visual and tactile information. We investigated the dynamics of multisensory integration between vision and touch using the high spatial and temporal resolution of intracranial electrocorticography in humans. We present a novel, two-step metric for defining multisensory integration. The first step compares the bimodal response with the sum of the unisensory responses to identify multisensory responses. The second step eliminates the possibility that double addition of sensory responses could be misinterpreted as interactions. Using these criteria, averaged local field potentials and high-gamma-band power demonstrate a functional processing cascade whereby sensory integration occurs late, both anatomically and temporally, in the temporo–parieto–occipital junction (TPOJ) and dorsolateral prefrontal cortex. Results further suggest two neurophysiologically distinct and temporally separated integration mechanisms in TPOJ, while providing direct evidence for local suppression as a dominant mechanism for synthesizing visual and tactile input. These results tend to support earlier concepts of multisensory integration as relatively late and centered in tertiary multimodal association cortices. PMID:24381279
Alterations to multisensory and unisensory integration by stimulus competition.
Pluta, Scott R; Rowland, Benjamin A; Stanford, Terrence R; Stein, Barry E
2011-12-01
In environments containing sensory events at competing locations, selecting a target for orienting requires prioritization of stimulus values. Although the superior colliculus (SC) is causally linked to the stimulus selection process, the manner in which SC multisensory integration operates in a competitive stimulus environment is unknown. Here we examined how the activity of visual-auditory SC neurons is affected by placement of a competing target in the opposite hemifield, a stimulus configuration that would, in principle, promote interhemispheric competition for access to downstream motor circuitry. Competitive interactions between the targets were evident in how they altered unisensory and multisensory responses of individual neurons. Responses elicited by a cross-modal stimulus (multisensory responses) proved to be substantially more resistant to competitor-induced depression than were unisensory responses (evoked by the component modality-specific stimuli). Similarly, when a cross-modal stimulus served as the competitor, it exerted considerably more depression than did its individual component stimuli, in some cases producing more depression than predicted by their linear sum. These findings suggest that multisensory integration can help resolve competition among multiple targets by enhancing orientation to the location of cross-modal events while simultaneously suppressing orientation to events at alternate locations.
Nonvisual influences on visual-information processing in the superior colliculus.
Stein, B E; Jiang, W; Wallace, M T; Stanford, T R
2001-01-01
Although visually responsive neurons predominate in the deep layers of the superior colliculus (SC), the majority of them also receive sensory inputs from nonvisual sources (i.e. auditory and/or somatosensory). Most of these 'multisensory' neurons are able to synthesize their cross-modal inputs and, as a consequence, their responses to visual stimuli can be profoundly enhanced or depressed in the presence of a nonvisual cue. Whether response enhancement or response depression is produced by this multisensory interaction is predictable based on several factors. These include: the organization of a neuron's visual and nonvisual receptive fields; the relative spatial relationships of the different stimuli (to their respective receptive fields and to one another); and whether or not the neuron is innervated by a select population of cortical neurons. The response enhancement or depression of SC neurons via multisensory integration has significant survival value via its profound impact on overt attentive/orientation behaviors. Nevertheless, these multisensory processes are not present at birth, and require an extensive period of postnatal maturation. It seems likely that the sensory experiences obtained during this period play an important role in crafting the processes underlying these multisensory interactions.
Nonlinear Bayesian filtering and learning: a neuronal dynamics for perception.
Kutschireiter, Anna; Surace, Simone Carlo; Sprekeler, Henning; Pfister, Jean-Pascal
2017-08-18
The robust estimation of dynamical hidden features, such as the position of prey, based on sensory inputs is one of the hallmarks of perception. This dynamical estimation can be rigorously formulated by nonlinear Bayesian filtering theory. Recent experimental and behavioral studies have shown that animals' performance in many tasks is consistent with such a Bayesian statistical interpretation. However, it is presently unclear how a nonlinear Bayesian filter can be efficiently implemented in a network of neurons that satisfies some minimum constraints of biological plausibility. Here, we propose the Neural Particle Filter (NPF), a sampling-based nonlinear Bayesian filter, which does not rely on importance weights. We show that this filter can be interpreted as the neuronal dynamics of a recurrently connected rate-based neural network receiving feed-forward input from sensory neurons. Further, it captures properties of temporal and multi-sensory integration that are crucial for perception, and it allows for online parameter learning with a maximum likelihood approach. The NPF holds the promise to avoid the 'curse of dimensionality', and we demonstrate numerically its capability to outperform weighted particle filters in higher dimensions and when the number of particles is limited.
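A minimal sketch of the weightless, sampling-based idea described above is given below for a toy one-dimensional state-space model: each particle follows the prior dynamics plus a feedback correction toward the current observation, so no importance weights are used. The constant feedback gain is a heuristic stand-in for the gain that the Neural Particle Filter computes, and the dynamics, noise levels, and gain are all illustrative rather than taken from the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    def f(x):    # toy nonlinear latent dynamics
        return 0.9 * np.tanh(x)

    def g(x):    # observation function
        return x

    n_particles, n_steps = 200, 100
    q, r, gain = 0.3, 0.5, 0.4      # process noise, observation noise, heuristic feedback gain

    x_true = 0.5
    particles = rng.normal(0.0, 1.0, n_particles)

    for _ in range(n_steps):
        x_true = f(x_true) + q * rng.normal()       # hidden state evolves
        y = g(x_true) + r * rng.normal()            # noisy observation arrives
        # Each particle follows the prior dynamics plus a correction toward y,
        # so the ensemble tracks the posterior without importance weights.
        particles = (f(particles)
                     + q * rng.normal(size=n_particles)
                     + gain * (y - g(particles)))

    print(x_true, particles.mean(), particles.std())

The particle mean approximates the filtered state and the particle spread gives a rough uncertainty estimate, which is the kind of posterior summary the filter is meant to provide.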
Index finger somatosensory evoked potentials in blind Braille readers.
Giriyappa, Dayananda; Subrahmanyam, Roopakala Mysore; Rangashetty, Srinivasa; Sharma, Rajeev
2009-01-01
Traditionally, vision has been considered the dominant modality in our multi-sensory perception of the surrounding world. Sensory input via non-visual tracts becomes of greater behavioural relevance in totally blind individuals to enable effective interaction with the world around them. These include audition and tactile perception, leading to an augmentation in these perceptions when compared with normal sighted individuals. The objective of the present work was to study the index finger somatosensory evoked potentials (SEPs) in totally blind and normal sighted individuals. SEPs were recorded in 15 Braille-reading totally blind females and compared with 15 age-matched normal sighted females. Latency and amplitudes of somatosensory evoked potential waveforms (N9, N13, and N20) were measured. Amplitude of the N20 SEP (a cortical somatosensory evoked potential) was significantly larger in the totally blind than in normal sighted individuals (p < 0.05). The amplitudes of the N9 and N13 SEPs and the latencies of all recorded SEPs showed no significant differences. Blindness thus has a profound effect on the cortical processing of input from the Braille-reading right index finger: totally blind Braille readers have a larger N20 amplitude, suggestive of greater somatosensory cortical representation of the Braille-reading index finger.
Performance Evaluation of Passive Haptic Feedback for Tactile HMI Design in CAVEs.
Lassagne, Antoine; Kemeny, Andras; Posselt, Javier; Merienne, Frederic
2018-01-01
This article presents a comparison of different haptic systems, which are designed to simulate flat Human Machine Interfaces (HMIs) like touchscreens in virtual environments (VEs) such as CAVEs, and their respective performance. We compare a tangible passive transparent slate to a classic tablet and a sensory substitution system. These systems were tested during a controlled experiment. The performance and impressions from 20 subjects were collected to understand more about the modalities in the given context. The results show that the preferences of the subjects are strongly related to the use-cases and needs. In terms of performance, passive haptics proved to be significantly useful, acting as a space reference and a real-time continuous calibration system, allowing subjects to have lower execution durations and relative errors. Sensory substitution induced perception drifts during the experiment, causing significant performance disparities, demonstrating the low robustness of perception when spatial cues are insufficiently available. Our findings offer a better understanding on the nature of perception drifts and the need of strong multisensory spatial markers for such use-cases in CAVEs. The importance of a relevant haptic modality specifically designed to match a precise use-case is also emphasized.
Attractiveness Is Multimodal: Beauty Is Also in the Nose and Ear of the Beholder.
Groyecka, Agata; Pisanski, Katarzyna; Sorokowska, Agnieszka; Havlíček, Jan; Karwowski, Maciej; Puts, David; Roberts, S Craig; Sorokowski, Piotr
2017-01-01
Attractiveness plays a central role in human non-verbal communication and has been broadly examined in diverse subfields of contemporary psychology. Researchers have garnered compelling evidence in support of the evolutionary functions of physical attractiveness and its role in our daily lives, while largely ignoring the significant contribution of non-visual modalities and the relationships among them. Acoustic and olfactory cues can, separately or in combination, strongly influence the perceived attractiveness of an individual and therefore attitudes and actions toward that person. Here, we discuss the relative importance of visual, auditory and olfactory traits in judgments of attractiveness, and review neural and behavioral studies that support the highly complex and multimodal nature of person perception. Further, we discuss three alternative evolutionary hypotheses aimed at explaining the function of multiple indices of attractiveness. In this review, we provide several lines of evidence supporting the importance of the voice, body odor, and facial and body appearance in the perception of attractiveness and mate preferences, and therefore the critical need to incorporate cross-modal perception and multisensory integration into future research on human physical attractiveness.
Hoefer, M; Tyll, S; Kanowski, M; Brosch, M; Schoenfeld, M A; Heinze, H-J; Noesselt, T
2013-10-01
Although multisensory integration has been an important area of recent research, most studies focused on audiovisual integration. Importantly, however, the combination of audition and touch can guide our behavior as effectively which we studied here using psychophysics and functional magnetic resonance imaging (fMRI). We tested whether task-irrelevant tactile stimuli would enhance auditory detection, and whether hemispheric asymmetries would modulate these audiotactile benefits using lateralized sounds. Spatially aligned task-irrelevant tactile stimuli could occur either synchronously or asynchronously with the sounds. Auditory detection was enhanced by non-informative synchronous and asynchronous tactile stimuli, if presented on the left side. Elevated fMRI-signals to left-sided synchronous bimodal stimulation were found in primary auditory cortex (A1). Adjacent regions (planum temporale, PT) expressed enhanced BOLD-responses for synchronous and asynchronous left-sided bimodal conditions. Additional connectivity analyses seeded in right-hemispheric A1 and PT for both bimodal conditions showed enhanced connectivity with right-hemispheric thalamic, somatosensory and multisensory areas that scaled with subjects' performance. Our results indicate that functional asymmetries interact with audiotactile interplay which can be observed for left-lateralized stimulation in the right hemisphere. There, audiotactile interplay recruits a functional network of unisensory cortices, and the strength of these functional network connections is directly related to subjects' perceptual sensitivity. Copyright © 2013 Elsevier Inc. All rights reserved.
Méndez-Balbuena, Ignacio; Huidobro, Nayeli; Silva, Mayte; Flores, Amira; Trenado, Carlos; Quintanar, Luis; Arias-Carrión, Oscar; Kristeva, Rumyana; Manjarrez, Elias
2015-10-01
The present investigation documents the electrophysiological occurrence of multisensory stochastic resonance in the human visual pathway elicited by tactile noise. We define multisensory stochastic resonance of brain evoked potentials as the phenomenon in which an intermediate level of input noise of one sensory modality enhances the brain evoked response of another sensory modality. Here we examined this phenomenon in visual evoked potentials (VEPs) modulated by the addition of tactile noise. Specifically, we examined whether a particular level of mechanical Gaussian noise applied to the index finger can improve the amplitude of the VEP. We compared the amplitude of the positive P100 VEP component between zero noise (ZN), optimal noise (ON), and high mechanical noise (HN). The data disclosed an inverted U-like graph for all the subjects, thus demonstrating the occurrence of a multisensory stochastic resonance in the P100 VEP. Copyright © 2015 the American Physiological Society.
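The inverted-U relationship described above can be reproduced with a generic threshold-unit simulation, sketched below: a subthreshold periodic signal plus Gaussian noise is passed through a hard threshold, and the correlation between the thresholded output and the clean signal peaks at an intermediate noise level. This is only a schematic analogue of stochastic resonance, not the visual-tactile VEP paradigm used in the study; all parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(0.0, 200.0, 0.05)
    signal = 0.6 * np.sin(2 * np.pi * 0.1 * t)   # subthreshold periodic input
    threshold = 1.0

    def signal_tracking(noise_sd, n_runs=20):
        # Correlation between the thresholded output and the clean signal,
        # averaged over runs; 0.0 if the threshold is never crossed.
        corrs = []
        for _ in range(n_runs):
            noisy = signal + rng.normal(0.0, noise_sd, signal.size)
            out = (noisy > threshold).astype(float)
            corrs.append(0.0 if out.std() == 0 else np.corrcoef(out, signal)[0, 1])
        return float(np.mean(corrs))

    for sd in (0.05, 0.2, 0.5, 1.0, 2.0, 4.0):
        print(f"noise sd {sd:4.2f} -> tracking {signal_tracking(sd):.3f}")
    # Tracking rises and then falls with noise level: the inverted-U signature.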
Functional neuroimaging studies in addiction: multisensory drug stimuli and neural cue reactivity.
Yalachkov, Yavor; Kaiser, Jochen; Naumer, Marcus J
2012-02-01
Neuroimaging studies on cue reactivity have substantially contributed to the understanding of addiction. In the majority of studies drug cues were presented in the visual modality. However, exposure to conditioned cues in real life occurs often simultaneously in more than one sensory modality. Therefore, multisensory cues should elicit cue reactivity more consistently than unisensory stimuli and increase the ecological validity and the reliability of brain activation measurements. This review includes the data from 44 whole-brain functional neuroimaging studies with a total of 1168 subjects (812 patients and 356 controls). Correlations between neural cue reactivity and clinical covariates such as craving have been reported significantly more often for multisensory than unisensory cues in the motor cortex, insula and posterior cingulate cortex. Thus, multisensory drug cues are particularly effective in revealing brain-behavior relationships in neurocircuits of addiction responsible for motivation, craving awareness and self-related processing. Copyright © 2011 Elsevier Ltd. All rights reserved.
Head Start Instructional Professionals' Inclusion Perceptions and Practices
ERIC Educational Resources Information Center
Muccio, Leah S.; Kidd, Julie K.; White, C. Stephen; Burns, M. Susan
2014-01-01
This study considered the facilitators and barriers of successful inclusion in Head Start classrooms by examining the perspectives and practices of instructional professionals. A cross-sectional survey design was combined with direct observation in inclusive Head Start classrooms. Survey data were collected from 71 Head Start instructional…
Werner, Sebastian; Noppeney, Uta
2010-02-17
Multisensory interactions have been demonstrated in a distributed neural system encompassing primary sensory and higher-order association areas. However, their distinct functional roles in multisensory integration remain unclear. This functional magnetic resonance imaging study dissociated the functional contributions of three cortical levels to multisensory integration in object categorization. Subjects actively categorized or passively perceived noisy auditory and visual signals emanating from everyday actions with objects. The experiment included two 2 x 2 factorial designs that manipulated either (1) the presence/absence or (2) the informativeness of the sensory inputs. These experimental manipulations revealed three patterns of audiovisual interactions. (1) In primary auditory cortices (PACs), a concurrent visual input increased the stimulus salience by amplifying the auditory response regardless of task-context. Effective connectivity analyses demonstrated that this automatic response amplification is mediated via both direct and indirect [via superior temporal sulcus (STS)] connectivity to visual cortices. (2) In STS and intraparietal sulcus (IPS), audiovisual interactions sustained the integration of higher-order object features and predicted subjects' audiovisual benefits in object categorization. (3) In the left ventrolateral prefrontal cortex (vlPFC), explicit semantic categorization resulted in suppressive audiovisual interactions as an index for multisensory facilitation of semantic retrieval and response selection. In conclusion, multisensory integration emerges at multiple processing stages within the cortical hierarchy. The distinct profiles of audiovisual interactions dissociate audiovisual salience effects in PACs, formation of object representations in STS/IPS and audiovisual facilitation of semantic categorization in vlPFC. Furthermore, in STS/IPS, the profiles of audiovisual interactions were behaviorally relevant and predicted subjects' multisensory benefits in performance accuracy.
NASA Astrophysics Data System (ADS)
Bauer, Johannes; Dávila-Chacón, Jorge; Wermter, Stefan
2015-10-01
Humans and other animals have been shown to perform near-optimally in multi-sensory integration tasks. Probabilistic population codes (PPCs) have been proposed as a mechanism by which optimal integration can be accomplished. Previous approaches have focussed on how neural networks might produce PPCs from sensory input or perform calculations using them, like combining multiple PPCs. Less attention has been given to the question of how the necessary organisation of neurons can arise and how the required knowledge about the input statistics can be learned. In this paper, we propose a model of learning multi-sensory integration based on an unsupervised learning algorithm in which an artificial neural network learns the noise characteristics of each of its sources of input. Our algorithm borrows from the self-organising map the ability to learn latent-variable models of the input and extends it to learning to produce a PPC approximating a probability density function over the latent variable behind its (noisy) input. The neurons in our network are only required to perform simple calculations and we make few assumptions about input noise properties and tuning functions. We report on a neurorobotic experiment in which we apply our algorithm to multi-sensory integration in a humanoid robot to demonstrate its effectiveness and compare it to human multi-sensory integration on the behavioural level. We also show in simulations that our algorithm performs near-optimally under certain plausible conditions, and that it reproduces important aspects of natural multi-sensory integration on the neural level.
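A rough sketch of the kind of mechanism described above, assuming a one-dimensional latent variable and Gaussian cue noise (Python; all parameter values and names are illustrative assumptions, not the authors' implementation): a self-organising-map-style layer learns preferred values for a noisy cue, and a Gaussian activity bump over the learned preferences can then be read as an approximate population code over the latent variable.

import numpy as np

rng = np.random.default_rng(0)
n_units, sigma_noise, lr, width = 50, 0.05, 0.1, 3.0
prefs = rng.uniform(0.0, 1.0, n_units)            # initially random preferred values

for step in range(20000):
    latent = rng.uniform(0.0, 1.0)                 # true stimulus value
    cue = latent + rng.normal(0.0, sigma_noise)    # noisy sensory observation
    winner = int(np.argmin(np.abs(prefs - cue)))   # best-matching unit
    # neighbourhood update pulls nearby units' preferences toward the cue
    dist = np.abs(np.arange(n_units) - winner)
    neighbourhood = np.exp(-dist**2 / (2 * width**2))
    prefs += lr * neighbourhood * (cue - prefs)

def population_response(cue, gain=1.0):
    """Activity profile whose shape approximates a density over the latent value."""
    return gain * np.exp(-(prefs - cue)**2 / (2 * sigma_noise**2))

print(np.round(np.sort(prefs)[:5], 3))             # learned preferences tile [0, 1]

After training, the index-based neighbourhood update typically leaves the units' preferences roughly ordered and evenly spread over the input range, which is the self-organising-map behaviour the model builds on.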
ERIC Educational Resources Information Center
Stephane, Massoud; Hill, Thomas; Matthew, Elizabeth; Folstein, Marshal
2004-01-01
We report the case of an immigrant who suffered from death threats and head trauma while a prisoner of war in Kuwait. Two months later, he began to hear conversations that had taken place previously. These perceptions occurred spontaneously or were induced by the patient's effortful concentration. The single photon emission computerized tomography…
ERIC Educational Resources Information Center
Boden, Dana W. R.
This qualitative study examined the perceptions that university library faculty members hold regarding the role of the department head in promoting faculty growth and development. Four faculty members at the University of Nebraska-Lincoln were interviewed. Axial coding of the individuals' perceptions revealed six categories of perceived roles for…
Primary Grade Teachers' Knowledge and Perceptions of Head Lice.
ERIC Educational Resources Information Center
Kirchofer, Gregg M.; Price, James H.; Telljohann, Susan K.
2001-01-01
Surveyed primary school teachers regarding knowledge of head lice, self-efficacy in handling head lice, and preferred information sources. Teachers needed more knowledge about head lice. About half had high efficacy expectations regarding their ability to control the spread of lice. Most reported receiving information from school nurses. Knowledge…
Jacklin, Derek L; Cloke, Jacob M; Potvin, Alphonse; Garrett, Inara; Winters, Boyer D
2016-01-27
Rats, humans, and monkeys demonstrate robust crossmodal object recognition (CMOR), identifying objects across sensory modalities. We have shown that rats' performance of a spontaneous tactile-to-visual CMOR task requires functional integration of perirhinal (PRh) and posterior parietal (PPC) cortices, which seemingly provide visual and tactile object feature processing, respectively. However, research with primates has suggested that PRh is sufficient for multisensory object representation. We tested this hypothesis in rats using a modification of the CMOR task in which multimodal preexposure to the to-be-remembered objects significantly facilitates performance. In the original CMOR task, with no preexposure, reversible lesions of PRh or PPC produced patterns of impairment consistent with modality-specific contributions. Conversely, in the CMOR task with preexposure, PPC lesions had no effect, whereas PRh involvement was robust, proving necessary for phases of the task that did not require PRh activity when rats did not have preexposure; this pattern was supported by results from c-fos imaging. We suggest that multimodal preexposure alters the circuitry responsible for object recognition, in this case obviating the need for PPC contributions and expanding PRh involvement, consistent with the polymodal nature of PRh connections and results from primates indicating a key role for PRh in multisensory object representation. These findings have significant implications for our understanding of multisensory information processing, suggesting that the nature of an individual's past experience with an object strongly determines the brain circuitry involved in representing that object's multisensory features in memory. The ability to integrate information from multiple sensory modalities is crucial to the survival of organisms living in complex environments. Appropriate responses to behaviorally relevant objects are informed by integration of multisensory object features. We used crossmodal object recognition tasks in rats to study the neurobiological basis of multisensory object representation. When rats had no prior exposure to the to-be-remembered objects, the spontaneous ability to recognize objects across sensory modalities relied on functional interaction between multiple cortical regions. However, prior multisensory exploration of the task-relevant objects remapped cortical contributions, negating the involvement of one region and significantly expanding the role of another. This finding emphasizes the dynamic nature of cortical representation of objects in relation to past experience. Copyright © 2016 the authors 0270-6474/16/361273-17$15.00/0.
Controlled interaction: strategies for using virtual reality to study perception.
Durgin, Frank H; Li, Zhi
2010-05-01
Immersive virtual reality systems employing head-mounted displays offer great promise for the investigation of perception and action, but there are well-documented limitations to most virtual reality systems. In the present article, we suggest strategies for studying perception/action interactions that rely both on scale-invariant metrics (such as power function exponents) and on careful consideration of the requirements of the interactions under investigation. New data concerning the effect of pincushion distortion on the perception of surface orientation are presented, as well as data documenting the perception of dynamic distortions associated with head movements with uncorrected optics. A review of several successful uses of virtual reality to study the interaction of perception and action emphasizes scale-free analysis strategies that can achieve theoretical goals while minimizing assumptions about the accuracy of virtual simulations.
Vatakis, Argiro; Maragos, Petros; Rodomagoulakis, Isidoros; Spence, Charles
2012-01-01
We investigated how the physical differences associated with the articulation of speech affect the temporal aspects of audiovisual speech perception. Video clips of consonants and vowels uttered by three different speakers were presented. The video clips were analyzed using an auditory-visual signal saliency model in order to compare signal saliency and behavioral data. Participants made temporal order judgments (TOJs) regarding which speech-stream (auditory or visual) had been presented first. The sensitivity of participants' TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. We expected that in the case of the place of articulation and roundedness, where the visual-speech signal is more salient, temporal perception of speech would be modulated by the visual-speech signal. No such effect was expected for the manner of articulation or height. The results demonstrate that for place and manner of articulation, participants' temporal percept was affected (although not always significantly) by highly-salient speech-signals with the visual-signals requiring smaller visual-leads at the PSS. This was not the case when height was evaluated. These findings suggest that in the case of audiovisual speech perception, a highly salient visual-speech signal may lead to higher probabilities regarding the identity of the auditory-signal that modulate the temporal window of multisensory integration of the speech-stimulus. PMID:23060756
Multisensory integration across the senses in young and old adults
Mahoney, Jeannette R.; Li, Po Ching Clara; Oh-Park, Mooyeon; Verghese, Joe; Holtzer, Roee
2011-01-01
Stimuli are processed concurrently and across multiple sensory inputs. Here we directly compared the effect of multisensory integration (MSI) on reaction time across three paired sensory inputs in eighteen young (M=19.17 yrs) and eighteen old (M=76.44 yrs) individuals. Participants were determined to be non-demented and without any medical or psychiatric conditions that would affect their performance. Participants responded to randomly presented unisensory (auditory, visual, somatosensory) stimuli and three paired sensory inputs consisting of auditory-somatosensory (AS), auditory-visual (AV), and visual-somatosensory (VS) stimuli. Results revealed that reaction times (RTs) to all multisensory pairings were significantly faster than those elicited by the constituent unisensory conditions across age groups, findings that could not be accounted for by simple probability summation. Both young and old participants responded the fastest to multisensory pairings containing somatosensory input. Compared to younger adults, older adults demonstrated a significantly greater RT benefit when processing concurrent VS information. In terms of co-activation, older adults demonstrated a significant increase in the magnitude of visual-somatosensory co-activation (i.e., multisensory integration), while younger adults demonstrated a significant increase in the magnitude of auditory-visual and auditory-somatosensory co-activation. This study provides the first evidence in support of the facilitative effect of pairing somatosensory with visual stimuli in older adults. PMID:22024545
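The phrase "could not be accounted for by simple probability summation" refers to a race-model analysis. A minimal sketch of that test, assuming Miller's race-model inequality applied to empirical reaction-time distributions (Python; the toy data and time grid are invented, not the study's):

import numpy as np

def ecdf(rts, t):
    """Empirical cumulative probability that a reaction time is <= t (ms)."""
    return float(np.mean(np.asarray(rts) <= t))

def race_model_violation(rt_multi, rt_uni1, rt_uni2, times):
    """Positive values: multisensory responses faster than probability summation allows."""
    bound = np.array([min(1.0, ecdf(rt_uni1, t) + ecdf(rt_uni2, t)) for t in times])
    multi = np.array([ecdf(rt_multi, t) for t in times])
    return multi - bound

rng = np.random.default_rng(1)                    # toy per-condition RT samples (ms)
rt_a = rng.normal(320, 40, 200)                   # auditory alone
rt_v = rng.normal(340, 40, 200)                   # visual alone
rt_av = rng.normal(280, 35, 200)                  # audiovisual pairing
times = np.arange(200, 500, 10)
print(race_model_violation(rt_av, rt_a, rt_v, times).max())   # > 0 suggests co-activation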
Magnotti, John F; Beauchamp, Michael S
2017-02-01
Audiovisual speech integration combines information from auditory speech (talker's voice) and visual speech (talker's mouth movements) to improve perceptual accuracy. However, if the auditory and visual speech emanate from different talkers, integration decreases accuracy. Therefore, a key step in audiovisual speech perception is deciding whether auditory and visual speech have the same source, a process known as causal inference. A well-known illusion, the McGurk Effect, consists of incongruent audiovisual syllables, such as auditory "ba" + visual "ga" (AbaVga), that are integrated to produce a fused percept ("da"). This illusion raises two fundamental questions: first, given the incongruence between the auditory and visual syllables in the McGurk stimulus, why are they integrated; and second, why does the McGurk effect not occur for other, very similar syllables (e.g., AgaVba). We describe a simplified model of causal inference in multisensory speech perception (CIMS) that predicts the perception of arbitrary combinations of auditory and visual speech. We applied this model to behavioral data collected from 60 subjects perceiving both McGurk and non-McGurk incongruent speech stimuli. The CIMS model successfully predicted both the audiovisual integration observed for McGurk stimuli and the lack of integration observed for non-McGurk stimuli. An identical model without causal inference failed to accurately predict perception for either form of incongruent speech. The CIMS model uses causal inference to provide a computational framework for studying how the brain performs one of its most important tasks, integrating auditory and visual speech cues to allow us to communicate with others.
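As a sketch of how a causal-inference account of this kind can be computed, the snippet below implements the generic Bayesian causal-inference equations for two Gaussian cues (Python); it follows the standard Körding-style formulation rather than the specific CIMS parameterization, and all parameter values are placeholders.

import numpy as np

def causal_inference(x_a, x_v, sa=1.0, sv=1.0, sp=10.0, mu_p=0.0, p_common=0.5):
    """Return p(common cause | cues) and the model-averaged 'auditory' estimate."""
    va, vv, vp = sa**2, sv**2, sp**2
    # likelihood of the cue pair under one common cause (C = 1)
    denom = va * vv + va * vp + vv * vp
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + (x_a - mu_p)**2 * vv
                             + (x_v - mu_p)**2 * va) / denom) / (2 * np.pi * np.sqrt(denom))
    # likelihood under two independent causes (C = 2)
    like_c2 = (np.exp(-0.5 * (x_a - mu_p)**2 / (va + vp)) / np.sqrt(2 * np.pi * (va + vp))
               * np.exp(-0.5 * (x_v - mu_p)**2 / (vv + vp)) / np.sqrt(2 * np.pi * (vv + vp)))
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # optimal estimates under each causal structure, then model averaging
    fused = (x_a / va + x_v / vv + mu_p / vp) / (1 / va + 1 / vv + 1 / vp)
    auditory_only = (x_a / va + mu_p / vp) / (1 / va + 1 / vp)
    return post_c1, post_c1 * fused + (1 - post_c1) * auditory_only

print(causal_inference(1.0, 1.5))   # similar cues: high p(common), near-fused estimate
print(causal_inference(1.0, 9.0))   # discrepant cues: low p(common), little integration

The same machinery, applied to syllable representations instead of a spatial variable, is what allows a causal-inference model to predict integration for McGurk-style pairs but not for other incongruent pairs.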
Chen, Yi-Chuan; Lewis, Terri L; Shore, David I; Maurer, Daphne
2017-02-20
Temporal simultaneity provides an essential cue for integrating multisensory signals into a unified perception. Early visual deprivation, in both animals and humans, leads to abnormal neural responses to audiovisual signals in subcortical and cortical areas [1-5]. Behavioral deficits in integrating complex audiovisual stimuli in humans are also observed [6, 7]. It remains unclear whether early visual deprivation affects visuotactile perception similarly to audiovisual perception and whether the consequences for either pairing differ after monocular versus binocular deprivation [8-11]. Here, we evaluated the impact of early visual deprivation on the perception of simultaneity for audiovisual and visuotactile stimuli in humans. We tested patients born with dense cataracts in one or both eyes that blocked all patterned visual input until the cataractous lenses were removed and the affected eyes fitted with compensatory contact lenses (mean duration of deprivation = 4.4 months; range = 0.3-28.8 months). Both monocularly and binocularly deprived patients demonstrated lower precision in judging audiovisual simultaneity. However, qualitatively different outcomes were observed for the two patient groups: the performance of monocularly deprived patients matched that of young children at immature stages, whereas that of binocularly deprived patients did not match any stage in typical development. Surprisingly, patients performed normally in judging visuotactile simultaneity after either monocular or binocular deprivation. Therefore, early binocular input is necessary to develop normal neural substrates for simultaneity perception of visual and auditory events but not visual and tactile events. Copyright © 2017 Elsevier Ltd. All rights reserved.
Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review
Hidaka, Souta; Teramoto, Wataru; Sugita, Yoichi
2015-01-01
Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that multisensory information (vision, audition, tactile sensation, and so on) can perceptually interact with each other in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view regarding crossmodal interactions holds that vision is superior to audition in spatial processing, but audition is dominant over vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound could have a driving effect on visual motion perception. Moreover, studies regarding perceptual associative learning reported that, after association is established between a sound sequence without spatial information and visual motion information, the sound sequence could trigger visual motion perception. Other sensory information, such as motor action or smell, has also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns could be observed in several brain areas, including the motion processing areas, between spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information could mutually interact in spatiotemporal processing in the percept of the external world and that common perceptual and neural underlying mechanisms would exist for spatiotemporal processing. PMID:26733827
ERIC Educational Resources Information Center
Guihen, Laura
2017-01-01
Men continue to outnumber women at the secondary head teacher level. This article reports on some of the preliminary findings of a larger study exploring the ways in which women deputy head teachers, as potential aspirants to headship, perceive the secondary head teacher role. Using an Interpretative Phenomenological Analysis methodology,…
Seeing a singer helps comprehension of the song's lyrics.
Jesse, Alexandra; Massaro, Dominic W
2010-06-01
When listening to speech, we often benefit when also seeing the speaker's face. If this advantage is not domain specific for speech, the recognition of sung lyrics should also benefit from seeing the singer's face. By independently varying the sight and sound of the lyrics, we found a substantial comprehension benefit of seeing a singer. This benefit was robust across participants, lyrics, and repetition of the test materials. This benefit was much larger than the benefit for sung lyrics obtained in previous research, which had not provided the visual information normally present in singing. Given that the comprehension of sung lyrics benefits from seeing the singer, just like speech comprehension benefits from seeing the speaker, both speech and music perception appear to be multisensory processes.
Beer, Anton L.; Plank, Tina; Meyer, Georg; Greenlee, Mark W.
2013-01-01
Functional magnetic resonance imaging (MRI) showed that the superior temporal and occipital cortex are involved in multisensory integration. Probabilistic fiber tracking based on diffusion-weighted MRI suggests that multisensory processing is supported by white matter connections between auditory cortex and the temporal and occipital lobe. Here, we present a combined functional MRI and probabilistic fiber tracking study that reveals multisensory processing mechanisms that remained undetected by either technique alone. Ten healthy participants passively observed visually presented lip or body movements, heard speech or body action sounds, or were exposed to a combination of both. Bimodal stimulation engaged a temporal-occipital brain network including the multisensory superior temporal sulcus (msSTS), the lateral superior temporal gyrus (lSTG), and the extrastriate body area (EBA). A region-of-interest (ROI) analysis showed multisensory interactions (e.g., subadditive responses to bimodal compared to unimodal stimuli) in the msSTS, the lSTG, and the EBA region. Moreover, sounds elicited responses in the medial occipital cortex. Probabilistic tracking revealed white matter tracts between the auditory cortex and the medial occipital cortex, the inferior occipital cortex (IOC), and the superior temporal sulcus (STS). However, STS terminations of auditory cortex tracts showed limited overlap with the msSTS region. Instead, msSTS was connected to primary sensory regions via intermediate nodes in the temporal and occipital cortex. Similarly, the lSTG and EBA regions showed limited direct white matter connections but instead were connected via intermediate nodes. Our results suggest that multisensory processing in the STS is mediated by separate brain areas that form a distinct network in the lateral temporal and inferior occipital cortex. PMID:23407860
Neural Mechanisms Underlying Cross-Modal Phonetic Encoding.
Shahin, Antoine J; Backer, Kristina C; Rosenblum, Lawrence D; Kerlin, Jess R
2018-02-14
Audiovisual (AV) integration is essential for speech comprehension, especially in adverse listening situations. Divergent, but not mutually exclusive, theories have been proposed to explain the neural mechanisms underlying AV integration. One theory advocates that this process occurs via interactions between the auditory and visual cortices, as opposed to fusion of AV percepts in a multisensory integrator. Building upon this idea, we proposed that AV integration in spoken language reflects visually induced weighting of phonetic representations at the auditory cortex. EEG was recorded while male and female human subjects watched and listened to videos of a speaker uttering consonant vowel (CV) syllables /ba/ and /fa/, presented in Auditory-only, AV congruent or incongruent contexts. Subjects reported whether they heard /ba/ or /fa/. We hypothesized that vision alters phonetic encoding by dynamically weighting which phonetic representation in the auditory cortex is strengthened or weakened. That is, when subjects are presented with visual /fa/ and acoustic /ba/ and hear /fa/ ( illusion-fa ), the visual input strengthens the weighting of the phone /f/ representation. When subjects are presented with visual /ba/ and acoustic /fa/ and hear /ba/ ( illusion-ba ), the visual input weakens the weighting of the phone /f/ representation. Indeed, we found an enlarged N1 auditory evoked potential when subjects perceived illusion-ba , and a reduced N1 when they perceived illusion-fa , mirroring the N1 behavior for /ba/ and /fa/ in Auditory-only settings. These effects were especially pronounced in individuals with more robust illusory perception. These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex. SIGNIFICANCE STATEMENT The current study presents evidence that audiovisual integration in spoken language occurs when one modality (vision) acts on representations of a second modality (audition). Using the McGurk illusion, we show that visual context primes phonetic representations at the auditory cortex, altering the auditory percept, evidenced by changes in the N1 auditory evoked potential. This finding reinforces the theory that audiovisual integration occurs via visual networks influencing phonetic representations in the auditory cortex. We believe that this will lead to the generation of new hypotheses regarding cross-modal mapping, particularly whether it occurs via direct or indirect routes (e.g., via a multisensory mediator). Copyright © 2018 the authors 0270-6474/18/381835-15$15.00/0.
Adaptive Changes in the Perception of Fast and Slow Movement at Different Head Positions.
Panichi, Roberto; Occhigrossi, Chiara; Ferraresi, Aldo; Faralli, Mario; Lucertini, Marco; Pettorossi, Vito E
2017-05-01
This paper examines the subjective sense of orientation during asymmetric body rotations in normal subjects. Self-motion perception was investigated in 10 healthy individuals during asymmetric whole-body rotation with different head orientations. Both on-vertical axis and off-vertical axis rotations were employed. Subjects tracked a remembered earth-fixed visual target while rotating in the dark for four cycles of asymmetric rotation (two half-sinusoidal cycles of the same amplitude, but of different duration). The rotations induced a bias in the perception of velocity (more pronounced with fast than with slow motion). At the end of rotation, a marked target position error (TPE) was present. For the on-vertical axis rotations, the TPE was no different if the rotations were performed with a 30° nose-down, a 60° nose-up, or a 90° side-down head tilt. With off-vertical axis rotations, the simultaneous activation of the semicircular canals and otolithic receptors produced a significant increase of TPE for all head positions. This difference between on-vertical and off-vertical axis rotation was probably partly due to the vestibular transfer function and partly due to different adaptation to the speed of rotation. Such a phenomenon might be generated in different components of the vestibular system. The adaptive process enhancing the perception of dynamic movement around the vertical axis is not related to the specific semicircular canals that are activated; the addition of an otolithic component results in a significant increase of the TPE. Panichi R, Occhigrossi C, Ferraresi A, Faralli M, Lucertini M, Pettorossi VE. Adaptive changes in the perception of fast and slow movement at different head positions. Aerosp Med Hum Perform. 2017; 88(5):463-468.
The Perception of Auditory Motion
Leung, Johahn
2016-01-01
The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029
Multisensory Strategies for Science Vocabulary
ERIC Educational Resources Information Center
Husty, Sandra; Jackson, Julie
2008-01-01
Seeing, touching, smelling, hearing, and learning! The authors observed that their English Language Learner (ELL) students achieved a deeper understanding of the properties of matter, as well as enhanced vocabulary development, when they were guided through inquiry-based, multisensory explorations that repeatedly exposed them to words and…
Multisensory Emplaced Learning: Resituating Situated Learning in a Moving World
ERIC Educational Resources Information Center
Fors, Vaike; Backstrom, Asa; Pink, Sarah
2013-01-01
This article outlines the implications of a theory of "sensory-emplaced learning" for understanding the interrelationships between the embodied and environmental in learning processes. Understanding learning as multisensory and contingent within everyday place-events, this framework analytically describes how people establish themselves as…
Ohshiro, Tomokazu; Angelaki, Dora E; DeAngelis, Gregory C
2017-07-19
Studies of multisensory integration by single neurons have traditionally emphasized empirical principles that describe nonlinear interactions between inputs from two sensory modalities. We previously proposed that many of these empirical principles could be explained by a divisive normalization mechanism operating in brain regions where multisensory integration occurs. This normalization model makes a critical diagnostic prediction: a non-preferred sensory input from one modality, which activates the neuron on its own, should suppress the response to a preferred input from another modality. We tested this prediction by recording from neurons in macaque area MSTd that integrate visual and vestibular cues regarding self-motion. We show that many MSTd neurons exhibit the diagnostic form of cross-modal suppression, whereas unisensory neurons in area MT do not. The normalization model also fits population responses better than a model based on subtractive inhibition. These findings provide strong support for a divisive normalization mechanism in multisensory integration. Copyright © 2017 Elsevier Inc. All rights reserved.
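A toy sketch of the divisive-normalization prediction described above (Python; the functional form is the standard normalization equation, but the population, drives, and constants are invented for illustration): a weak, non-preferred input adds little to the neuron's own drive yet inflates the normalization pool, so the bimodal response can fall below the unisensory response.

import numpy as np

def normalized_response(own_drive, pool_drive, sigma=1.0, n=2.0):
    """Divisive normalization: R = d^n / (sigma^n + mean_j d_j^n)."""
    return own_drive**n / (sigma**n + np.mean(pool_drive**n))

rng = np.random.default_rng(0)
pool_visual = rng.uniform(0.5, 2.0, 100)       # pooled drive from a visual (preferred) cue
pool_vestibular = rng.uniform(0.5, 2.0, 100)   # pooled drive from a vestibular cue

visual_drive = 2.0       # strong, preferred visual input to the recorded neuron
vestibular_drive = 0.2   # weak, non-preferred vestibular input (excitatory on its own)

r_visual = normalized_response(visual_drive, pool_visual)
r_bimodal = normalized_response(visual_drive + vestibular_drive,
                                pool_visual + pool_vestibular)
print(round(r_visual, 2), round(r_bimodal, 2))  # bimodal response drops below visual-only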
Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond
2015-01-01
Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.
NASA Astrophysics Data System (ADS)
Roberts, Patrice Helen
This research was designed to determine the relationships among students' achievement scores on grade-level science content, on science content that was three years above-grade level, on attitudes toward instructional approaches, and learning-styles perceptual preferences when instructional approaches were multisensory versus traditional. The dependent variables for this investigation were scores on achievement posttests and scores on the attitude survey. The independent variables were the instructional strategy and students' perceptual preferences. The sample consisted of 74 educationally oriented seventh-grade students. The Learning Styles Inventory (LSI) (Dunn, Dunn, & Price, 1990) was administered to determine perceptual preferences. The control group was taught seventh-grade and tenth-grade science units using a traditional approach and the experimental group was instructed on the same units using multisensory instructional resources. The Semantic Differential Scale (SDS) (Pizzo, 1981) was administered to reveal attitudinal differences. The traditional unit included oral reading from the textbook, completing outlines, labeling diagrams, and correcting the outlines and diagrams as a class. The multisensory unit included five instructional stations established in different sections of the classroom to allow students to learn by: (a) manipulating Flip Chutes, (b) using Electroboards, (c) assembling Task Cards, (d) playing a kinesthetic Floor Game, and (e) reading an individual Programmed Learning Sequence. Audio tapes and scripts were provided at each location. Students circulated in groups of four from station to station. The data subjected to statistical analyses supported the use of a multisensory, rather than a traditional approach, for teaching science content that is above-grade level. T-tests revealed a positive and significant impact on achievement scores (p < 0.0007). No significance was detected on grade-level achievement nor on the perceptual-preference effect. Furthermore, the students indicated significantly more positive attitudes when instructed with a multisensory approach on either grade-level or above-grade level science content (p < 0.0001). The findings supported using a multisensory approach when teaching science concepts that are new to and difficult for students (Martini, 1986).
Virtual environment display for a 3D audio room simulation
NASA Astrophysics Data System (ADS)
Chapin, William L.; Foster, Scott
1992-06-01
Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with 4 audio Convolvotrons™ by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak™, tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence into the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object-oriented C++ program code.
Multisensory Associative Guided Instruction Components-Spelling
ERIC Educational Resources Information Center
Hamilton, Harley
2016-01-01
This article describes a multisensory presentation and response system for enhancing the spelling ability of dyslexic children. The unique aspect of MAGICSpell is its system of finger-letter associations and simplified keyboard configuration. Sixteen 10- and 11-year-old dyslexic students practiced the finger-letter associations via various typing…
Chen, Yi-Chuan; Spence, Charles
2017-06-01
The extent to which attention modulates multisensory processing in a top-down fashion is still a subject of debate among researchers. Typically, cognitive psychologists interested in this question have manipulated the participants' attention in terms of single/dual tasking or focal/divided attention between sensory modalities. We suggest an alternative approach, one that builds on the extensive older literature highlighting hemispheric asymmetries in the distribution of spatial attention. Specifically, spatial attention in vision, audition, and touch is typically biased preferentially toward the right hemispace, especially under conditions of high perceptual load. We review the evidence demonstrating such an attentional bias toward the right in extinction patients and healthy adults, along with the evidence of such rightward-biased attention in multisensory experimental settings. We then evaluate those studies that have demonstrated either a more pronounced multisensory effect in right than in left hemispace, or else similar effects in the two hemispaces. The results suggest that the influence of rightward-biased attention is more likely to be observed when the crossmodal signals interact at later stages of information processing and under conditions of higher perceptual load; that is, conditions under which attention is perhaps a compulsory enhancer of information processing. We therefore suggest that the spatial asymmetry in attention may provide a useful signature of top-down attentional modulation in multisensory processing.
A Role for MST Neurons in Heading Estimation
NASA Technical Reports Server (NTRS)
Stone, L. S.; Perrone, J. A.
1994-01-01
A template model of human visual self-motion perception, which uses neurophysiologically realistic "heading detectors", is consistent with numerous human psychophysical results including the failure of humans to estimate their heading (direction of forward translation) accurately under certain visual conditions. We tested the model detectors with stimuli used by others in single-unit studies. The detectors showed emergent properties similar to those of MST neurons: (1) sensitivity to non-preferred flow: each detector is tuned to a specific combination of flow components, and its response is systematically reduced by the addition of non-preferred flow; and (2) position invariance: the detectors maintain their apparent preference for particular flow components over large regions of their receptive fields. It has been argued that this latter property is incompatible with MST playing a role in heading perception. The model, however, demonstrates how neurons with the above response properties could still support accurate heading estimation within extrastriate cortical maps.
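A minimal, hypothetical sketch of the template ("heading detector") idea (Python; constant scene depth, a purely horizontal set of candidate headings, and matched-filter responses are simplifying assumptions, not the Perrone and Stone model itself): each detector stores the optic-flow field expected for one heading, and the heading estimate is the detector whose template best matches the observed flow.

import numpy as np

def translational_flow(heading_x, heading_y, xs, ys, depth=1.0, speed=1.0):
    """Image flow for forward translation: vectors radiate from the focus of expansion."""
    return np.stack([xs - heading_x, ys - heading_y], axis=-1) * speed / depth

xs, ys = np.meshgrid(np.linspace(-1, 1, 15), np.linspace(-1, 1, 15))
candidates = np.linspace(-0.5, 0.5, 21)              # candidate horizontal headings

observed = translational_flow(0.15, 0.0, xs, ys)     # flow produced by the true heading
responses = [np.sum(observed * translational_flow(h, 0.0, xs, ys))
             / (np.linalg.norm(translational_flow(h, 0.0, xs, ys)) + 1e-9)
             for h in candidates]                     # normalized matched-filter responses
print("estimated heading:", candidates[int(np.argmax(responses))])   # ~0.15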
Current diagnostic procedures for diagnosing vertigo and dizziness
Walther, Leif Erik
2017-01-01
Vertigo is a multisensory syndrome that otolaryngologists are confronted with every day. Given the complex functions of the sense of orientation, vertigo is today considered a disorder of the sense of direction, a disturbed spatial perception of the body. Besides the frequent classical syndromes for which vertigo is the leading symptom (e.g., positional vertigo, vestibular neuritis, Menière's disease), vertigo may occur as the main or an accompanying symptom of a multitude of ENT-related diseases involving the inner ear. These include, for example, acute and chronic viral or bacterial infections of the ear with serous or bacterial labyrinthitis, disorders due to injury (e.g., barotrauma, fracture of the oto-base, contusion of the labyrinth), chronic inflammatory bone processes, and inner ear affections in the perioperative course. In recent years, the diagnostics of vertigo have undergone a paradigm shift owing to new diagnostic possibilities. In emergency cases, peripheral and central vertigo disorders (acute vestibular syndrome) can be differentiated with simple algorithms. The introduction of modern vestibular test procedures (video head impulse test, vestibular evoked myogenic potentials) into clinical practice has created new diagnostic options that, for the first time, allow a comprehensive objective assessment of all components of the vestibular organ with relatively little effort. Combined with established methods, a frequency-specific assessment of vestibular reflex function is possible. New classifications allow a clinically better differentiation of vertigo syndromes. Modern radiological procedures, such as intratympanic gadolinium application for Menière's disease with visualization of endolymphatic hydrops, also influence current medical standards. Recent methodological developments have contributed significantly to the ability to clarify vertigo better and more quickly, particularly in otolaryngology. PMID:29279722
Bonemei, Rob; Costantino, Andrea I; Battistel, Ilenia; Rivolta, Davide
2018-05-01
Faces and bodies are more difficult to perceive when presented inverted than when presented upright (i.e., stimulus inversion effect), an effect that has been attributed to the disruption of holistic processing. The features that can trigger holistic processing in faces and bodies, however, still remain elusive. In this study, using a sequential matching task, we tested whether stimulus inversion affects various categories of visual stimuli: faces, faceless heads, faceless heads in body context, headless bodies naked, whole bodies naked, headless bodies clothed, and whole bodies clothed. Both accuracy and inversion efficiency scores showed inversion effects for all categories except clothed bodies (with and without heads). In addition, the magnitude of the inversion effect for faces, naked bodies, and faceless heads was similar. Our findings demonstrate that the perception of faces, faceless heads, and naked bodies relies on holistic processing. Clothed bodies (with and without heads), on the other hand, may engage clothes-sensitive rather than body-sensitive perceptual mechanisms. © 2017 The British Psychological Society.
Multisensory Teaching of Basic Language Skills. Third Edition
ERIC Educational Resources Information Center
Birsh, Judith R., Ed.
2011-01-01
As new research shows how effective systematic and explicit teaching of language-based skills is for students with learning disabilities--along with the added benefits of multisensory techniques--discover the latest on this popular teaching approach with the third edition of this bestselling textbook. Adopted by colleges and universities across…
Program Evaluation of a School District's Multisensory Reading Initiative
ERIC Educational Resources Information Center
Asip, Michael Patrick
2012-01-01
The purpose of this study was to conduct a formative program evaluation of a school district's multisensory reading initiative. The mixed methods study involved semi-structured interviews, online survey, focus groups, document review, and analysis of extant special education student reading achievement data. Participants included elementary…
Investigation of Proprioceptor Stimulation.
ERIC Educational Resources Information Center
Caukins, Sivan E.; And Others
A research proposal to study the effect of multisensory teaching methods in first-grade reading is presented. The focus is on sex differences in learning and in multisensory approaches to teaching. The project will involve 10 experimental and 10 control first-grade classes in several Southern California schools. Both groups will be given IQ,…
One Approach to Teaching the Specific Language Disabled Adult Language Arts.
ERIC Educational Resources Information Center
Peterson, Binnie L.
1981-01-01
One approach never before used in adult language arts instruction--the Slingerland Simultaneous Multisensory Technique--has been found useful for specific language disabled adults in multisensory programs at Anchorage Community College. The Slingerland method builds from single sight, sound, and feel of letters through combinations, encoding,…
Age-Related Differences in Audiovisual Interactions of Semantically Different Stimuli
ERIC Educational Resources Information Center
Viggiano, Maria Pia; Giovannelli, Fabio; Giganti, Fiorenza; Rossi, Arianna; Metitieri, Tiziana; Rebai, Mohamed; Guerrini, Renzo; Cincotta, Massimo
2017-01-01
Converging results have shown that adults benefit from congruent multisensory stimulation in the identification of complex stimuli, whereas the developmental trajectory of the ability to integrate multisensory inputs in children is less well understood. In this study we explored the effects of audiovisual semantic congruency on identification of…
Multisensory Integration Affects Visuo-Spatial Working Memory
ERIC Educational Resources Information Center
Botta, Fabiano; Santangelo, Valerio; Raffone, Antonino; Sanabria, Daniel; Lupianez, Juan; Belardinelli, Marta Olivetti
2011-01-01
In the present study, we investigate how spatial attention, driven by unisensory and multisensory cues, can bias the access of information into visuo-spatial working memory (VSWM). In a series of four experiments, we compared the effectiveness of spatially-nonpredictive visual, auditory, or audiovisual cues in capturing participants' spatial…
Multisensory Instruction in Foreign Language Education.
ERIC Educational Resources Information Center
Robles, Teresita del Rosario Caballero; Uglem, Craig Thomas Chase
This paper reviews some theories that through history have explained the process of learning. It also taps some new findings on how the brain learns. Multisensory instruction is a pedagogic strategy that covers the greatest number of individual preferences in the classroom, language laboratories, and multimedia rooms for a constant and diverse…
Multisensory Public Access Catalogs on CD-ROM.
ERIC Educational Resources Information Center
Harrison, Nancy; Murphy, Brower
1987-01-01
BiblioFile Intelligent Catalog is a CD-ROM-based public access catalog system which incorporates graphics and sound to provide a multisensory interface and artificial intelligence techniques to increase search precision. The system can be updated frequently and inexpensively by linking hard disk drives to CD-ROM optical drives. (MES)
Improving Vocabulary Acquisition with Multisensory Instruction
ERIC Educational Resources Information Center
D'Alesio, Rosemary; Scalia, Maureen T.; Zabel, Renee M.
2007-01-01
The purpose of this action research project was to improve student vocabulary acquisition through a multisensory, direct instructional approach. The study involved three teachers and a target population of 73 students in second and seventh grade classrooms. The intervention was implemented from September through December of 2006 and analyzed in…
Accelerating Early Language Development with Multi-Sensory Training
ERIC Educational Resources Information Center
Bjorn, Piia M.; Kakkuri, Irma; Karvonen, Pirkko; Leppanen, Paavo H. T.
2012-01-01
This paper reports the outcome of a multi-sensory intervention on infant language skills. A programme titled "Rhyming Game and Exercise Club", which included kinaesthetic-tactile mother-child rhyming games performed in natural joint attention situations, was intended to accelerate Finnish six- to eight-month-old infants' language development. The…
Multisensory Interference in Early Deaf Adults
ERIC Educational Resources Information Center
Heimler, Benedetta; Baruffaldi, Francesca; Bonmassar, Claudia; Venturini, Marta; Pavani, Francesco
2017-01-01
Multisensory interactions in deaf cognition are largely unexplored. Unisensory studies suggest that behavioral/neural changes may be more prominent for visual compared to tactile processing in early deaf adults. Here we test whether such an asymmetry results in increased saliency of vision over touch during visuo-tactile interactions. About 23…
The maxillary palp of aedes aegypti, a model of multisensory integration
USDA-ARS?s Scientific Manuscript database
Female yellow-fever mosquitoes, Aedes aegypti, are obligate blood-feeders and vectors of the pathogens that cause dengue fever, yellow fever and Chikungunya. This feeding behavior concludes a series of multisensory events guiding the mosquito to its host from a distance. The antennae and maxillary...
Evidence for Diminished Multisensory Integration in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.
2014-01-01
Individuals with autism spectrum disorders (ASD) exhibit alterations in sensory processing, including changes in the integration of information across the different sensory modalities. In the current study, we used the sound-induced flash illusion to assess multisensory integration in children with ASD and typically-developing (TD) controls.…
Early Visual Deprivation Alters Multisensory Processing in Peripersonal Space
ERIC Educational Resources Information Center
Collignon, Olivier; Charbonneau, Genevieve; Lassonde, Maryse; Lepore, Franco
2009-01-01
Multisensory peripersonal space develops in a maturational process that is thought to be influenced by early sensory experience. We investigated the role of vision in the effective development of audiotactile interactions in peripersonal space. Early blind (EB), late blind (LB) and sighted control (SC) participants were asked to lateralize…
Enhanced visuo-haptic integration for the non-dominant hand.
Yalachkov, Yavor; Kaiser, Jochen; Doehrmann, Oliver; Naumer, Marcus J
2015-07-21
Visuo-haptic integration contributes essentially to object shape recognition. Although there has been a considerable advance in elucidating the neural underpinnings of multisensory perception, it is still unclear whether seeing an object and exploring it with the dominant hand elicits the same brain response as compared to the non-dominant hand. Using fMRI to measure brain activation in right-handed participants, we found that for both left- and right-hand stimulation the left lateral occipital complex (LOC) and anterior cerebellum (aCER) were involved in visuo-haptic integration of familiar objects. These two brain regions were then further investigated in another study, where unfamiliar, novel objects were presented to a different group of right-handers. Here the left LOC and aCER were more strongly activated by bimodal than unimodal stimuli only when the left but not the right hand was used. A direct comparison indicated that the multisensory gain of the fMRI activation was significantly higher for the left than the right hand. These findings are in line with the principle of "inverse effectiveness", implying that processing of bimodally presented stimuli is particularly enhanced when the unimodal stimuli are weak. This applies also when right-handed subjects see and simultaneously touch unfamiliar objects with their non-dominant left hand. Thus, the fMRI signal in the left LOC and aCER induced by visuo-haptic stimulation is dependent on which hand was employed for haptic exploration. Copyright © 2015 Elsevier B.V. All rights reserved.
Audio-visual sensory deprivation degrades visuo-tactile peri-personal space.
Noel, Jean-Paul; Park, Hyeong-Dong; Pasqualini, Isabella; Lissek, Herve; Wallace, Mark; Blanke, Olaf; Serino, Andrea
2018-05-01
Self-perception is scaffolded upon the integration of multisensory cues on the body, the space surrounding the body (i.e., the peri-personal space; PPS), and from within the body. We asked whether reducing information available from external space would change: PPS, interoceptive accuracy, and self-experience. Twenty participants were exposed to 15 min of audio-visual deprivation and performed: (i) a visuo-tactile interaction task measuring their PPS; (ii) a heartbeat perception task measuring interoceptive accuracy; and (iii) a series of questionnaires related to self-perception and mental illness. These tasks were carried out in two conditions: while exposed to a standard sensory environment and under a condition of audio-visual deprivation. Results suggest that while PPS becomes ill defined after audio-visual deprivation, interoceptive accuracy is unaltered at a group-level, with some participants improving and some worsening in interoceptive accuracy. Interestingly, correlational individual differences analyses revealed that changes in PPS after audio-visual deprivation were related to interoceptive accuracy and self-reports of "unusual experiences" on an individual subject basis. Taken together, the findings argue for a relationship between the malleability of PPS, interoceptive accuracy, and an inclination toward aberrant ideation often associated with mental illness. Copyright © 2018. Published by Elsevier Inc.
Altieri, Nicholas; Pisoni, David B.; Townsend, James T.
2012-01-01
Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield’s feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration. PMID:21968081
Stevenson, Ryan A; Toulmin, Jennifer K; Youm, Ariana; Besney, Richard M A; Schulz, Samantha E; Barense, Morgan D; Ferber, Susanne
2017-10-30
Recent empirical evidence suggests that autistic individuals perceive the world differently than their typically-developed peers. One theoretical account, the predictive coding hypothesis, posits that autistic individuals show a decreased reliance on previous perceptual experiences, which may relate to autism symptomatology. We tested this through a well-characterized, audiovisual statistical-learning paradigm in which typically-developed participants were first adapted to consistent temporal relationships between audiovisual stimulus pairs (audio-leading, synchronous, visual-leading) and then performed a simultaneity judgement task with audiovisual stimulus pairs varying in temporal offset from auditory-leading to visual-leading. Following exposure to the visual-leading adaptation phase, participants' perception of synchrony was biased towards visual-leading presentations, reflecting the statistical regularities of their previously experienced environment. Importantly, the strength of adaptation was significantly related to the level of autistic traits that the participant exhibited, measured by the Autism Quotient (AQ). This was specific to the Attention to Detail subscale of the AQ that assesses the perceptual propensity to focus on fine-grain aspects of sensory input at the expense of more integrative perceptions. More severe Attention to Detail was related to weaker adaptation. These results support the predictive coding framework, and suggest that changes in sensory perception commonly reported in autism may contribute to autistic symptomatology.
Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation
Oliva, Aude
2017-01-01
Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630
Exploring how musical rhythm entrains brain activity with electroencephalogram frequency-tagging
Nozaradan, Sylvie
2014-01-01
The ability to perceive a regular beat in music and synchronize to this beat is a widespread human skill. Fundamental to musical behaviour, beat and meter refer to the perception of periodicities while listening to musical rhythms and often involve spontaneous entrainment to move on these periodicities. Here, we present a novel experimental approach inspired by the frequency-tagging approach to understand the perception and production of rhythmic inputs. This approach is illustrated here by recording the human electroencephalogram responses at beat and meter frequencies elicited in various contexts: mental imagery of meter, spontaneous induction of a beat from rhythmic patterns, multisensory integration and sensorimotor synchronization. Collectively, our observations support the view that entrainment and resonance phenomena subtend the processing of musical rhythms in the human brain. More generally, they highlight the potential of this approach to help us understand the link between the phenomenology of musical beat and meter and the bias towards periodicities arising under certain circumstances in the nervous system. Entrainment to music provides a highly valuable framework to explore general entrainment mechanisms as embodied in the human brain. PMID:25385771
Elucidating concepts in drug design through taste with natural and artificial sweeteners.
Lipchock, James M; Lipchock, Sarah V
2016-11-12
Fundamental concepts in biochemistry important for drug design often lack connection to the macroscopic world and can be difficult for students to grasp, particularly those in introductory science courses at the high school and college level. Educational research has shown that multisensory teaching facilitates learning, but teaching at the high school and college level is almost exclusively limited to the visual and auditory senses. This approach neglects the lifetime of experience our students bring to the classroom in the form of taste perception and makes our teaching less supportive of those with sensory impairment. In this article, we outline a novel guided-inquiry activity that utilizes taste perception for a series of natural and artificial sweetener solutions to introduce the concepts of substrate affinity and selectivity in the context of drug design. The findings from this study demonstrate clear gains in student knowledge, as well as an increase in enthusiasm for the fields of biochemistry and drug design. © 2016 The International Union of Biochemistry and Molecular Biology, 44(6):550-554, 2016.
Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans.
Putzar, Lisa; Gondan, Matthias; Röder, Brigitte
2012-01-01
People treated for bilateral congenital cataracts offer a model to study the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating an integrated processing of modality-specific information. This finding is in contrast with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple short stimuli do not depend on visual and/or crossmodal input since birth.
Bortolon, Catherine; Capdevielle, Delphine; Altman, Rosalie; Macgregor, Alexandra; Attal, Jérôme; Raffard, Stéphane
2017-07-01
Self-face recognition is crucial for sense of identity and for maintaining a coherent sense of self. Most of our daily life experiences with the image of our own face happen when we look at ourselves in the mirror. However, to date, mirror self-perception in schizophrenia has received little attention despite evidence that face recognition deficits and self abnormalities have been described in schizophrenia. Thus, this study aims to investigate mirror self-face perception in schizophrenia patients and its correlation with clinical symptoms. Twenty-four schizophrenia patients and twenty-five healthy controls were explicitly requested to describe their image in detail during 2min whilst looking at themselves in a mirror. Then, they were asked to report whether they experienced any self-face recognition difficulties. Results showed that schizophrenia patients reported more feelings of strangeness towards their face compared to healthy controls (U=209.5, p=0.048, r=0.28), but no statistically significant differences were found regarding misidentification (p=0.111) and failures in recognition (p=0.081). Symptoms such as hallucinations, somatic concerns and depression were also associated with self-face perception abnormalities (all p-values>0.05). Feelings of strangeness toward one's own face in schizophrenia might be part of a familiar face perception deficit or a more global self-disturbance, which is characterized by a loss of self-other boundaries and has been associated with abnormal body experiences and first rank symptoms. Regarding this last hypothesis, multisensorial integration might have an impact on the way patients perceive themselves since it has an important role in mirror self-perception. Copyright © 2017. Published by Elsevier B.V.
Distortions of Subjective Time Perception Within and Across Senses
van Wassenhove, Virginie; Buonomano, Dean V.; Shimojo, Shinsuke; Shams, Ladan
2008-01-01
Background The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information and was never perceived shorter than their actual durations. Conclusions/Significance These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration can neither be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions. PMID:18197248
Chang, Yuanmay
2008-06-01
The purpose of this study was to investigate the influence of Chinese culture on nursing leadership behavior in Taiwan nurses. A descriptive study compared staff nurses' assessment of Chinese value in the leadership behavior of their head nurses. Data analysis was made on a convenience sample in Taiwan of 214 head nurses and 2,127 staff nurses who had worked with their head nurse for at least one year. Six medical centers and regional hospitals in northern (Taipei), central (Taichung) and southern (Kaohsiung) Taiwan were recruited for this study. Instruments included the demographic questionnaire, Chinese Value Survey, and Kang's Chinese Leadership Behaviors Module Scale. Results indicated that head nurses scored significantly higher than staff nurses in terms of all cultural values and leadership behaviors. Both staff nurses and head nurses scored the highest mean scores in personal integrity (Yi) and human connectedness (Ren) and the lowest in moral discipline (Li). Staff nurse perceptions of leadership behavior indicated the role of parent to be higher than either the role of director or mentor. Head nurses perceptions of leadership behavior emphasized the role of the director more than either parent or mentor. There were no significant differences between the staff nurses and head nurses in terms of expectative leadership behavior, which gave the role of director higher mean scores than those of either the parent or mentor. Positive and significant associations (r = .266 to r = .334) were found between cultural values and perceptions of leadership behavior. Cultural values predicted 10.6% of leadership behavior variance. The three demographic characteristics of location in northern Taiwan (beta = .09), intention to leave (beta = -.14), and general unit (beta = .10) and the two cultural values of human connectedness (Ren) (beta = .16) and personal integrity (Yi) (beta = .16) together reported a cumulative R2 of 14.6% to explain variance in leadership behavior perceptions. Results of this study identified the important cultural values "Ren" and "Yi". Managers and administrators could add the consideration of such cultural values into nursing leadership to enhance the organization in which Taiwan nurses work.
Multisensory Teaching of Basic Language Skills. Second Edition
ERIC Educational Resources Information Center
Birsh, Judith R., Ed.
2005-01-01
For students with dyslexia and other learning disabilities--and for their peers--creative teaching methods that use two or more senses can dramatically improve language skills and academic outcomes. That is why every current and future educator needs the second edition of this definitive guide to multisensory teaching. A core text for a variety of…
The Multisensory Sound Lab: Sounds You Can See and Feel.
ERIC Educational Resources Information Center
Lederman, Norman; Hendricks, Paula
1994-01-01
A multisensory sound lab has been developed at the Model Secondary School for the Deaf (District of Columbia). A special floor allows vibrations to be felt, and a spectrum analyzer displays frequencies and harmonics visually. The lab is used for science education, auditory training, speech therapy, music and dance instruction, and relaxation…
ERIC Educational Resources Information Center
Walet, Jennifer
2011-01-01
This paper examines the issue of struggling readers and writers, and offers suggestions to help teachers increase struggling students' motivation and metacognition. Suggestions include multisensory methods that make use of the visual, auditory and kinesthetic learning pathways, as well as explicit strategy instruction to improve students' ability…
Multisensory Information Boosts Numerical Matching Abilities in Young Children
ERIC Educational Resources Information Center
Jordan, Kerry E.; Baker, Joseph
2011-01-01
This study presents the first evidence that preschool children perform more accurately in a numerical matching task when given multisensory rather than unisensory information about number. Three- to 5-year-old children learned to play a numerical matching game on a touchscreen computer, which asked them to match a sample numerosity with a…
Technologically and Artistically Enhanced Multi-Sensory Computer-Programming Education
ERIC Educational Resources Information Center
Katai, Zoltan; Toth, Laszlo
2010-01-01
Over the last decades more and more research has analysed relatively new or rediscovered teaching-learning concepts like blended, hybrid, multi-sensory or technologically enhanced learning. This increased interest in these educational forms can be explained by new exciting discoveries in brain research and cognitive psychology, as well as by the…
Please! Teach All of Me: Multisensory Activities for Preschoolers.
ERIC Educational Resources Information Center
Crawford, Jackie; Hanson, Joni; Gums, Marcia; Neys, Paula
Most people, including children, have preferences for how they learn about the world. When these preferences are clearly noticeable, they may be thought of as sensory strengths. For some children, sensory strengths develop because of a weakness in another sensory area. For these children, multisensory instruction can be very helpful. Multisensory…
Age-related differences in audiovisual interactions of semantically different stimuli.
Viggiano, Maria Pia; Giovannelli, Fabio; Giganti, Fiorenza; Rossi, Arianna; Metitieri, Tiziana; Rebai, Mohamed; Guerrini, Renzo; Cincotta, Massimo
2017-01-01
Converging results have shown that adults benefit from congruent multisensory stimulation in the identification of complex stimuli, whereas the developmental trajectory of the ability to integrate multisensory inputs in children is less well understood. In this study we explored the effects of audiovisual semantic congruency on identification of visually presented stimuli belonging to different categories, using a cross-modal approach. Four groups of children ranging in age from 6 to 13 years and adults were administered an object identification task of visually presented pictures belonging to living and nonliving entities. Stimuli were presented in visual, congruent audiovisual, incongruent audiovisual, and noise conditions. Results showed that children under 12 years of age did not benefit from multisensory presentation in speeding up the identification. In children the incoherent audiovisual condition had an interfering effect, especially for the identification of living things. These data suggest that the facilitating effect of the audiovisual interaction into semantic factors undergoes developmental changes and the consolidation of adult-like processing of multisensory stimuli begins in late childhood. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Impairments of Multisensory Integration and Cross-Sensory Learning as Pathways to Dyslexia
Hahn, Noemi; Foxe, John J.; Molholm, Sophie
2014-01-01
Two sensory systems are intrinsic to learning to read. Written words enter the brain through the visual system and associated sounds through the auditory system. The task before the beginning reader is quite basic. She must learn correspondences between orthographic tokens and phonemic utterances, and she must do this to the point that there is seamless automatic ‘connection’ between these sensorially distinct units of language. It is self-evident then that learning to read requires formation of cross-sensory associations to the point that deeply encoded multisensory representations are attained. While the majority of individuals manage this task to a high degree of expertise, some struggle to attain even rudimentary capabilities. Why do dyslexic individuals, who learn well in myriad other domains, fail at this particular task? Here, we examine the literature as it pertains to multisensory processing in dyslexia. We find substantial support for multisensory deficits in dyslexia, and make the case that to fully understand its neurological basis, it will be necessary to thoroughly probe the integrity of auditory-visual integration mechanisms. PMID:25265514
Virtual head rotation reveals a process of route reconstruction from human vestibular signals
Day, Brian L; Fitzpatrick, Richard C
2005-01-01
The vestibular organs can feed perceptual processes that build a picture of our route as we move about in the world. However, raw vestibular signals do not define the path taken because, during travel, the head can undergo accelerations unrelated to the route and also be orientated in any direction to vary the signal. This study investigated the computational process by which the brain transforms raw vestibular signals for the purpose of route reconstruction. We electrically stimulated the vestibular nerves of human subjects to evoke a virtual head rotation fixed in skull co-ordinates and measure its perceptual effect. The virtual head rotation caused subjects to perceive an illusory whole-body rotation that was a cyclic function of head-pitch angle. They perceived whole-body yaw rotation in one direction with the head pitched forwards, the opposite direction with the head pitched backwards, and no rotation with the head in an intermediate position. A model based on vector operations and the anatomy and firing properties of semicircular canals precisely predicted these perceptions. In effect, a neural process computes the vector dot product between the craniocentric vestibular vector of head rotation and the gravitational unit vector. This computation yields the signal of body rotation in the horizontal plane that feeds our perception of the route travelled. PMID:16002439
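The dot-product transformation described above can be sketched in a few lines. The following Python sketch is illustrative only: the axis conventions, sign conventions, and parameter values are assumptions, not the authors' implementation.

```python
import numpy as np

def perceived_yaw_rate(omega_head, pitch_deg):
    """Sketch of the dot-product model described in the abstract.

    omega_head : 3-vector, rotation signal fixed in skull (craniocentric)
                 coordinates [x: naso-occipital, y: inter-aural, z: vertical]
                 (axis convention is an assumption for illustration).
    pitch_deg  : head pitch angle (sign convention assumed).

    Returns the component of the vestibular rotation signal about the
    gravitational vertical, i.e. the signal the paper proposes feeds the
    perception of whole-body rotation in the horizontal plane.
    """
    pitch = np.radians(pitch_deg)
    # Gravitational "up" unit vector expressed in head coordinates after
    # pitching the head about the inter-aural (y) axis.
    g_up_in_head = np.array([-np.sin(pitch), 0.0, np.cos(pitch)])
    return np.dot(omega_head, g_up_in_head)

# A skull-fixed virtual rotation (hypothetical values, deg/s) yields a
# perceived yaw that reverses sign between forward and backward head pitch,
# mirroring the cyclic dependence on head-pitch angle reported above.
omega = np.array([10.0, 0.0, 0.0])
for p in (-60, 0, 60):
    print(p, round(perceived_yaw_rate(omega, p), 2))
```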
Aging-related changes in auditory and visual integration measured with MEG
Stephen, Julia M.; Knoefel, Janice E.; Adair, John; Hart, Blaine; Aine, Cheryl J.
2010-01-01
As noted in the aging literature, processing delays often occur in the central nervous system with increasing age, which is often attributable in part to demyelination. In addition, differential slowing between sensory systems has been shown to be most discrepant between visual (up to 20 ms) and auditory systems (< 5 ms). Therefore, we used MEG to measure the multisensory integration response in auditory association cortex in young and elderly participants to better understand the effects of aging on multisensory integration abilities. Results show a main effect for reaction times (RTs); the mean RTs of the elderly were significantly slower than the young. In addition, in the young we found significant facilitation of RTs to the multisensory stimuli relative to both unisensory stimuli, when comparing the cumulative distribution functions, which was not evident for the elderly. We also identified a significant interaction between age and condition in the superior temporal gyrus. In particular, the elderly had larger amplitude responses (~100 ms) to auditory stimuli relative to the young when auditory stimuli alone were presented, whereas the amplitude of responses to the multisensory stimuli was reduced in the elderly, relative to the young. This suppressed cortical multisensory integration response in the elderly, which corresponded with slower RTs and reduced RT facilitation effects in the elderly, has not been reported previously and may be related to poor cortical integration based on timing changes in unisensory processing in the elderly. PMID:20713130
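The cumulative-distribution-function comparison mentioned above is commonly formalized as a race-model (Miller-type) inequality test. The sketch below illustrates that style of analysis with hypothetical reaction times; it is not the authors' code, and the exact test they applied may differ.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times on a time grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Amount by which the audiovisual CDF exceeds the race-model bound
    min(1, F_A + F_V) at each time point; positive values indicate
    facilitation beyond what independent unisensory channels predict.
    """
    f_av = ecdf(rt_av, t_grid)
    bound = np.minimum(1.0, ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid))
    return f_av - bound

# Hypothetical RT samples (ms), purely for illustration.
rng = np.random.default_rng(0)
rt_a = rng.normal(420, 60, 200)
rt_v = rng.normal(440, 60, 200)
rt_av = rng.normal(370, 50, 200)
t = np.linspace(250, 600, 36)
print(np.max(race_model_violation(rt_av, rt_a, rt_v, t)))
```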
ERIC Educational Resources Information Center
Mantzicopoulos, Panayota
2004-01-01
The author examined age, gender, and ethnic differences in the self-perceptions of 112 low-income children who were assessed with the Pictorial Scale of Perceived Competence and Social Acceptance (PSPCSA) at Head Start and kindergarten. Children's self-ratings of competence were overly optimistic across the 4 subscales of the PSPCSA during the 2…
Basic Perception in Head-worn Augmented Reality Displays
2012-01-01
Livingston, Mark A.; Gabbard, Joseph L.; Swan, J. Edward II; Sibley, Ciara M.; et al.
Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection.
Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica
2015-09-01
Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration.
Effects of Multisensory Therapy on Behaviour of Adult Clients with Developmental Disabilities.
Sally, Chan; David, Thompson R; Chau, P C; Tam, W; Chiu, I Ws
The objective of this review was to present the best available evidence on the effect of multisensory therapy in adult clients with developmental disabilities on the frequency of challenging behaviour, the frequency of stereotypic self-stimulating behaviour, and the frequency of relaxing behaviour. Inclusion criteria: the review summarised all relevant studies of the multisensory therapy intervention, namely trials that included adult clients (aged 18-60) diagnosed with mental retardation according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders IV classification, or with an Intelligence Quotient < 70, who stayed in institutions. Types of interventions: multisensory therapy, multisensory environment, or Snoezelen. Types of outcome measures: the outcome measures of interest were challenging behaviour, stereotypic self-stimulating behaviour, and relaxing behaviour. Types of studies: the review considered any randomized or quasi-randomized controlled trials that investigated the effectiveness of multisensory therapy in adult clients with developmental disabilities; because of the limited number of high-quality RCTs on this subject, papers using other experimental or observational designs were also included. Electronic databases were used to search for primary publications, and the reference lists and bibliographies of retrieved articles were reviewed to identify research not located through other search strategies. Two reviewers assessed all identified abstracts, and full reports were retrieved for all studies that met the inclusion criteria of the review; studies identified from bibliography searches were assessed on the study title. Methodological quality was assessed by two reviewers using a checklist, with disagreements resolved by discussion with a third reviewer. Data were extracted independently by two reviewers using a data extraction tool, and a third reviewer dealt with disagreements. All studies reported percentages of clients in each category and/or change in group mean score for outcomes. Where appropriate, results from comparable groups of studies were pooled in statistical meta-analysis using Review Manager software from the Cochrane Collaboration; odds ratios (for categorical outcome data) or weighted mean differences (for continuous data) and their 95% confidence intervals were calculated for each analysis, and heterogeneity between combined studies was tested using the standard chi-square test. For the purposes of this review, intention-to-treat and/or completer analyses were performed where possible; where statistical pooling was not appropriate or possible, the findings were summarised in narrative form. In total, 130 publications were identified through the various database searches and review of reference lists and bibliographies, but only 15 English-language publications were included in the review. The available evidence showed that multisensory therapy promoted participants' positive emotions, with participants reporting being happier and more relaxed, and indicated that participants displayed more positive and fewer negative emotions after therapy sessions. This systematic review therefore demonstrated a beneficial effect of multisensory therapy in promoting participants' positive emotions. Of the 15 reviewed studies, however, 12 had a single treatment group only, and while the reviewers acknowledge the difficulty of carrying out randomised controlled trials in people with developmental disabilities and challenging behaviour, the lack of trial-derived evidence makes it difficult to draw a strong conclusion about the effectiveness of multisensory therapy.
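Where trials could be pooled, the review computed odds ratios (categorical outcomes) or weighted mean differences (continuous outcomes) with 95% confidence intervals. A minimal sketch of the odds-ratio step, with hypothetical cell counts (the review itself used Review Manager), is:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
        a = events in treatment, b = non-events in treatment,
        c = events in control,   d = non-events in control.
    Uses the standard log-odds-ratio standard error; counts are hypothetical.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

print(odds_ratio_ci(18, 12, 9, 21))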
Development of sensorial experiments and their implementation into undergraduate laboratories
NASA Astrophysics Data System (ADS)
Bromfield Lee, Deborah Christina
"Visualization" of chemical phenomena often has been limited in the teaching laboratories to the sense of sight. We have developed chemistry experiments that rely on senses other than eyesight to investigate chemical concepts, make quantitative determinations, and familiarize students with chemical techniques traditionally designed using only eyesight. Multi-sensory learning can benefit all students by actively engaging them in learning through stimulation or an alternative way of experiencing a concept or ideas. Perception of events or concepts usually depends on the information from the different sensory systems combined. The use of multi-sensory learning can take advantage of all the senses to reinforce learning as each sense builds toward a more complete experience of scientific data. Research has shown that multi-sensory representations of scientific phenomena is a valuable tool for enhancing understanding of chemistry as well as displacing misconceptions through experience. Multi-sensory experiences have also been shown to enrich memory performance. There are few experiments published which utilize multiple senses in the teaching laboratory. The sensorial experiments chosen were conceptually similar to experiments currently performed in undergraduate laboratories; however students collect different types of data using multi-sensory observations. The experiments themselves were developed by using chemicals that would provide different sensory changes or capitalizing on sensory observations that were typically overlooked or ignored and obtain similar and precise results as in traditional experiments. Minimizing hazards and using safe practices are especially essential in these experiments as students utilize senses traditionally not allowed to be used in the laboratories. These sensorial experiments utilize typical equipment found in the teaching laboratories as well as inexpensive chemicals in order to aid implementation. All experiments are rigorously tested for accuracy and all chemicals examined for safety prior to implementation. The pedagogical objectives were established of to provide the ability to develop and stimulate students' conceptual understanding. The educational assessments of these experiments are are fashioned using the framework chosen (Marzano and Kendall). All the experiments are designed as collaborative, inquiry-based experiments in aims of enhancing the students understanding of the subject and promote critical thinking skills. These experiments use an investigative approach rather than verification methods. Terminology and misconceptions of the experiment were evaluated to prevent misunderstanding or confusion during the experiment. Interventions to address these misconceptions and learning problems associated with the experiment were developed. We have developed the Learning Lab Report, LLR, as an alternative model for the traditional laboratory reports, with the goal of transforming the traditional reports into something more useful for both students and instructors. The educational strategies are employed to develop this format in order to promote students to think critically about the concepts and take an active involvement in learning. From the results of the LLR, all experiments were reviewed and re-written to address any learning problems. The sensorial experiments study several topics usually covered in the first 2 years of the chemistry curriculum (general and organic chemistry courses). 
The experiments implemented, organic qualitative analysis, esterification kinetics, Le Chatelier equilibrium, thermometric titrations and ASA kinetics, worked effectively as students were able to draw correct conclusions about the concepts from the data obtained. An olfactory titration using the smell of the rutabaga vegetable has been developed and thoroughly tested. The LLR was utilized with the equilibrium, titration and acetyl salicylic acid experiments. The details of the development, implementation of these sensorial experiments and the LLR and student results are discussed.
ERIC Educational Resources Information Center
Jaka, Fahima Salman
2015-01-01
This study explores the perceptions of school heads and teachers in facilitating young dyslexic children in primary mainstream schools of Pakistan. Through purposive sampling, the researcher selected eight participants: Four primary school heads and four primary teachers from elite schools of Karachi. The research instrument selected for this…
A Multisensory Aquatic Environment for Individuals with Intellectual/Developmental Disabilities
ERIC Educational Resources Information Center
Potter, Cindy; Erzen, Carol
2008-01-01
This article presents the eighth of a 12-part series exploring the benefits of aquatic therapy and recreation for people with special needs. Here, the authors describe the process of development and installation of an aquatic multisensory environment (MSE) and the many factors that one should consider for a successful result. There are many…
ERIC Educational Resources Information Center
Waldvogel, Steven John
2010-01-01
Scope and method of study: The purpose of this research study was to examine the effectiveness of an Orton-Gillingham-based multi-sensory instructional reading program (IMSE) when incorporated with kindergarten through first grade classroom reading instruction in one rural Midwestern school district. The IMSE supplemental reading program is…
ERIC Educational Resources Information Center
Hill, Lindsay; Trusler, Karen; Furniss, Frederick; Lancioni, Giulio
2012-01-01
Background: The aim of the present study was to evaluate the effects of the sensory equipment provided in a multi-sensory environment (MSE) and the level of social contact provided on levels of stereotyped behaviours assessed as being maintained by automatic reinforcement. Method: Stereotyped and engaged behaviours of two young people with severe…
Multi-Sensory Exercises: An Approach to Communicative Practice. 1975-1979.
ERIC Educational Resources Information Center
Kalivoda, Theodore B.
A reprint of a 1975 article on multi-sensory exercises for communicative second language learning is presented. The article begins by noting that the use of drills as a language learning and practice technique had been lost in the trend toward communicative language teaching, but that drills can provide a means of gaining functional control of…
Multisensory integration, sensory substitution and visual rehabilitation.
Proulx, Michael J; Ptito, Maurice; Amedi, Amir
2014-04-01
Sensory substitution has advanced remarkably over the past 35 years since first introduced to the scientific literature by Paul Bach-y-Rita. In this issue dedicated to his memory, we describe a collection of reviews that assess the current state of neuroscience research on sensory substitution, visual rehabilitation, and multisensory processes. Copyright © 2014. Published by Elsevier Ltd.
The Impact of Using Multi-Sensory Approach for Teaching Students with Learning Disabilities
ERIC Educational Resources Information Center
Obaid, Majeda Al Sayyed
2013-01-01
The purpose of this study is to investigate the effect of using the Multi-Sensory Approach for teaching students with learning disabilities on the sixth grade students' achievement in mathematics at Jordanian public schools. To achieve the purpose of the study, a pre/post-test was constructed to measure students' achievement in mathematics. The…
A Unified Model of Heading and Path Perception in Primate MSTd
Layton, Oliver W.; Browning, N. Andrew
2014-01-01
Self-motion, steering, and obstacle avoidance during navigation in the real world require humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature, which humans accurately perceive and is critical to everyday locomotion. In primates, including humans, dorsal medial superior temporal area (MSTd) has been implicated in heading perception. However, the majority of MSTd neurons respond optimally to spiral patterns, rather than to the radial expansion patterns associated with heading. No existing theory of curved path perception explains the neural mechanisms by which humans accurately assess path and no functional role for spiral-tuned cells has yet been proposed. Here we present a computational model that demonstrates how the continuum of observed cells (radial to circular) in MSTd can simultaneously code curvature and heading across the neural population. Curvature is encoded through the spirality of the most active cell, and heading is encoded through the visuotopic location of the center of the most active cell's receptive field. Model curvature and heading errors fit those made by humans. Our model challenges the view that the function of MSTd is heading estimation, based on our analysis we claim that it is primarily concerned with trajectory estimation and the simultaneous representation of both curvature and heading. In our model, temporal dynamics afford time-history in the neural representation of optic flow, which may modulate its structure. This has far-reaching implications for the interpretation of studies that assume that optic flow is, and should be, represented as an instantaneous vector field. Our results suggest that spiral motion patterns that emerge in spatio-temporal optic flow are essential for guiding self-motion along complex trajectories, and that cells in MSTd are specifically tuned to extract complex trajectory estimation from flow. PMID:24586130
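As a rough illustration of the population code proposed above (curvature read out from the spirality of the most active unit, heading from the location of its receptive-field centre), the following Python sketch builds a small bank of spiral templates and decodes a synthetic spiral flow field. The template form, grids, and parameter values are illustrative assumptions, not the published model.

```python
import numpy as np

def flow_field(xy, centre, spiral_deg):
    """Unit flow vectors of a spiral pattern centred at `centre`
    (0 deg = pure expansion, 90 deg = pure rotation)."""
    r = xy - centre
    radial = r / (np.linalg.norm(r, axis=1, keepdims=True) + 1e-9)
    rotational = np.stack([-radial[:, 1], radial[:, 0]], axis=1)
    a = np.radians(spiral_deg)
    return np.cos(a) * radial + np.sin(a) * rotational

rng = np.random.default_rng(1)
xy = rng.uniform(-20, 20, size=(500, 2))        # sampled image locations (deg)

# Stimulus: spiral flow whose centre stands in for heading and whose
# spiral angle stands in for path curvature (hypothetical values).
stim = flow_field(xy, centre=np.array([5.0, 0.0]), spiral_deg=30.0)

# Population of templates over candidate centres and spiral angles.
centres = [np.array([cx, 0.0]) for cx in np.linspace(-15, 15, 31)]
angles = np.linspace(0, 90, 19)
responses = np.array([[np.mean(np.sum(stim * flow_field(xy, c, a), axis=1))
                       for a in angles] for c in centres])

best_c, best_a = np.unravel_index(np.argmax(responses), responses.shape)
print("decoded centre x:", centres[best_c][0], "spiral angle:", angles[best_a])
```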
Audiovisual Temporal Processing and Synchrony Perception in the Rat
Schormans, Ashley L.; Scott, Kaela E.; Vo, Albert M. Q.; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L.
2017-01-01
Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer’s ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be “visual first” for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20–40 ms. Ultimately, given that our behavioral and electrophysiological results were consistent with studies conducted on human participants and previous recordings made in multisensory brain regions of different species, we suggest that the rat represents an effective model for studying audiovisual temporal synchrony at both the neuronal and perceptual level. PMID:28119580
Optimal multisensory decision-making in a reaction-time task.
Drugowitsch, Jan; DeAngelis, Gregory C; Klier, Eliana M; Angelaki, Dora E; Pouget, Alexandre
2014-06-14
Humans and animals can integrate sensory evidence from various sources to make decisions in a statistically near-optimal manner, provided that the stimulus presentation time is fixed across trials. Little is known about whether optimality is preserved when subjects can choose when to make a decision (reaction-time task), or when sensory inputs have time-varying reliability. Using a reaction-time version of a visual/vestibular heading discrimination task, we show that behavior is clearly sub-optimal when quantified with traditional optimality metrics that ignore reaction times. We created a computational model that accumulates evidence optimally across both cues and time, and trades off accuracy with decision speed. This model quantitatively explains subjects' choices and reaction times, supporting the hypothesis that subjects do, in fact, accumulate evidence optimally over time and across sensory modalities, even when the reaction time is under the subject's control.
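A heavily simplified sketch of the kind of model described above (momentary visual and vestibular evidence combined with reliability weights and accumulated to a bound, trading speed for accuracy) might look as follows; the weighting rule, parameter values, and function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def simulate_trial(drift_vis, drift_vest, sigma_vis, sigma_vest,
                   bound=1.0, dt=0.01, non_decision=0.3, rng=None):
    """Bounded accumulation of inverse-variance-weighted visual and
    vestibular evidence; returns (choice, reaction time in seconds).
    All parameter values are illustrative assumptions.
    """
    if rng is None:
        rng = np.random.default_rng()
    w_vis = 1 / sigma_vis**2
    w_vest = 1 / sigma_vest**2
    w_vis, w_vest = w_vis / (w_vis + w_vest), w_vest / (w_vis + w_vest)
    x, t = 0.0, 0.0
    while abs(x) < bound:
        dx_vis = drift_vis * dt + sigma_vis * np.sqrt(dt) * rng.standard_normal()
        dx_vest = drift_vest * dt + sigma_vest * np.sqrt(dt) * rng.standard_normal()
        x += w_vis * dx_vis + w_vest * dx_vest
        t += dt
    return ("right" if x > 0 else "left"), t + non_decision

rng = np.random.default_rng(2)
choices, rts = zip(*(simulate_trial(0.8, 0.8, 1.0, 0.5, rng=rng) for _ in range(200)))
print(sum(c == "right" for c in choices) / 200, np.mean(rts))
```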
Acoustic pressure waves induced in human heads by RF pulses from high-field MRI scanners.
Lin, James C; Wang, Zhangwei
2010-04-01
The current evolution toward greater image resolution from magnetic resonance imaging (MRI) scanners has prompted the exploration of higher strength magnetic fields and use of higher levels of radio frequencies (RFs). Auditory perception of RF pulses by humans has been reported during MRI with head coils. It has been shown that the mechanism of interaction for the auditory effect is an RF pulse-induced thermoelastic pressure wave inside the head. We report a computational study of the intensity and frequency of thermoelastic pressure waves generated by RF pulses in the human head inside high-field MRI and clinical scanners. The U.S. Food and Drug Administration (U.S. FDA) guidelines limit the local specific absorption rate (SAR) in the body, including the head, to 8 W kg(-1). We present results as functions of SAR and show that for a given SAR the peak acoustic pressures generated in the anatomic head model were essentially the same at 64, 300, and 400 MHz (1.5, 7.0, and 9.4 T). Pressures generated in the anatomic head are comparable to the threshold pressure of 20 mPa for sound perception by humans at the cochlea for 4 W kg(-1). Moreover, results indicate that the peak acoustic pressure in the brain is only 2 to 3 times the auditory threshold at the U.S. FDA guideline of 8 W kg(-1). Even at a high SAR of 20 W kg(-1), where the acoustic pressure in the brain could be more than 7 times the auditory threshold, the sound pressure levels would not be more than 17 dB above the threshold of perception at the cochlea.
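For reference, the decibel figure quoted above follows directly from the 20 mPa perception threshold via the usual sound-pressure-level relation SPL = 20 log10(p/p0); a one-line check using values taken from the abstract:

```python
import math

def db_above_threshold(pressure_pa, threshold_pa=20e-3):
    """Sound pressure level relative to the 20 mPa perception threshold
    cited in the abstract (reference value taken from the text)."""
    return 20 * math.log10(pressure_pa / threshold_pa)

# A pressure ~7x the threshold (the high-SAR case mentioned above) lands
# close to the quoted ~17 dB figure.
print(round(db_above_threshold(7 * 20e-3), 1))   # ~16.9 dB
```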
Effect of head pitch and roll orientations on magnetically induced vertigo.
Mian, Omar S; Li, Yan; Antunes, Andre; Glover, Paul M; Day, Brian L
2016-02-15
Lying supine in a strong magnetic field, such as in magnetic resonance imaging scanners, can induce a perception of whole-body rotation. The leading hypothesis to explain this invokes a Lorentz force mechanism acting on vestibular endolymph that acts to stimulate semicircular canals. The hypothesis predicts that the perception of whole-body rotation will depend on head orientation in the field. Results showed that the direction and magnitude of apparent whole-body rotation while stationary in a 7 T magnetic field is influenced by head orientation. The data are compatible with the Lorentz force hypothesis of magnetic vestibular stimulation and furthermore demonstrate the operation of a spatial transformation process from head-referenced vestibular signals to Earth-referenced body motion. High strength static magnetic fields are known to induce vertigo, believed to be via stimulation of the vestibular system. The leading hypothesis (Lorentz forces) predicts that the induced vertigo should depend on the orientation of the magnetic field relative to the head. In this study we examined the effect of static head pitch (-80 to +40 deg; 12 participants) and roll (-40 to +40 deg; 11 participants) on qualitative and quantitative aspects of vertigo experienced in the dark by healthy humans when exposed to the static uniform magnetic field inside a 7 T MRI scanner. Three participants were additionally examined at 180 deg pitch and roll orientations. The effect of roll orientation on horizontal and vertical nystagmus was also measured and was found to affect only the vertical component. Vertigo was most discomforting when head pitch was around 60 deg extension and was mildest when it was around 20 deg flexion. Quantitative analysis of vertigo focused on the induced perception of horizontal-plane rotation reported online with the aid of hand-held switches. Head orientation had effects on both the magnitude and the direction of this perceived rotation. The data suggest sinusoidal relationships between head orientation and perception with spatial periods of 180 deg for pitch and 360 deg for roll, which we explain is consistent with the Lorentz force hypothesis. The effects of head pitch on vertigo and previously reported nystagmus are consistent with both effects being driven by a common vestibular signal. To explain all the observed effects, this common signal requires contributions from multiple semicircular canals. © 2015 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
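The sinusoidal dependence suggested above (spatial period of roughly 180 deg for head pitch and 360 deg for head roll) can be written down as a simple descriptive form. The sketch below is only that, a descriptive placeholder with assumed amplitudes and phases, not a fit to the reported data.

```python
import numpy as np

def perceived_rotation(pitch_deg=0.0, roll_deg=0.0,
                       a_pitch=1.0, a_roll=1.0, phi_pitch=0.0, phi_roll=0.0):
    """Descriptive sinusoidal form with a 180-deg spatial period in pitch
    (hence the factor of 2) and a 360-deg period in roll; amplitudes and
    phases are placeholders."""
    pitch_term = a_pitch * np.sin(np.radians(2 * pitch_deg) + phi_pitch)
    roll_term = a_roll * np.sin(np.radians(roll_deg) + phi_roll)
    return pitch_term + roll_term

# Two pitch angles 180 deg apart give the same value, illustrating the period.
print(perceived_rotation(pitch_deg=60.0), perceived_rotation(pitch_deg=-120.0))
```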
Van der Stoep, N; Spence, C; Nijboer, T C W; Van der Stigchel, S
2015-11-01
Two processes that can give rise to multisensory response enhancement (MRE) are multisensory integration (MSI) and crossmodal exogenous spatial attention. It is, however, currently unclear what the relative contribution of each of these is to MRE. We investigated this issue using two tasks that are generally assumed to measure MSI (a redundant target effect task) and crossmodal exogenous spatial attention (a spatial cueing task). One block of trials consisted of unimodal auditory and visual targets designed to provide a unimodal baseline. In two other blocks of trials, the participants were presented with spatially and temporally aligned and misaligned audiovisual (AV) targets (0, 50, 100, and 200ms SOA). In the integration block, the participants were instructed to respond to the onset of the first target stimulus that they detected (A or V). The instruction for the cueing block was to respond only to the onset of the visual targets. The targets could appear at one of three locations: left, center, and right. The participants were instructed to respond only to lateral targets. The results indicated that MRE was caused by MSI at 0ms SOA. At 50ms SOA, both crossmodal exogenous spatial attention and MSI contributed to the observed MRE, whereas the MRE observed at the 100 and 200ms SOAs was attributable to crossmodal exogenous spatial attention, alerting, and temporal preparation. These results therefore suggest that there may be a temporal window in which both MSI and exogenous crossmodal spatial attention can contribute to multisensory response enhancement. Copyright © 2015 Elsevier B.V. All rights reserved.
Mahoney, Jeannette R.; Holtzer, Roee; Verghese, Joe
2014-01-01
Research detailing multisensory integration (MSI) processes in aging and their association with clinically relevant outcomes is virtually non-existent. To our knowledge, the relationship between MSI and balance has not been well-established in aging. Given known alterations in unisensory processing with increasing age, the aims of the current study were to determine differential behavioral patterns of MSI in aging and investigate whether MSI was significantly associated with balance and fall-risk. Seventy healthy older adults (M = 75 years; 58% female) participated in the current study. Participants were instructed to make speeded responses to visual, somatosensory, and visual-somatosensory (VS) stimuli. Based on reaction times (RTs) to all stimuli, participants were classified into one of two groups (MSI or NO MSI), depending on their MSI RT benefit. Static balance was assessed using mean unipedal stance time. Overall, results revealed that RTs to VS stimuli were significantly shorter than those elicited to constituent unisensory conditions. Further, the current experimental design afforded differential patterns of multisensory processing, with 75% of the elderly sample demonstrating multisensory enhancements. Interestingly, 25% of older adults did not demonstrate multisensory RT facilitation; a finding that was attributed to extremely fast RTs overall and specifically in response to somatosensory inputs. Individuals in the NO MSI group maintained significantly better unipedal stance times and reported less falls, compared to elders in the MSI group. This study reveals the existence of differential patterns of multisensory processing in aging, while describing the clinical translational value of MSI enhancements in predicting balance and falls risk. PMID:25102664
Motion parallax in immersive cylindrical display systems
NASA Astrophysics Data System (ADS)
Filliard, N.; Reymond, G.; Kemeny, A.; Berthoz, A.
2012-03-01
Motion parallax is a crucial visual cue, produced by translations of the observer, for the perception of depth and self-motion. Therefore, tracking the observer's viewpoint has become essential in immersive virtual reality (VR) systems (cylindrical screens, CAVE, head-mounted displays) used, e.g., in the automotive industry (style reviews, architecture design, ergonomics studies) or in scientific studies of visual perception. The perception of a stable and rigid world requires that this visual cue be coherent with other extra-retinal (e.g. vestibular, kinesthetic) cues signaling ego-motion. Although world stability is never questioned in the real world, rendering a head-coupled viewpoint in VR can lead to an illusory perception of unstable environments, unless a non-unity scale factor is applied to recorded head movements. Moreover, cylindrical screens are usually used with static observers because of image distortions when rendering images for viewpoints away from a sweet spot. We developed a technique to compensate for these non-linear visual distortions in real time, in an industrial VR setup based on a cylindrical screen projection system. Additionally, to evaluate the amount of discrepancy between visual and extra-retinal cues that is tolerated without perceptual distortion, a "motion parallax gain" between the velocity of the observer's head and that of the virtual camera was introduced in this system. The influence of this artificial gain on the gait stability of free-standing participants was measured. Results indicate that gains below unity significantly alter postural control. Conversely, the influence of higher gains remains limited, suggesting a certain tolerance of observers to these conditions. Parallax gain amplification is therefore proposed as a possible solution to provide a wider exploration of space to users of immersive virtual reality systems.
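The "motion parallax gain" manipulation described above amounts to scaling the head-coupled camera displacement relative to the tracked head displacement. A minimal sketch, with hypothetical names and a simple linear coupling assumed for illustration:

```python
import numpy as np

def camera_position(head_pos, reference_pos, parallax_gain=1.0):
    """Head-coupled virtual camera with a motion parallax gain: the camera
    moves by `parallax_gain` times the tracked head displacement from a
    reference (sweet-spot) position. The exact coupling used in the cited
    setup is an assumption for illustration.
    """
    head_pos = np.asarray(head_pos, dtype=float)
    reference_pos = np.asarray(reference_pos, dtype=float)
    return reference_pos + parallax_gain * (head_pos - reference_pos)

# Gain 1.0 reproduces natural parallax; gains below 1 attenuate the rendered
# viewpoint motion relative to the real head motion (the condition reported
# to alter postural control), gains above 1 amplify it.
print(camera_position([0.12, 1.65, 0.3], [0.0, 1.65, 0.0], parallax_gain=1.5))
```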
Detection of Iberian ham aroma by a semiconductor multisensorial system.
Otero, Laura; Horrillo, M A Carmen; García, María; Sayago, Isabel; Aleixandre, Manuel; Fernández, M A Jesús; Arés, Luis; Gutiérrez, Javier
2003-11-01
A semiconductor multisensorial system, based on tin oxide, to control the quality of dry-cured Iberian hams is described. Two types of ham (submitted to different drying temperatures) were selected. Good responses were obtained from the 12 elements forming the multisensor for different operating temperatures. Discrimination between the two types of ham was successfully realised through principal component analysis (PCA).
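The discrimination step named above (principal component analysis of the 12-element sensor array) can be sketched as follows; the sketch assumes scikit-learn is available and uses random placeholder data rather than the paper's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA

# Project 12-element tin oxide sensor responses onto their first two
# principal components and check whether the two ham types separate.
rng = np.random.default_rng(3)
ham_type_a = rng.normal(0.0, 1.0, size=(20, 12)) + 2.0   # 20 samples x 12 sensors
ham_type_b = rng.normal(0.0, 1.0, size=(20, 12)) - 2.0
X = np.vstack([ham_type_a, ham_type_b])

scores = PCA(n_components=2).fit_transform(X)
print(scores[:20, 0].mean(), scores[20:, 0].mean())   # well-separated PC1 means
```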
ERIC Educational Resources Information Center
Ghoneim, Nahed Mohammed Mahmoud; Elghotmy, Heba Elsayed Abdelsalam
2015-01-01
The current study investigates the effect of a suggested multisensory phonics program on developing kindergarten pre-service teachers' EFL reading accuracy and phonemic awareness. A total of 40 fourth year kindergarten pre-service teachers, Faculty of Education, participated in the study that involved one group experimental design. Pre-post tests…
ERIC Educational Resources Information Center
Schlesinger, Nora W.; Gray, Shelley
2017-01-01
The purpose of this study was to investigate whether the use of simultaneous multisensory structured language instruction promoted better letter name and sound production, word reading, and word spelling for second grade children with typical development (N = 6) or with dyslexia (N = 5) than structured language instruction alone. The use of…
ERIC Educational Resources Information Center
ten Brug, Annet; van der Putten, Annette; Penne, Anneleen; Maes, Bea; Vlaskamp, Carla
2012-01-01
Background: Multi-sensory storytelling (MSST) books are individualized stories, which involve sensory stimulation in addition to verbal text. Despite the frequent use of MSST in practice, little research is conducted into its structure, content and effectiveness. This study aims at the analysis of the development, content and application in…
ERIC Educational Resources Information Center
Ellingsen, Ryleigh; Clinton, Elias
2017-01-01
This manuscript reviews the empirical literature of the TouchMath© instructional program. The TouchMath© program is a commercial mathematics series that uses a dot notation system to provide multisensory instruction of computation skills. Using the program, students are taught to solve computational tasks in a multisensory manner that does not…
ERIC Educational Resources Information Center
Bao, Vanessa A.; Doobay, Victoria; Mottron, Laurent; Collignon, Olivier; Bertone, Armando
2017-01-01
Previous studies have suggested audiovisual multisensory integration (MSI) may be atypical in Autism Spectrum Disorder (ASD). However, much of the research having found an alteration in MSI in ASD involved socio-communicative stimuli. The goal of the current study was to investigate MSI abilities in ASD using lower-level stimuli that are not…
ERIC Educational Resources Information Center
Lotan, Meir; Gold, Christian
2009-01-01
Background: The Snoezelen[R] is a multisensory intervention approach that has been implemented with various populations. Due to an almost complete absence of rigorous research in this field, the confirmation of this approach as an effective therapeutic intervention is warranted. Method: To evaluate the therapeutic influence of the…
ERIC Educational Resources Information Center
Hogg, James; Cavet, Judith; Lambe, Loretto; Smeddle, Mary
2001-01-01
A research review on the use of Snoezelen (multisensory training) with people with mental retardation demonstrates a wide range of positive outcomes, though there is little evidence of generalization even to the immediate post-Snoezelen environment. The issue of staff attitudes and the place of Snoezelen in facilitating positive interactions is…
ERIC Educational Resources Information Center
Ten Brug, Annet; Munde, Vera S.; van der Putten, Annette A.J.; Vlaskamp, Carla
2015-01-01
Introduction: Multi-sensory storytelling (MSST) is a storytelling method designed for individuals with profound intellectual and multiple disabilities (PIMD). It is essential that listeners be alert during MSST, so that they become familiar with their personalised stories. Repetition and the presentation of stimuli are likely to affect the…
ERIC Educational Resources Information Center
Sparks, Richard L.; Artzer, Marjorie; Patton, Jon; Ganschow, Leonore; Miller, Karen; Hordubay, Dorothy J.; Walsh, Geri
1998-01-01
A study examined the benefits of multisensory structured language (MSL) instruction in Spanish for 39 high school students at risk for foreign-language learning difficulties and 16 controls. On measures of oral and written foreign-language proficiency, the MSL and control groups scored significantly higher than those instructed using traditional…
ERIC Educational Resources Information Center
Zaccagnini, Cindy M.; Antia, Shirin D.
1993-01-01
This study of the effects of intensive multisensory speech training on the speech production of a profoundly hearing-impaired child (age nine) found that the addition of Visual Phonics hand cues did not result in speech production gains. All six target phonemes were generalized to new words and maintained after the intervention was discontinued.…
Magnée, Maurice J C M; de Gelder, Beatrice; van Engeland, Herman; Kemner, Chantal
2011-01-01
Successful integration of simultaneously perceived sensory signals is crucial for social behavior. Recent findings indicate that this multisensory integration (MSI) can be modulated by attention. Theories of Autism Spectrum Disorders (ASDs) suggest that MSI is affected in this population, while it remains unclear to what extent this is related to impairments in attentional capacity. In the present study, event-related potentials (ERPs) following emotionally congruent and incongruent face-voice pairs were measured in 23 high-functioning adult individuals with ASD and 24 age- and IQ-matched controls. MSI was studied while the attention of the participants was manipulated. ERPs were measured at typical auditory and visual processing peaks, namely the P2 and N170. While controls showed MSI during both divided-attention and easy selective-attention tasks, individuals with ASD showed MSI during easy selective-attention tasks only. It was concluded that individuals with ASD are able to process multisensory emotional stimuli, but that this processing is modulated differently by attention mechanisms in these participants, especially those associated with divided attention. This atypical interaction between attention and MSI is also relevant to treatment strategies, with training of multisensory attentional control possibly being more beneficial than conventional sensory integration therapy.
Cappe, Céline; Morel, Anne; Barone, Pascal
2009-01-01
Multisensory and sensorimotor integration are usually considered to occur in the superior colliculus and cerebral cortex, but few studies have proposed that the thalamus is involved in these integrative processes. We investigated whether the organization of the thalamocortical (TC) systems for different modalities partly overlaps, which would provide an anatomical substrate for multisensory and sensorimotor interplay in the thalamus. In 2 macaque monkeys, 6 neuroanatomical tracers were injected into the rostral and caudal auditory cortex, posterior parietal cortex (PE/PEa in area 5), and dorsal and ventral premotor cortical areas (PMd, PMv), demonstrating the existence of overlapping territories of thalamic projections to areas of different modalities (sensory and motor). TC projections, distinct from those arising from specific unimodal sensory nuclei, were observed from the motor thalamus to PE/PEa or auditory cortex and from the sensory thalamus to PMd/PMv. The central lateral nucleus and the mediodorsal nucleus project to all injected areas, but the most significant overlap across modalities was found in the medial pulvinar nucleus. The present results demonstrate the presence of thalamic territories integrating different sensory modalities with motor attributes. Based on the divergent/convergent pattern of TC and corticothalamic projections, 4 distinct mechanisms of multisensory and sensorimotor interplay are proposed. PMID:19150924
Fernández, Roemi; Salinas, Carlota; Montes, Héctor; Sarria, Javier
2014-01-01
The motivation of this research was to explore the feasibility of detecting and locating fruits from different kinds of crops in natural scenarios. To this end, a unique, modular and easily adaptable multisensory system and a set of associated pre-processing algorithms are proposed. The proposed multisensory rig combines a high-resolution colour camera and a multispectral system for the detection of fruits and the discrimination of the different elements of the plants, together with a Time-Of-Flight (TOF) camera that provides fast acquisition of distances, enabling localisation of the targets in the coordinate space. A controlled lighting system completes the set-up, increasing its flexibility for use in different working conditions. The pre-processing algorithms designed for the proposed multisensory system include a pixel-based classification algorithm that labels areas of interest belonging to fruits, and a registration algorithm that combines the results of that classification with the data provided by the TOF camera for 3D reconstruction of the desired regions. Several experimental tests were carried out in outdoor conditions in order to validate the capabilities of the proposed system. PMID:25615730
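The sketch below illustrates the general shape of such a pipeline: a pixel-based classifier (here a simple spectral-index threshold as a stand-in) followed by back-projection of the labelled pixels into 3D using a registered depth image. The index, threshold, intrinsics and function names are assumptions for illustration, not the published algorithms.

```python
import numpy as np

def classify_fruit_pixels(red, nir, threshold=0.2):
    """Label pixels whose normalized difference index exceeds a threshold
    (a stand-in for the pixel-based classifier; the threshold is illustrative)."""
    red = red.astype(float)
    nir = nir.astype(float)
    ndi = (nir - red) / np.clip(nir + red, 1e-6, None)
    return ndi > threshold

def pixels_to_3d(mask, depth_m, fx, fy, cx, cy):
    """Back-project classified pixels into 3D camera coordinates using registered
    TOF depth and pinhole intrinsics (assumes the colour/multispectral and TOF
    images are already aligned, which the real registration step would ensure)."""
    v, u = np.nonzero(mask)
    z = depth_m[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack([x, y, z])

# Toy usage with random images standing in for real sensor data.
rng = np.random.default_rng(1)
red = rng.random((120, 160))
nir = rng.random((120, 160))
depth = 1.0 + rng.random((120, 160))
mask = classify_fruit_pixels(red, nir)
points = pixels_to_3d(mask, depth, fx=200.0, fy=200.0, cx=80.0, cy=60.0)
print(points.shape)
```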
Multi-modal distraction: insights from children's limited attention.
Matusz, Pawel J; Broadbent, Hannah; Ferrari, Jessica; Forrest, Benjamin; Merkley, Rebecca; Scerif, Gaia
2015-03-01
How does the multi-sensory nature of stimuli influence information processing? Cognitive systems with limited selective attention can elucidate these processes. Six-year-olds, 11-year-olds and 20-year-olds engaged in a visual search task that required them to detect a pre-defined coloured shape under conditions of low or high visual perceptual load. On each trial, a peripheral distractor that could be either compatible or incompatible with the current target colour was presented either visually, auditorily or audiovisually. Unlike unimodal distractors, audiovisual distractors elicited reliable compatibility effects across the two levels of load in adults and in the older children, but high visual load significantly reduced distraction for all children, especially the youngest participants. This study provides the first demonstration that multi-sensory distraction has powerful effects on selective attention: Adults and older children alike allocate attention to potentially relevant information across multiple senses. However, poorer attentional resources can, paradoxically, shield the youngest children from the deleterious effects of multi-sensory distraction. Furthermore, we highlight how developmental research can enrich the understanding of distinct mechanisms controlling adult selective attention in multi-sensory environments. Copyright © 2014 Elsevier B.V. All rights reserved.
Audio-Tactile Integration in Congenitally and Late Deaf Cochlear Implant Users
Nava, Elena; Bottari, Davide; Villwock, Agnes; Fengler, Ineke; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte
2014-01-01
Several studies conducted in mammals and humans have shown that multisensory processing may be impaired following congenital sensory loss and in particular if no experience is achieved within specific early developmental time windows known as sensitive periods. In this study we investigated whether basic multisensory abilities are impaired in hearing-restored individuals with deafness acquired at different stages of development. To this aim, we tested congenitally and late deaf cochlear implant (CI) recipients, age-matched with two groups of hearing controls, on an audio-tactile redundancy paradigm, in which reaction times to unimodal and crossmodal redundant signals were measured. Our results showed that both congenitally and late deaf CI recipients were able to integrate audio-tactile stimuli, suggesting that congenital and acquired deafness does not prevent the development and recovery of basic multisensory processing. However, we found that congenitally deaf CI recipients had a lower multisensory gain compared to their matched controls, which may be explained by their faster responses to tactile stimuli. We discuss this finding in the context of reorganisation of the sensory systems following sensory loss and the possibility that these changes cannot be “rewired” through auditory reafferentation. PMID:24918766
NASA Technical Reports Server (NTRS)
Busquets, Anthony M.; Parrish, Russell V.; Williams, Steven P.
1991-01-01
High-fidelity color pictorial displays that incorporate depth cues in the display elements are currently available. Depth cuing applied to advanced head-down flight display concepts potentially enhances the pilot's situational awareness and improves task performance. Depth cues provided by stereopsis exhibit constraints that must be fully understood so that depth-cuing enhancements can be adequately realized and exploited. A fundamental issue (the goal of this investigation) is whether the use of head-down stereoscopic displays in flight applications degrades the real-world depth perception of pilots using such displays. Stereoacuity tests are used in this study as the measure of interest. Eight pilots flew repeated simulated landing approaches using both nonstereo and stereo 3-D head-down pathway-in-the-sky displays. At the decision height of each approach (where the pilot changes to an out-the-window view to obtain real-world visual references), the pilots changed to a stereoacuity test that used real objects. Statistical analysis of the stereoacuity measures (data for a control condition of no exposure to any electronic flight display compared with data for changes from nonstereo and from stereo displays) reveals no significant differences for any of the conditions. Therefore, changing from short-term exposure to a head-down stereo display has no more effect on real-world relative depth perception than does changing from a nonstereo display. However, depth-perception effects based on size and distance judgements and on long-term exposure remain issues to be investigated.
Zelic, Gregory; Mottet, Denis; Lagarde, Julien
2012-01-01
Recent behavioral neuroscience research has revealed that elementary reactive behavior can be improved in the case of cross-modal sensory interactions, thanks to underlying multisensory integration mechanisms. Can this benefit be generalized to an ongoing coordination of movements under severe physical constraints? We chose a juggling task to examine this question. A central, well-known issue in juggling lies in establishing and maintaining a specific temporal coordination among balls, hands, eyes and posture. Here, we tested whether providing additional timing information about the ball and hand motions, using external periodic sound and tactile stimulation (the latter presented at the wrists), improved the behavior of jugglers. One specific combination of auditory and tactile metronome led to a decrease in the spatiotemporal variability of the jugglers' performance: a simple sound associated with left and right tactile cues presented in antiphase to each other, corresponding to the temporal pattern of hand movement in the juggling task. By contrast, no improvements were obtained for other auditory and tactile combinations, and we even found degraded performance when tactile events were presented alone. The nervous system thus appears able to efficiently integrate environmental information brought by different sensory modalities, but only if the information specified matches specific features of the coordination pattern. We discuss the possible implications of these results for understanding the neuronal integration process implied in audio-tactile interaction in the context of complex voluntary movement, considering the well-known gating effect of movement on vibrotactile perception. PMID:22384211
Colonius, Hans; Diederich, Adele
2011-07-01
The concept of a "time window of integration" holds that information from different sensory modalities must not be perceived too far apart in time in order to be integrated into a multisensory perceptual event. Empirical estimates of window width differ widely, however, ranging from 40 to 600 ms depending on context and experimental paradigm. Searching for theoretical derivation of window width, Colonius and Diederich (Front Integr Neurosci 2010) developed a decision-theoretic framework using a decision rule that is based on the prior probability of a common source, the likelihood of temporal disparities between the unimodal signals, and the payoff for making right or wrong decisions. Here, this framework is extended to the focused attention task where subjects are asked to respond to signals from a target modality only. Evoking the framework of the time-window-of-integration (TWIN) model, an explicit expression for optimal window width is obtained. The approach is probed on two published focused attention studies. The first is a saccadic reaction time study assessing the efficiency with which multisensory integration varies as a function of aging. Although the window widths for young and older adults differ by nearly 200 ms, presumably due to their different peripheral processing speeds, neither of them deviates significantly from the optimal values. In the second study, head saccadic reactions times to a perfectly aligned audiovisual stimulus pair had been shown to depend on the prior probability of spatial alignment. Intriguingly, they reflected the magnitude of the time-window widths predicted by our decision-theoretic framework, i.e., a larger time window is associated with a higher prior probability.
The Functional Role of Neural Oscillations in Non-Verbal Emotional Communication
Symons, Ashley E.; El-Deredy, Wael; Schwartze, Michael; Kotz, Sonja A.
2016-01-01
Effective interpersonal communication depends on the ability to perceive and interpret nonverbal emotional expressions from multiple sensory modalities. Current theoretical models propose that visual and auditory emotion perception involves a network of brain regions including the primary sensory cortices, the superior temporal sulcus (STS), and orbitofrontal cortex (OFC). However, relatively little is known about how the dynamic interplay between these regions gives rise to the perception of emotions. In recent years, there has been increasing recognition of the importance of neural oscillations in mediating neural communication within and between functional neural networks. Here we review studies investigating changes in oscillatory activity during the perception of visual, auditory, and audiovisual emotional expressions, and aim to characterize the functional role of neural oscillations in nonverbal emotion perception. Findings from the reviewed literature suggest that theta band oscillations most consistently differentiate between emotional and neutral expressions. While early theta synchronization appears to reflect the initial encoding of emotionally salient sensory information, later fronto-central theta synchronization may reflect the further integration of sensory information with internal representations. Additionally, gamma synchronization reflects facilitated sensory binding of emotional expressions within regions such as the OFC, STS, and, potentially, the amygdala. However, the evidence is more ambiguous when it comes to the role of oscillations within the alpha and beta frequencies, which vary as a function of modality (or modalities), presence or absence of predictive information, and attentional or task demands. Thus, the synchronization of neural oscillations within specific frequency bands mediates the rapid detection, integration, and evaluation of emotional expressions. Moreover, the functional coupling of oscillatory activity across multiple frequency bands supports a predictive coding model of multisensory emotion perception in which emotional facial and body expressions facilitate the processing of emotional vocalizations. PMID:27252638
Gravity as a Strong Prior: Implications for Perception and Action.
Jörges, Björn; López-Moliner, Joan
2017-01-01
In the future, humans are likely to be exposed to environments with altered gravity conditions, be it only visually (Virtual and Augmented Reality) or both visually and bodily (space travel). As visually and bodily perceived gravity, as well as an interiorized representation of earth gravity, are involved in a series of tasks such as catching, grasping, body orientation estimation and spatial inference, humans will need to adapt to these new gravity conditions. Performance under earth-gravity-discrepant conditions has been shown to be relatively poor, and the few studies conducted on gravity adaptation are rather discouraging. Especially in VR on earth, conflicts between bodily and visual gravity cues seem to make a full adaptation to visually perceived earth-discrepant gravities nearly impossible, and even in space, when visual and bodily cues are congruent, adaptation is extremely slow. We invoke a Bayesian framework for gravity-related perceptual processes, in which earth gravity holds the status of a so-called "strong prior". Like other strong priors, the gravity prior has developed through years and years of experience in an earth-gravity environment. For this reason, the reliability of this representation is extremely high and it overrules any sensory information to the contrary. While other factors, such as the multisensory nature of gravity perception, also need to be taken into account, we present the strong-prior account as a unifying explanation for empirical results in gravity perception and in adaptation to earth-discrepant gravities.
Kokinous, Jenny; Tavano, Alessandro; Kotz, Sonja A; Schröger, Erich
2017-02-01
The role of spatial frequencies (SFs) in emotion perception is highly debated, but previous work suggests the importance of low SFs for detecting emotion in faces. Furthermore, emotion perception essentially relies on the rapid integration of multimodal information from faces and voices. We used electroencephalography (EEG) to test the functional relevance of SFs in the integration of emotional and non-emotional audiovisual stimuli. While viewing dynamic face-voice pairs, participants were asked to identify auditory interjections, and the EEG was recorded. Audiovisual integration was measured as auditory facilitation, indexed by the extent of auditory N1 amplitude suppression in the audiovisual compared to the auditory-only condition. We found an interaction of SF filtering and emotion in the auditory response suppression. For neutral faces, larger N1 suppression ensued in the unfiltered and high-SF conditions compared to the low-SF condition. Angry face perception led to larger N1 suppression in the low-SF condition. While the results for the neutral faces indicate that perceptual quality in terms of SF content plays a major role in audiovisual integration, the results for angry faces suggest that early multisensory integration of emotional information favors low-SF neural processing pathways, overruling the predictive value of the visual signal per se. Copyright © 2016 Elsevier B.V. All rights reserved.
Interoceptive signals impact visual processing: Cardiac modulation of visual body perception.
Ronchi, Roberta; Bernasconi, Fosco; Pfeiffer, Christian; Bello-Ruiz, Javier; Kaliuzhna, Mariia; Blanke, Olaf
2017-09-01
Multisensory perception research has largely focused on exteroceptive signals, but recent evidence has revealed the integration of interoceptive signals with exteroceptive information. Such research revealed that heartbeat signals affect sensory (e.g., visual) processing: however, it is unknown how they impact the perception of body images. Here we linked our participants' heartbeat to visual stimuli and investigated the spatio-temporal brain dynamics of cardio-visual stimulation on the processing of human body images. We recorded visual evoked potentials with 64-channel electroencephalography while showing a body or a scrambled-body (control) that appeared at the frequency of the on-line recorded participants' heartbeat or not (not-synchronous, control). Extending earlier studies, we found a body-independent effect, with cardiac signals enhancing visual processing during two time periods (77-130 ms and 145-246 ms). Within the second (later) time-window we detected a second effect characterised by enhanced activity in parietal, temporo-occipital, inferior frontal, and right basal ganglia-insula regions, but only when non-scrambled body images were flashed synchronously with the heartbeat (208-224 ms). In conclusion, our results highlight the role of interoceptive information for the visual processing of human body pictures within a network integrating cardio-visual signals of relevance for perceptual and cognitive aspects of visual body processing. Copyright © 2017 Elsevier Inc. All rights reserved.
A crossmodal role for audition in taste perception.
Yan, Kimberly S; Dando, Robin
2015-06-01
Our sense of taste can be influenced by our other senses, with several groups having explored the effects of olfactory, visual, or tactile stimulation on what we perceive as taste. Research into multisensory, or crossmodal, perception has rarely linked our sense of taste with that of audition. In our study, 48 participants in a crossover experiment sampled multiple concentrations of solutions of 5 prototypic tastants, during conditions with or without broad-spectrum auditory stimulation, simulating that of airline cabin noise. Airline cabins are an unusual environment, in which food is consumed routinely under extreme noise conditions, often over 85 dB, and in which the perceived quality of food is often criticized. Participants rated the intensity of solutions representing varying concentrations of the 5 basic tastes on the general Labeled Magnitude Scale. No difference in intensity ratings was evident between the control and sound condition for salty, sour, or bitter tastes. Likewise, panelists did not perform differently during sound conditions when rating tactile, visual, or auditory stimulation, or in reaction time tests. Interestingly, sweet taste intensity was rated progressively lower, whereas the perception of umami taste was augmented during the experimental sound condition, to a progressively greater degree with increasing concentration. We postulate that this effect arises from mechanostimulation of the chorda tympani nerve, which transits directly across the tympanic membrane of the middle ear. (c) 2015 APA, all rights reserved.
ERIC Educational Resources Information Center
Magpuri-Lavell, Theresa; Paige, David; Williams, Rosemary; Akins, Kristia; Cameron, Molly
2014-01-01
The present study examined the impact of the Simultaneous Multisensory Institute for Language Arts (SMILA) approach on the reading proficiency of 39 students between the ages of 7-11 participating in a summer reading program. The summer reading clinic draws students from the surrounding community which is located in a large urban district in the…
ERIC Educational Resources Information Center
Fava, Leonardo; Strauss, Kristin
2010-01-01
The present study examined whether Snoezelen and Stimulus Preference environments have differential effects on disruptive and pro-social behaviors in adults with profound mental retardation and autism. In N = 27 adults these target behaviors were recorded for a total of 20 sessions using both multi-sensory rooms. Three comparison groups were…
ERIC Educational Resources Information Center
van Staden, Annalene
2013-01-01
The reading skills of many deaf children lag several years behind those of hearing children, and there is a need for identifying reading difficulties and implementing effective reading support strategies in this population. This study embraces a balanced reading approach, and investigates the efficacy of applying multi-sensory coding strategies…
The internal representation of head orientation differs for conscious perception and balance control
Dalton, Brian H.; Rasman, Brandon G.; Inglis, J. Timothy
2017-01-01
Key points: (i) We tested perceived head-on-feet orientation and the direction of vestibular-evoked balance responses in passively and actively held head-turned postures. (ii) The direction of vestibular-evoked balance responses was not aligned with perceived head-on-feet orientation while maintaining prolonged passively held head-turned postures; furthermore, static visual cues of head-on-feet orientation did not update the estimate of head posture for the balance controller. (iii) A prolonged actively held head-turned posture did not elicit a rotation in the direction of the vestibular-evoked balance response despite a significant rotation in perceived angular head posture. (iv) It is proposed that conscious perception of head posture and the transformation of vestibular signals for standing balance relying on this head posture are not dependent on the same internal representation; rather, the balance system may operate under its own sensorimotor principles, which are partly independent from perception. Abstract: Vestibular signals used for balance control must be integrated with other sensorimotor cues to allow transformation of descending signals according to an internal representation of body configuration. We explored two alternative models of sensorimotor integration that propose (1) that a single internal representation of head-on-feet orientation is responsible for perceived postural orientation and standing balance, or (2) that conscious perception and balance control are driven by separate internal representations. During three experiments, participants stood quietly while passively or actively maintaining a prolonged head-turned posture (>10 min). Throughout the trials, participants intermittently reported their perceived head angular position, and subsequently electrical vestibular stimuli were delivered to elicit whole-body balance responses. Visual recalibration of head-on-feet posture was used to determine whether static visual cues are used to update the internal representation of body configuration for perceived orientation and standing balance. All three experiments involved situations in which the vestibular-evoked balance response was not orthogonal to perceived head-on-feet orientation, regardless of the visual information provided. For prolonged head-turned postures, balance responses consistent with actual head-on-feet posture occurred only during the active condition. Our results indicate that conscious perception of head-on-feet posture and vestibular control of balance do not rely on the same internal representation, but instead treat sensorimotor cues in parallel and may arrive at different conclusions regarding head-on-feet posture. The balance system appears to bypass static visual cues of postural orientation and mainly use other sensorimotor signals of head-on-feet position to transform vestibular signals of head motion, a mechanism appropriate for most daily activities. PMID:28035656
Sugar reduction without compromising sensory perception. An impossible dream?
Hutchings, Scott C; Low, Julia Y Q; Keast, Russell S J
2018-03-21
Sugar reduction is a major technical challenge for the food industry to address in response to public health concerns regarding the amount of added sugars in foods. This paper reviews sweet taste perception, sensory methods to evaluate sugar reduction and the merits of different techniques available to reduce sugar content. The use of sugar substitutes (non-nutritive sweeteners, sugar alcohols, and fibres) can achieve the greatest magnitude of sugar and energy reduction, however bitter side tastes and varying temporal sweet profiles are common issues. The use of multisensory integration principles (particularly aroma) can be an effective approach to reduce sugar content, however the magnitude of sugar reduction is small. Innovation in food structure (modifying the sucrose distribution, serum release and fracture mechanics) offers a new way to reduce sugar without significant changes in food composition, however may be difficult to implement in food produced on a large scale. Gradual sugar reduction presents difficulties for food companies from a sales perspective if acceptability is compromised. Ultimately, a holistic approach where food manufacturers integrate a range of these techniques is likely to provide the best progress. However, substantial reduction of sugar in processed foods without compromising sensory properties may be an impossible dream.
Exploring how musical rhythm entrains brain activity with electroencephalogram frequency-tagging.
Nozaradan, Sylvie
2014-12-19
The ability to perceive a regular beat in music and to synchronize with this beat is a widespread human skill. Fundamental to musical behaviour, beat and meter refer to the perception of periodicities while listening to musical rhythms and often involve spontaneous entrainment to move in time with these periodicities. Here, we present a novel experimental method inspired by the frequency-tagging approach to understand the perception and production of rhythmic inputs. This approach is illustrated by recording human electroencephalogram responses at beat and meter frequencies elicited in various contexts: mental imagery of meter, spontaneous induction of a beat from rhythmic patterns, multisensory integration, and sensorimotor synchronization. Collectively, our observations support the view that entrainment and resonance phenomena underlie the processing of musical rhythms in the human brain. More generally, they highlight the potential of this approach to help us understand the link between the phenomenology of musical beat and meter and the bias towards periodicities that arises under certain circumstances in the nervous system. Entrainment to music provides a highly valuable framework for exploring general entrainment mechanisms as embodied in the human brain. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
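As a rough sketch of the frequency-tagging readout such an approach relies on, the code below computes the amplitude spectrum of a single channel and reads out the amplitude at beat- and meter-related frequencies after subtracting a local noise estimate from neighbouring bins. The specific frequencies, epoch length, windowing and noise-bin choices are assumptions for illustration, not the published pipeline.

```python
import numpy as np

def tagged_amplitudes(eeg, fs, target_freqs_hz, noise_bins=(2, 5)):
    """Amplitude at each target frequency, minus the mean amplitude of
    neighbouring bins (a common noise-subtraction heuristic; all details
    here are illustrative)."""
    n = len(eeg)
    amp = np.abs(np.fft.rfft(eeg * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = {}
    for f in target_freqs_hz:
        k = int(np.argmin(np.abs(freqs - f)))
        lo, hi = noise_bins
        neighbours = np.r_[amp[k - hi:k - lo + 1], amp[k + lo:k + hi + 1]]
        out[f] = amp[k] - neighbours.mean()
    return out

# Example: a simulated 64-s epoch containing a 2.4 Hz "beat" and a 1.2 Hz "meter" component.
fs = 256
t = np.arange(0, 64, 1.0 / fs)
eeg = (0.5 * np.sin(2 * np.pi * 2.4 * t)
       + 0.2 * np.sin(2 * np.pi * 1.2 * t)
       + np.random.randn(t.size))
print(tagged_amplitudes(eeg, fs, [2.4, 1.2]))
```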
Sound Richness of Music Might Be Mediated by Color Perception: A PET Study.
Satoh, Masayuki; Nagata, Ken; Tomimoto, Hidekazu
2015-01-01
We investigated the role of the fusiform cortex in music processing with the use of PET, focusing on the perception of sound richness. Musically naïve subjects listened to familiar melodies with three kinds of accompaniment: (i) an accompaniment composed of only three basic chords (chord condition), (ii) a simple accompaniment typically used in traditional music textbooks in elementary school (simple condition), and (iii) an accompaniment with rich and flowery sounds composed by a professional composer (complex condition). Using a PET subtraction technique, we studied changes in regional cerebral blood flow (rCBF) in the simple minus chord, complex minus simple, and complex minus chord contrasts. All three contrasts consistently showed increases in rCBF at the posterior portion of the inferior temporal gyrus, including the lateral occipital complex (LOC) and fusiform gyrus. We may conclude that certain association cortices such as the LOC and the fusiform cortex represent centers of multisensory integration, with foreground and background segregation occurring at the LOC level and the recognition of richness and floweriness of stimuli occurring in the fusiform cortex, both in vision and audition.
Olfactory discrimination: when vision matters?
Demattè, M Luisa; Sanabria, Daniel; Spence, Charles
2009-02-01
Many previous studies have investigated the effect of visual cues on olfactory perception in humans. The majority of this research has looked only at the modulatory effect of color, which has typically been explained in terms of multisensory perceptual interactions. However, such crossmodal effects may equally well relate to interactions taking place at a higher level of information processing. In fact, it is well known that semantic knowledge can have a substantial effect on people's olfactory perception. In the present study, we therefore investigated the influence of visual cues, consisting of color patches and/or shapes, on people's olfactory discrimination performance. Participants had to make speeded odor discrimination responses (lemon vs. strawberry) while viewing a red or yellow color patch, an outline drawing of a strawberry or lemon, or a combination of these color and shape cues. Even though participants were instructed to ignore the visual stimuli, our results demonstrate that the accuracy of their odor discrimination responses was influenced by the visual distractors. This result shows that both color and shape information are taken into account during speeded olfactory discrimination, even when such information is completely task-irrelevant, hinting at the automaticity of such higher-level visual-olfactory crossmodal interactions.
Baier, Bernhard; Thömke, Frank; Wilting, Janine; Heinze, Caroline; Geber, Christian; Dieterich, Marianne
2012-10-24
The perceived subjective visual vertical (SVV) is an important sign of a vestibular otolith tone imbalance in the roll plane. Previous studies suggested that unilateral pontomedullary brainstem lesions cause ipsiversive roll-tilt of the SVV, whereas pontomesencephalic lesions cause contraversive roll-tilts of the SVV. However, previous data were of limited quality and lacked a statistical approach. We therefore tested roll-tilt of the SVV in 79 human patients with acute unilateral brainstem lesions due to stroke by applying modern statistical lesion-behavior mapping analysis. Roll-tilt of the SVV was verified to be a brainstem sign, and for the first time it was confirmed statistically that lesions of the medial longitudinal fasciculus (MLF) and the medial vestibular nucleus are associated with ipsiversive tilt of the SVV, whereas contraversive tilts are associated with lesions affecting the rostral interstitial nucleus of the MLF, the superior cerebellar peduncle, the oculomotor nucleus, and the interstitial nucleus of Cajal. Thus, these structures constitute the anatomical pathway in the brainstem for verticality perception. The present data indicate that graviceptive otolith signals play a predominant role in the multisensory system of verticality perception.
Castagna, Filomena; Montemagni, Cristiana; Maria Milani, Anna; Rocca, Giuseppe; Rocca, Paola; Casacchia, Massimo; Bogetto, Filippo
2013-02-28
This study aimed to evaluate the ability to decode emotion in the auditory and audiovisual modality in a group of patients with schizophrenia, and to explore the role of cognition and psychopathology in affecting these emotion recognition abilities. Ninety-four outpatients in a stable phase and 51 healthy subjects were recruited. Patients were assessed through a psychiatric evaluation and a wide neuropsychological battery. All subjects completed the comprehensive affect testing system (CATS), a group of computerized tests designed to evaluate emotion perception abilities. With respect to the controls, patients were not impaired in the CATS tasks involving discrimination of nonemotional prosody, naming of emotional stimuli expressed by voice and judging the emotional content of a sentence, whereas they showed a specific impairment in decoding emotion in a conflicting auditory condition and in the multichannel modality. Prosody impairment was affected by executive functions, attention and negative symptoms, while deficit in multisensory emotion recognition was affected by executive functions and negative symptoms. These emotion recognition deficits, rather than being associated purely with emotion perception disturbances in schizophrenia, are affected by core symptoms of the illness. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
2013-01-01
There is considerable interest in the structural and functional properties of the angular gyrus (AG). Located in the posterior part of the inferior parietal lobule, the AG has been shown in numerous meta-analysis reviews to be consistently activated in a variety of tasks. This review discusses the involvement of the AG in semantic processing, word reading and comprehension, number processing, default mode network, memory retrieval, attention and spatial cognition, reasoning, and social cognition. This large functional neuroimaging literature depicts a major role for the AG in processing concepts rather than percepts when interfacing perception-to-recognition-to-action. More specifically, the AG emerges as a cross-modal hub where converging multisensory information is combined and integrated to comprehend and give sense to events, manipulate mental representations, solve familiar problems, and reorient attention to relevant information. In addition, this review discusses recent findings that point to the existence of multiple subdivisions in the AG. This spatial parcellation can serve as a framework for reporting AG activations with greater definition. This review also acknowledges that the role of the AG cannot be comprehensively characterized in isolation but needs to be understood in parallel with the influence from other regions. Several interesting questions that warrant further investigations are finally emphasized. PMID:22547530
NASA Technical Reports Server (NTRS)
Clement, G.; Moore, S. T.; Raphan, T.; Cohen, B.
2001-01-01
During the 1998 Neurolab mission (STS-90), four astronauts were exposed to interaural and head-vertical (dorsoventral) linear accelerations of 0.5 g and 1 g during constant-velocity rotation on a centrifuge, both on Earth and during orbital space flight. Subjects were oriented either left-ear-out or right-ear-out (Gy centrifugation), or lay supine along the centrifuge arm with their head off-axis (Gz centrifugation). Pre-flight centrifugation, producing linear accelerations of 0.5 g and 1 g along the Gy (interaural) axis, induced illusions of roll-tilt of 20 degrees and 34 degrees for gravito-inertial acceleration (GIA) vector tilts of 27 degrees and 45 degrees, respectively. Pre-flight 0.5 g and 1 g Gz (head dorsoventral) centrifugation generated perceptions of backward pitch of 5 degrees and 15 degrees, respectively. In the absence of gravity during space flight, the same centrifugation generated a GIA that was equivalent to the centripetal acceleration and aligned with the Gy or Gz axes. Perception of tilt was underestimated relative to this new GIA orientation during early in-flight Gy centrifugation, but was close to the GIA after 16 days in orbit, when subjects reported that they felt as if they were 'lying on side'. During the course of the mission, in-flight roll-tilt perception during Gy centrifugation increased from 45 degrees to 83 degrees at 1 g and from 42 degrees to 48 degrees at 0.5 g. Subjects felt 'upside-down' during in-flight Gz centrifugation from the first in-flight test session, which reflected the new GIA orientation along the head dorsoventral axis. The different levels of in-flight tilt perception during 0.5 g and 1 g Gy centrifugation suggest that other non-vestibular inputs, including an internal estimate of the body vertical and somatic sensation, were utilized in generating tilt perception. Interpretation of the data by a weighted sum of body vertical and somatic vectors, with an estimate of the GIA from the otoliths, suggests that perception weights the sense of the body vertical more heavily early in-flight, that this weighting falls during adaptation to microgravity, and that the decreased reliance on the body vertical persists early post-flight, generating an exaggerated sense of tilt. Since graviceptors respond to linear acceleration and not to head tilt in orbit, it has been proposed that adaptation to weightlessness entails reinterpretation of otolith activity, causing tilt to be perceived as translation. Since linear acceleration during in-flight centrifugation was always perceived as tilt, not translation, the findings do not support this hypothesis.
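One way to read the weighted-sum interpretation mentioned above is as a normalized combination of an internal body-vertical vector, a somatosensory vector and the otolith-derived GIA estimate, with perceived tilt given by the angle between this combined vector and the head axis. The notation below is an illustrative sketch chosen here for exposition, not the authors' model equations.

```latex
% Illustrative sketch of a weighted vector-sum estimate of the perceived vertical;
% symbols and weights are assumptions for exposition.
\hat{\mathbf{g}} \;=\;
  \frac{w_{b}\,\mathbf{v}_{\mathrm{body}} + w_{s}\,\mathbf{v}_{\mathrm{somatic}} + w_{o}\,\hat{\mathbf{a}}_{\mathrm{GIA}}}
       {\bigl\lVert w_{b}\,\mathbf{v}_{\mathrm{body}} + w_{s}\,\mathbf{v}_{\mathrm{somatic}} + w_{o}\,\hat{\mathbf{a}}_{\mathrm{GIA}} \bigr\rVert},
\qquad
\theta_{\mathrm{tilt}} \;=\; \arccos\!\bigl(\hat{\mathbf{g}} \cdot \hat{\mathbf{z}}_{\mathrm{head}}\bigr).
```

On this reading, the heavier weighting of the body vertical early in flight corresponds to a larger w_b that declines as adaptation to microgravity proceeds.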
Contribution of self-motion perception to acoustic target localization.
Pettorossi, V E; Brosch, M; Panichi, R; Botti, F; Grassi, S; Troiani, D
2005-05-01
The findings of this study suggest that acoustic spatial perception during head movement is achieved by the vestibular system, which is responsible for the correct dynamics of acoustic target pursuit. The ability to localize sounds in space during whole-body rotation relies on the auditory localization system, which recognizes the position of the sound in a head-related frame, and on the sensory systems that perceive head and body movement, namely the vestibular system. The aim of this study was to analyse the contribution of head-motion cues to the spatial representation of acoustic targets in humans. Healthy subjects standing on a rotating platform in the dark were asked to pursue with a laser pointer an acoustic target that was rotated horizontally while the body was kept stationary, or maintained stationary while the whole body was rotated. The contribution of head motion to the spatial acoustic representation could be inferred by comparing the gains and phases of the pursuit in the two experimental conditions as the frequency was varied. During acoustic target rotation there was a reduction in gain and an increase in phase lag, while during whole-body rotation the gain tended to increase and the phase remained constant. The different contributions of the vestibular and acoustic systems were confirmed by analysing acoustic pursuit during asymmetric body rotation. In this particular condition, in which self-motion perception gradually diminished, an increasing delay in target pursuit was observed.
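A minimal sketch of how a pursuit gain and phase could be extracted from pointer and target traces during sinusoidal rotation is given below: each trace is projected onto a complex exponential at the stimulation frequency and the ratio of the two projections gives gain and phase. The fitting procedure, variable names and example numbers are assumptions, not the authors' analysis.

```python
import numpy as np

def gain_and_phase(target_deg, pointer_deg, fs, freq_hz):
    """Estimate pursuit gain and phase (degrees) at the stimulation frequency by
    projecting each trace onto a complex exponential; a phase lag appears as a
    negative angle. Illustrative sketch only."""
    t = np.arange(len(target_deg)) / fs
    basis = np.exp(-2j * np.pi * freq_hz * t)
    resp_target = np.mean(np.asarray(target_deg, dtype=float) * basis)
    resp_pointer = np.mean(np.asarray(pointer_deg, dtype=float) * basis)
    ratio = resp_pointer / resp_target
    return np.abs(ratio), np.degrees(np.angle(ratio))

# Example: the pointer lags a 0.1 Hz target by 20 degrees with a gain of 0.8.
fs, f = 100, 0.1
t = np.arange(0, 60, 1 / fs)
target = 30 * np.sin(2 * np.pi * f * t)
pointer = 24 * np.sin(2 * np.pi * f * t - np.radians(20))
print(gain_and_phase(target, pointer, fs, f))
```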
The influence of sensorimotor experience on the aesthetic evaluation of dance across the life span.
Kirsch, Louise P; Cross, Emily S
2018-01-01
Understanding how action perception, embodiment, and emotion interact is essential for advancing knowledge about how we perceive and interact with each other in a social world. One tool that has proved particularly useful in the past decade for exploring the relationship between perception, action, and affect is dance. Dance is, in its essence, a rich and multisensory art form that can be used to help answer not only basic questions about social cognition but also questions concerning how aging shapes action perception and the role played by affect, emotion, and aesthetics in social perception. In the present study, we used a 1-week physical and visual dance training paradigm to instill varying degrees of sensorimotor experience among non-dancers from three distinct age groups (early adolescents, young adults, and older adults). Our aim was to begin to build an understanding of how aging influences the relationship between action embodiment and affective (or aesthetic) value, at both brain and behavioral levels. On balance, our results point toward a similar positive effect of sensorimotor training on aesthetic evaluations across the life span on a behavioral level, but to rather different neural substrates supporting implicit aesthetic judgment of dance movements at different life stages. Taken together, the present study contributes valuable first insights into the relationship between sensorimotor experience and affective evaluations across ages, and underscores the utility of dance as a stimulus and training intervention for addressing key questions relevant to human neuroscience as well as the arts and humanities. © 2018 Elsevier B.V. All rights reserved.
The functional and structural asymmetries of the superior temporal sulcus.
Specht, Karsten; Wigglesworth, Philip
2018-02-01
The superior temporal sulcus (STS) is an anatomical structure that increasingly interests researchers. This structure appears to receive multisensory input and is involved in several perceptual and cognitive core functions, such as speech perception, audiovisual integration, (biological) motion processing and theory of mind capacities. In addition, the superior temporal sulcus is not only one of the longest sulci of the brain, but it also shows marked functional and structural asymmetries, some of which have only been found in humans. To explore the functional-structural relationships of these asymmetries in more detail, this study combines functional and structural magnetic resonance imaging. Using a speech perception task, an audiovisual integration task, and a theory of mind task, this study again demonstrated an involvement of the STS in these processes, with an expected strong leftward asymmetry for the speech perception task. Furthermore, this study confirmed the earlier described, human-specific asymmetries, namely that the left STS is longer than the right STS and that the right STS is deeper than the left STS. However, this study did not find any relationship between these structural asymmetries and the detected brain activations or their functional asymmetries. This can, on the other hand, give further support to the notion that the structural asymmetry of the STS is not directly related to the functional asymmetry of the speech perception and the language system as a whole, but that it may have other causes and functions. © 2018 The Authors. Scandinavian Journal of Psychology published by Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Giving speech a hand: gesture modulates activity in auditory cortex during speech perception.
Hubbard, Amy L; Wilson, Stephen M; Callan, Daniel E; Dapretto, Mirella
2009-03-01
Viewing hand gestures during face-to-face communication affects speech perception and comprehension. Despite the visible role played by gesture in social interactions, relatively little is known about how the brain integrates hand gestures with co-occurring speech. Here we used functional magnetic resonance imaging (fMRI) and an ecologically valid paradigm to investigate how beat gesture-a fundamental type of hand gesture that marks speech prosody-might impact speech perception at the neural level. Subjects underwent fMRI while listening to spontaneously-produced speech accompanied by beat gesture, nonsense hand movement, or a still body; as additional control conditions, subjects also viewed beat gesture, nonsense hand movement, or a still body all presented without speech. Validating behavioral evidence that gesture affects speech perception, bilateral nonprimary auditory cortex showed greater activity when speech was accompanied by beat gesture than when speech was presented alone. Further, the left superior temporal gyrus/sulcus showed stronger activity when speech was accompanied by beat gesture than when speech was accompanied by nonsense hand movement. Finally, the right planum temporale was identified as a putative multisensory integration site for beat gesture and speech (i.e., here activity in response to speech accompanied by beat gesture was greater than the summed responses to speech alone and beat gesture alone), indicating that this area may be pivotally involved in synthesizing the rhythmic aspects of both speech and gesture. Taken together, these findings suggest a common neural substrate for processing speech and gesture, likely reflecting their joint communicative role in social interactions.
A three-finger multisensory hand for dexterous space robotic tasks
NASA Technical Reports Server (NTRS)
Murase, Yuichi; Komada, Satoru; Uchiyama, Takashi; Machida, Kazuo; Akita, Kenzo
1994-01-01
The National Space Development Agency of Japan will launch ETS-7 in 1997 as a test bed for next-generation space technologies: rendezvous and docking (RV&D) and space robotics. MITI has been developing a three-finger multisensory hand for complex space robotic tasks. The hand can be operated under remote control or autonomously. This paper describes the design and development of the hand and the performance of a breadboard model.
ERIC Educational Resources Information Center
Sparks, Richard; And Others
1992-01-01
A multisensory structured language (MSL) approach was utilized with two groups of at-risk high school students (n=63), taught in either English and Spanish (MSL/ES) or Spanish only. Foreign language aptitude improved for both groups and native language skills for the MSL/ES group. A group receiving traditional foreign language instruction showed…
Multimodal sensorimotor system in unicellular zoospores of a fungus.
Swafford, Andrew J M; Oakley, Todd H
2018-01-19
Complex sensory systems often underlie critical behaviors, including avoiding predators and locating prey, mates and shelter. Multisensory systems that control motor behavior even appear in unicellular eukaryotes, such as Chlamydomonas, which are important laboratory models for sensory biology. However, we know of no unicellular opisthokonts that control motor behavior using a multimodal sensory system. Therefore, existing single-celled models for multimodal sensorimotor integration are very distantly related to animals. Here, we describe a multisensory system that controls the motor function of unicellular fungal zoospores. We found that zoospores of Allomyces arbusculus exhibit both phototaxis and chemotaxis. Furthermore, we report that closely related Allomyces species respond to either the chemical or the light stimuli presented in this study, not both, and likely do not share this multisensory system. This diversity of sensory systems within Allomyces provides a rare example of a comparative framework that can be used to examine the evolution of sensory systems following the gain/loss of available sensory modalities. The tractability of Allomyces and related fungi as laboratory organisms will facilitate detailed mechanistic investigations into the genetic underpinnings of novel photosensory systems, and how multisensory systems may have functioned in early opisthokonts before multicellularity allowed for the evolution of specialized cell types. © 2018. Published by The Company of Biologists Ltd.
Multisensory perceptual learning is dependent upon task difficulty.
De Niear, Matthew A; Koo, Bonhwang; Wallace, Mark T
2016-11-01
There has been a growing interest in developing behavioral tasks to enhance temporal acuity as recent findings have demonstrated changes in temporal processing in a number of clinical conditions. Prior research has demonstrated that perceptual training can enhance temporal acuity both within and across different sensory modalities. Although certain forms of unisensory perceptual learning have been shown to be dependent upon task difficulty, this relationship has not been explored for multisensory learning. The present study sought to determine the effects of task difficulty on multisensory perceptual learning. Prior to and following a single training session, participants completed a simultaneity judgment (SJ) task, which required them to judge whether a visual stimulus (flash) and auditory stimulus (beep) presented in synchrony or at various stimulus onset asynchronies (SOAs) occurred synchronously or asynchronously. During the training session, participants completed the same SJ task but received feedback regarding the accuracy of their responses. Participants were randomly assigned to one of three levels of difficulty during training: easy, moderate, and hard, which were distinguished based on the SOAs used during training. We report that only the most difficult (i.e., hard) training protocol enhanced temporal acuity. We conclude that perceptual training protocols for enhancing multisensory temporal acuity may be optimized by employing audiovisual stimuli for which it is difficult to discriminate temporal synchrony from asynchrony.
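For readers unfamiliar with how simultaneity-judgment data are typically summarized, the sketch below fits a Gaussian to the proportion of "synchronous" responses across SOAs, with the fitted width serving as an index of the temporal binding window. This is a generic illustration, not the study's analysis code, and the SOAs and response proportions are invented.

```python
# Illustrative only: fit a Gaussian to simultaneity-judgment (SJ) data to
# estimate a temporal binding window. Data values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

# SOAs in ms (negative = auditory leading) and proportion of "synchronous" responses.
soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
p_sync = np.array([0.10, 0.25, 0.60, 0.85, 0.95, 0.90, 0.70, 0.35, 0.15])

def gaussian(soa, amplitude, center, width):
    """Amplitude-scaled Gaussian over SOA; 'width' indexes the binding window."""
    return amplitude * np.exp(-((soa - center) ** 2) / (2.0 * width ** 2))

(amplitude, center, width), _ = curve_fit(gaussian, soas, p_sync, p0=[1.0, 0.0, 100.0])
print(f"peak = {amplitude:.2f}, center = {center:.1f} ms, width (sigma) = {width:.1f} ms")
```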
Sanfratello, Lori; Aine, Cheryl; Stephen, Julia
2018-05-25
Impairments in auditory and visual processing are common in schizophrenia (SP). In the unisensory realm, visual deficits are primarily noted for the dorsal visual stream. In addition, insensitivity to timing offsets between stimuli is widely reported for SP. The aim of the present study was to test, at the physiological level, differences in dorsal/ventral stream visual processing and timing sensitivity between SP and healthy controls (HC) using MEG and a simple auditory/visual task utilizing a variety of multisensory conditions. The paradigm included all combinations of synchronous/asynchronous and central/peripheral stimuli, yielding 4 task conditions. Both HC and SP groups showed activation in parietal areas (dorsal visual stream) during all multisensory conditions, with parietal areas showing decreased activation for SP relative to HC, and a significantly delayed peak of activation for SP in the intraparietal sulcus (IPS). We also observed a differential effect of stimulus synchrony on HC and SP parietal responses. Furthermore, a negative correlation was found between SP positive symptoms and activity in IPS. Taken together, our results provide evidence of impairment of the dorsal visual stream in SP during a multisensory task, along with an altered response to timing offsets between presented multisensory stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.
Stapleton, John; Setti, Annalisa; Doheny, Emer P; Kenny, Rose Anne; Newell, Fiona N
2014-02-01
Recent research has provided evidence suggesting a link between inefficient processing of multisensory information and incidence of falling in older adults. Specifically, Setti et al. (Exp Brain Res 209:375-384, 2011) reported that older adults with a history of falling were more susceptible than their healthy, age-matched counterparts to the sound-induced flash illusion. Here, we investigated whether balance control in fall-prone older adults was directly associated with multisensory integration by testing susceptibility to the illusion under two postural conditions: sitting and standing. Whilst standing, fall-prone older adults had a greater body sway than the age-matched healthy older adults and their body sway increased when presented with the audio-visual illusory but not the audio-visual congruent conditions. We also found an increase in susceptibility to the sound-induced flash illusion during standing relative to sitting for fall-prone older adults only. Importantly, no performance differences were found across groups in either the unisensory or non-illusory multisensory conditions across the two postures. These results suggest an important link between multisensory integration and balance control in older adults and have important implications for understanding why some older adults are prone to falling.
Nam, Seung-Min; Kim, Won-Hyo; Yun, Chang-Kyo
2017-04-01
[Purpose] This study aimed to investigate the effects of multisensory dynamic balance training on the thickness of muscles such as the rectus femoris, tibialis anterior, medial gastrocnemius, and lateral gastrocnemius in children with spastic diplegic cerebral palsy, using ultrasonography. [Subjects and Methods] Fifteen children diagnosed with spastic diplegic cerebral palsy were randomly divided into a balance training group and a control group. The experimental group received only multisensory dynamic balance training, while the control group performed general physiotherapy focused on balance and muscle-strengthening exercises based on Neurodevelopmental Treatment. Both groups had a 30-minute therapy session per day, three times a week, for six weeks. Ultrasonographic muscle thickness measurements were obtained to compare and analyze muscle thickness before and after the intervention in each group. [Results] The experimental group had significant increases in muscle thickness in the rectus femoris, tibialis anterior, medial gastrocnemius, and lateral gastrocnemius muscles. The control group had a significant increase in muscle thickness in the tibialis anterior. Between-group comparisons of rectus femoris, medial gastrocnemius, and lateral gastrocnemius thickness showed significant differences. [Conclusion] Multisensory dynamic balance training can be recommended as a treatment method for patients with spastic diplegic cerebral palsy.
Early multisensory interactions affect the competition among multiple visual objects.
Van der Burg, Erik; Talsma, Durk; Olivers, Christian N L; Hickey, Clayton; Theeuwes, Jan
2011-04-01
In dynamic cluttered environments, audition and vision may benefit from each other in determining what deserves further attention and what does not. We investigated the underlying neural mechanisms responsible for attentional guidance by audiovisual stimuli in such an environment. Event-related potentials (ERPs) were measured during visual search through dynamic displays consisting of line elements that randomly changed orientation. Search accuracy improved when a target orientation change was synchronized with an auditory signal as compared to when the auditory signal was absent or synchronized with a distractor orientation change. The ERP data show that behavioral benefits were related to an early multisensory interaction over left parieto-occipital cortex (50-60 ms post-stimulus onset), which was followed by an early positive modulation (80-100 ms) over occipital and temporal areas contralateral to the audiovisual event, an enhanced N2pc (210-250 ms), and a contralateral negative slow wave (CNSW). The early multisensory interaction was correlated with behavioral search benefits, indicating that participants with a strong multisensory interaction benefited the most from the synchronized auditory signal. We suggest that an auditory signal enhances the neural response to a synchronized visual event, which increases the chances of selection in a multiple object environment. Copyright © 2010 Elsevier Inc. All rights reserved.
Visual perception of axes of head rotation
Arnoldussen, D. M.; Goossens, J.; van den Berg, A. V.
2013-01-01
Registration of ego-motion is important to accurately navigate through space. Movements of the head and eye relative to space are registered through the vestibular system and optical flow, respectively. Here, we address three questions concerning the visual registration of self-rotation. (1) Eye-in-head movements provide a link between the motion signals received by sensors in the moving eye and sensors in the moving head. How are these signals combined into an ego-rotation percept? We combined optic flow of simulated forward and rotational motion of the eye with different levels of eye-in-head rotation for a stationary head. We dissociated simulated gaze rotation and head rotation by different levels of eye-in-head pursuit. We found that perceived rotation matches simulated head rotation, not gaze rotation. This rejects a model for perceived self-rotation that relies on the rotation of the gaze line. Rather, eye-in-head signals serve to transform the optic flow's rotation information, which specifies rotation of the scene relative to the eye, into a rotation relative to the head. This suggests that transformed visual self-rotation signals may combine with vestibular signals. (2) Do transformed visual self-rotation signals reflect the arrangement of the semicircular canals (SCC)? Previously, we found sub-regions within MST and V6+ that respond to the speed of the simulated head rotation. Here, we re-analyzed those blood oxygenation level-dependent (BOLD) signals for the presence of a spatial dissociation related to the axes of visually simulated head rotation, such as has been found in sub-cortical regions of various animals. On the contrary, we found a rather uniform BOLD response to simulated rotation about the three SCC axes. (3) We investigated whether subjects' sensitivity to the direction of the head rotation axis shows SCC-axis specificity. We found that sensitivity to head rotation is rather uniformly distributed, suggesting that in human cortex, visuo-vestibular integration is not arranged in the SCC frame. PMID:23919087
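The eye-to-head transformation described in point (1) amounts to ordinary composition of angular velocities; the relation below uses generic notation as background and is not taken from the paper.

```latex
% Illustrative kinematic relation: the rotation of the scene relative to the
% head equals the scene-relative-to-eye rotation specified by optic flow plus
% the eye-in-head rotation (e.g., from a pursuit signal).
\begin{equation}
\boldsymbol{\omega}_{\mathrm{scene/head}}
  = \boldsymbol{\omega}_{\mathrm{scene/eye}}
  + \boldsymbol{\omega}_{\mathrm{eye/head}}
\end{equation}
% An eye-in-head signal can thus convert the eye-referenced rotation in the
% flow field into the head-referenced rotation that dominated perceived
% self-rotation in this study.
```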
Does Visual Performance Influence Head Impact Severity Among High School Football Athletes?
Schmidt, Julianne D; Guskiewicz, Kevin M; Mihalik, Jason P; Blackburn, J Troy; Siegmund, Gunter P; Marshall, Stephen W
2015-11-01
Objective: To compare the odds of sustaining moderate and severe head impacts, rather than mild, between high school football players with high and low visual performance. Design: Prospective quasi-experimental. Setting: Clinical Research Center/On-field. Participants: Thirty-seven high school varsity football players. Methods: Athletes completed the Nike SPARQ Sensory Station visual assessment before the season. Head impact biomechanics were captured at all practices and games using the Head Impact Telemetry System. Each player was classified as either a high or low performer using a median split for each of the following visual performance measures: visual clarity, contrast sensitivity, depth perception, near-far quickness, target capture, perception span, eye-hand coordination, go/no go, and reaction time. We computed the odds of sustaining moderate and severe head impacts against the reference odds of sustaining mild head impacts across groups of high and low performers for each of the visual performance measures. Results: Players with better near-far quickness had increased odds of sustaining moderate [odds ratios (ORs), 1.27; 95% confidence intervals (CIs), 1.04-1.56] and severe head impacts (OR, 1.45; 95% CI, 1.05-2.01) as measured by the Head Impact Telemetry severity profile. High and low performers were at equal odds on all other measures. Conclusions: Better visual performance did not reduce the odds of sustaining higher magnitude head impacts. Visual performance may play less of a role than expected for protecting against higher magnitude head impacts among high school football players. Further research is needed to determine whether visual performance influences concussion risk. Based on our results, we do not recommend using visual training programs at the high school level for the purpose of reducing the odds of sustaining higher magnitude head impacts.
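For orientation, the sketch below shows how an odds ratio and a Wald 95% confidence interval of the kind reported above can be computed from a 2x2 split of impact counts; the counts are invented for illustration and are not the study's data.

```python
# Illustrative only: odds ratio of moderate vs. mild head impacts for high vs.
# low performers, with a Wald 95% confidence interval. Counts are made up.
import math

high_performers = [120, 800]  # [moderate impacts, mild impacts]
low_performers = [90, 820]    # [moderate impacts, mild impacts]

odds_high = high_performers[0] / high_performers[1]
odds_low = low_performers[0] / low_performers[1]
odds_ratio = odds_high / odds_low

# Wald interval on the log odds ratio: exp(log(OR) +/- 1.96 * SE)
standard_error = math.sqrt(sum(1.0 / n for n in high_performers + low_performers))
lower = math.exp(math.log(odds_ratio) - 1.96 * standard_error)
upper = math.exp(math.log(odds_ratio) + 1.96 * standard_error)
print(f"OR = {odds_ratio:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
```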
McGuckian, Thomas B; Cole, Michael H; Pepping, Gert-Jan
2018-04-01
To visually perceive opportunities for action, athletes rely on the movements of their eyes, head and body to explore their surrounding environment. To date, the specific types of technology and their efficacy for assessing the exploration behaviours of association footballers have not been systematically reviewed. This review aimed to synthesise the visual perception and exploration behaviours of footballers according to the task constraints, action requirements of the experimental task, and level of expertise of the athlete, in the context of the technology used to quantify the visual perception and exploration behaviours of footballers. A systematic search for papers that included keywords related to football, technology, and visual perception was conducted. All 38 included articles utilised eye-movement registration technology to quantify visual perception and exploration behaviour. The experimental domain appears to influence the visual perception behaviour of footballers, however no studies investigated exploration behaviours of footballers in open-play situations. Studies rarely utilised representative stimulus presentation or action requirements. To fully understand the visual perception requirements of athletes, it is recommended that future research seek to validate alternate technologies that are capable of investigating the eye, head and body movements associated with the exploration behaviours of footballers during representative open-play situations.
Vaina, Lucia M.; Buonanno, Ferdinando; Rushton, Simon K.
2014-01-01
Background: All contemporary models of perception of locomotor heading from optic flow (the characteristic patterns of retinal motion that result from self-movement) begin with relative motion. Therefore it would be expected that an impairment in the perception of relative motion should impact the ability to judge heading and other 3D motion tasks. Material/Methods: We report two patients with occipital lobe lesions whom we tested on a battery of motion tasks. Patients were impaired on all tests that involved relative motion in the plane (motion discontinuity, form from differences in motion direction or speed). Despite this, they retained the ability to judge their direction of heading relative to a target. A potential confound is that observers can derive information about heading from scale changes, bypassing the need to use optic flow. Therefore we ran further experiments in which we isolated optic flow and scale change. Results: Patients' performance was in normal ranges on both tests. The finding that the ability to perceive heading can be retained despite an impairment in the ability to judge relative motion questions the assumption that heading perception proceeds from initial processing of relative motion. Furthermore, on a collision detection task, SS and SR's performance was significantly better for simulated forward movement of the observer in the 3D scene than for the static observer. This suggests that in spite of severe deficits in relative motion in the frontoparallel (xy) plane, information from self-motion helped the identification of objects moving along an intercepting 3D relative-motion trajectory. Conclusions: This result suggests the potential use of a flow-parsing strategy to detect the trajectory of moving objects in a 3D world when the observer is moving forward. These results have implications for developing rehabilitation strategies for deficits in visually guided navigation. PMID:25183375
Modeling heading and path perception from optic flow in the case of independently moving objects
Raudies, Florian; Neumann, Heiko
2013-01-01
Humans are usually accurate when estimating heading or path from optic flow, even in the presence of independently moving objects (IMOs) in an otherwise rigid scene. To invoke significant biases in perceived heading, IMOs have to be large and obscure the focus of expansion (FOE) in the image plane, which is the point of approach. For the estimation of path during curvilinear self-motion no significant biases were found in the presence of IMOs. What makes humans robust in their estimation of heading or path using optic flow? We derive analytical models of optic flow for linear and curvilinear self-motion using geometric scene models. Heading biases of a linear least squares method, which builds upon these analytical models, are large, larger than those reported for humans. This motivated us to study segmentation cues that are available from optic flow. We derive models of accretion/deletion, expansion/contraction, acceleration/deceleration, local spatial curvature, and local temporal curvature, to be used as cues to segment an IMO from the background. Integrating these segmentation cues into our method of estimating heading or path now explains human psychophysical data and extends, as well as unifies, previous investigations. Our analysis suggests that various cues available from optic flow help to segment IMOs and, thus, make humans' heading and path perception robust in the presence of such IMOs. PMID:23554589
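As background for this kind of analytical model, the standard instantaneous optic-flow equations under perspective projection (in the style of Longuet-Higgins and Prazdny) are sketched below; the notation is generic and is not reproduced from the paper.

```latex
% Image velocity at point (x, y) (focal length 1), with scene depth Z(x, y),
% observer translation T = (T_x, T_y, T_z) and rotation Omega = (Omega_x, Omega_y, Omega_z):
\begin{align}
\dot{x} &= \frac{-T_x + x\,T_z}{Z(x,y)} + \Omega_x\,x y - \Omega_y\,(1 + x^2) + \Omega_z\,y,\\
\dot{y} &= \frac{-T_y + y\,T_z}{Z(x,y)} + \Omega_x\,(1 + y^2) - \Omega_y\,x y - \Omega_z\,x.
\end{align}
% For pure translation (Omega = 0) the flow radiates from the focus of expansion
% (x_{FOE}, y_{FOE}) = (T_x / T_z, T_y / T_z), which specifies heading; an
% independently moving object violates this global pattern, which is what makes
% segmentation cues such as local expansion/contraction informative.
```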
Tilt and Translation Motion Perception during Pitch Tilt with Visual Surround Translation
NASA Technical Reports Server (NTRS)
O'Sullivan, Brita M.; Harm, Deborah L.; Reschke, Millard F.; Wood, Scott J.
2006-01-01
The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Previous studies suggest that multisensory integration is critical for discriminating linear accelerations arising from tilt and translation head motion. Visual input is especially important at low frequencies where canal input is declining. The NASA Tilt Translation Device (TTD) was designed to recreate postflight orientation disturbances by exposing subjects to matching tilt self-motion with conflicting visual surround translation. Previous studies have demonstrated that brief exposures to pitch tilt with fore-aft visual surround translation produced changes in compensatory vertical eye movement responses, postural equilibrium, and motion sickness symptoms. Adaptation appeared greatest with visual scene motion leading (versus lagging) the tilt motion, and the adaptation time constant appeared to be approximately 30 min. The purpose of this study was to compare motion perception when the visual surround translation was in-phase versus out-of-phase with pitch tilt. The in-phase stimulus presented visual surround motion one would experience if the linear acceleration was due to fore-aft self-translation within a stationary surround, while the out-of-phase stimulus had the visual scene motion leading the tilt by 90 deg as previously used. The tilt stimuli in these conditions were asymmetrical, ranging from an upright orientation to 10 deg pitch back. Another objective of the study was to compare motion perception with the in-phase stimulus when the tilts were asymmetrical relative to upright (0 to 10 deg back) versus symmetrical (10 deg forward to 10 deg back). Twelve subjects (6M, 6F, 22-55 yrs) were tested during 3 sessions separated by at least one week. During each of the three sessions (out-of-phase asymmetrical, in-phase asymmetrical, in-phase symmetrical), subjects were exposed to visual surround translation synchronized with pitch tilt at 0.1 Hz for a total of 30 min. Tilt and translation motion perception was obtained from verbal reports and a joystick mounted on a linear stage. Horizontal vergence and vertical eye movements were obtained with a binocular video system. Responses were also obtained during darkness before and following 15 min and 30 min of visual surround translation. Each of the three stimulus conditions involving visual surround translation elicited a significantly reduced sense of perceived tilt and strong linear vection (perceived translation) compared to pre-exposure tilt stimuli in darkness. This increase in perceived translation with reduction in tilt perception was also present in darkness following 15 and 30 min exposures, provided the tilt stimuli were not interrupted. Although not significant, there was a trend for the in-phase asymmetrical stimulus to elicit a stronger sense of both translation and tilt than the out-of-phase asymmetrical stimulus. Surprisingly, the in-phase asymmetrical stimulus also tended to elicit a stronger sense of peak-to-peak translation than the in-phase symmetrical stimulus, even though the range of linear acceleration during the symmetrical stimulus was twice that of the asymmetrical stimulus. These results are consistent with the hypothesis that the central nervous system resolves the ambiguity of inertial motion sensory cues by integrating inputs from visual, vestibular, and somatosensory systems.
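The tilt/translation ambiguity that motivates this paradigm can be stated compactly; the formulation below is the textbook one (with one common sign convention) and is offered only as background, not as the study's model.

```latex
% The otoliths transduce gravito-inertial acceleration, combining gravity g and
% translational acceleration a (one common sign convention):
\begin{equation}
\mathbf{f} = \mathbf{g} - \mathbf{a}.
\end{equation}
% A static pitch tilt \theta and a fore-aft translation with acceleration a_x
% therefore produce the same shear along the naso-occipital axis when
\begin{equation}
a_x \approx g \sin\theta \qquad (\text{e.g., } \theta = 10^\circ \Rightarrow a_x \approx 1.7\ \mathrm{m/s^2}),
\end{equation}
% which is why visual, canal, and somatosensory cues are needed to separate
% tilt from translation at low frequencies.
```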
Urban Multisensory Laboratory, an Approach to Model Urban Space Human Perception
NASA Astrophysics Data System (ADS)
González, T.; Sol, D.; Saenz, J.; Clavijo, D.; García, H.
2017-09-01
An urban sensory lab (USL, or LUS by its Spanish acronym) is a new, avant-garde approach to studying and analyzing a city. This approach enables the development of new methodologies to identify the emotional responses of public-space users. The laboratory combines the qualitative analysis proposed by urbanists with quantitative measures managed by data-analysis applications, allowing USL to go beyond the current borders of urban knowledge. The design-thinking strategy allows us to implement methods to understand the results provided by our technique. In this first approach, the interpretation is made by hand; however, our goal is to combine design thinking and machine learning in order to analyze the qualitative and quantitative data automatically. The results are now being used by students in the Urbanism and Architecture courses to gain a better understanding of public spaces in Puebla, Mexico, and their interaction with people.
Listening-touch, Affect and the Crafting of Medical Bodies through Percussion.
Harris, Anna
2016-03-01
The growing abundance of medical technologies has led to laments over doctors' sensory de-skilling, technologies viewed as replacing diagnosis based on sensory acumen. The technique of percussion has become emblematic of the kinds of skills considered lost. While disappearing from wards, percussion is still taught in medical schools. By ethnographically following how percussion is taught to and learned by students, this article considers the kinds of bodies configured through this multisensory practice. I suggest that three kinds of bodies arise: skilled bodies; affected bodies; and resonating bodies. As these bodies are crafted, I argue that boundaries between bodies of novices and bodies they learn from blur. Attending to an overlooked dimension of bodily configurations in medicine, self-perception, I show that learning percussion functions not only to perpetuate diagnostic craft skills but also as a way of knowing of, and through, the resource always at hand; one's own living breathing body. PMID:27390549
Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D
2016-02-15
The aim of the present study was to characterize the effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 American Sign Language (ASL) learners performed this task in the fMRI scanner. Results indicated that the L2 ASL learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. Copyright © 2015 Elsevier B.V. All rights reserved.
A spatially collocated sound thrusts a flash into awareness
Aller, Máté; Giani, Anette; Conrad, Verena; Watanabe, Masataka; Noppeney, Uta
2015-01-01
To interact effectively with the environment the brain integrates signals from multiple senses. It is currently unclear to what extent spatial information can be integrated across different senses in the absence of awareness. Combining dynamic continuous flash suppression (CFS) and spatial audiovisual stimulation, the current study investigated whether a sound facilitates a concurrent visual flash to elude flash suppression and enter perceptual awareness depending on audiovisual spatial congruency. Our results demonstrate that a concurrent sound boosts unaware visual signals into perceptual awareness. Critically, this process depended on the spatial congruency of the auditory and visual signals pointing towards low level mechanisms of audiovisual integration. Moreover, the concurrent sound biased the reported location of the flash as a function of flash visibility. The spatial bias of sounds on reported flash location was strongest for flashes that were judged invisible. Our results suggest that multisensory integration is a critical mechanism that enables signals to enter conscious perception. PMID:25774126
Fleming, Roland W
2017-09-15
Under typical viewing conditions, human observers effortlessly recognize materials and infer their physical, functional, and multisensory properties at a glance. Without touching materials, we can usually tell whether they would feel hard or soft, rough or smooth, wet or dry. We have vivid visual intuitions about how deformable materials like liquids or textiles respond to external forces and how surfaces like chrome, wax, or leather change appearance when formed into different shapes or viewed under different lighting. These achievements are impressive because the retinal image results from complex optical interactions between lighting, shape, and material, which cannot easily be disentangled. Here I argue that because of the diversity, mutability, and complexity of materials, they pose enormous challenges to vision science: What is material appearance, and how do we measure it? How are material properties estimated and represented? Resolving these questions causes us to scrutinize the basic assumptions of mid-level vision.
Model of human dynamic orientation. Ph.D. Thesis [associated with vestibular stimuli]
NASA Technical Reports Server (NTRS)
Ormsby, C. C.
1974-01-01
The dynamics associated with the perception of orientation were modelled for near-threshold and suprathreshold vestibular stimuli. A model of the information available at the peripheral sensors which was consistent with available neurophysiologic data was developed and served as the basis for the models of the perceptual responses. The central processor was assumed to utilize the information from the peripheral sensors in an optimal (minimum mean square error) manner to produce the perceptual estimates of dynamic orientation. This assumption, coupled with the models of sensory information, determined the form of the model for the central processor. The problem of integrating information from the semi-circular canals and the otoliths to predict the perceptual response to motions which stimulated both organs was studied. A model was developed which was shown to be useful in predicting the perceptual response to multi-sensory stimuli.
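A minimal sketch of the inverse-variance (minimum mean square error) cue combination that such optimal-estimator models assume is given below; it is schematic, not the thesis's dynamic model, and the numerical values are hypothetical.

```python
# Illustrative only: MMSE (inverse-variance) combination of two independent
# Gaussian estimates, e.g., canal-based and otolith-based cues. Values are made up.
import numpy as np

def mmse_combine(estimates, variances):
    """Fuse independent Gaussian estimates by inverse-variance weighting."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    fused = float(np.sum(weights * estimates))
    fused_variance = float(1.0 / np.sum(1.0 / variances))
    return fused, fused_variance

# Hypothetical canal-based and otolith-based estimates of pitch velocity (deg/s).
fused, fused_variance = mmse_combine([4.0, 6.0], [1.0, 4.0])
print(f"fused estimate = {fused:.2f} deg/s, variance = {fused_variance:.2f}")
```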
Using sound-taste correspondences to enhance the subjective value of tasting experiences.
Reinoso Carvalho, Felipe; Van Ee, Raymond; Rychtarikova, Monika; Touhafi, Abdellah; Steenhaut, Kris; Persoone, Dominique; Spence, Charles
2015-01-01
The soundscapes of those places where we eat and drink can influence our perception of taste. Here, we investigated whether contextual sound would enhance the subjective value of a tasting experience. The customers in a chocolate shop were invited to take part in an experiment in which they had to evaluate a chocolate's taste while listening to an auditory stimulus. Four different conditions were presented in a between-participants design. To take a more ecological approach, a pre-recorded piece of popular music and the shop's own soundscape were used as the sonic stimuli. The results revealed that not only did the customers report having a significantly better tasting experience when the sounds were presented as part of the food's identity, but they were also willing to pay significantly more for the experience. The method outlined here offers a new approach to the design of multisensory tasting experiences and gastronomic situations.
Illusory Obesity Triggers Body Dissatisfaction Responses in the Insula and Anterior Cingulate Cortex
Preston, Catherine; Ehrsson, H. Henrik
2016-01-01
In today's Western society, concerns regarding body size and negative feelings toward one's body are all too common. However, little is known about the neural mechanisms underlying negative feelings toward the body and how they relate to body perception and eating-disorder pathology. Here, we used multisensory illusions to elicit illusory ownership of obese and slim bodies during functional magnetic resonance imaging. The results implicate the anterior insula and the anterior cingulate cortex in the development of negative feelings toward the body through functional interactions with the posterior parietal cortex, which mediates perceived obesity. Moreover, cingulate neural responses were modulated by nonclinical eating-disorder psychopathology and were attenuated in females. These results reveal how perceptual and affective body representations interact in the human brain and may help explain the neurobiological underpinnings of eating-disorder vulnerability in women. PMID:27733537