Sample records for multisensory temporal processing

  1. The associations between multisensory temporal processing and symptoms of schizophrenia.

    PubMed

    Stevenson, Ryan A; Park, Sohee; Cochran, Channing; McIntosh, Lindsey G; Noel, Jean-Paul; Barense, Morgan D; Ferber, Susanne; Wallace, Mark T

    2017-01-01

    Recent neurobiological accounts of schizophrenia have included an emphasis on changes in sensory processing. These sensory and perceptual deficits can have a cascading effect onto higher-level cognitive processes and clinical symptoms. One form of sensory dysfunction that has been consistently observed in schizophrenia is altered temporal processing. In this study, we investigated temporal processing within and across the auditory and visual modalities in individuals with schizophrenia (SCZ) and age-matched healthy controls. Individuals with SCZ showed auditory and visual temporal processing abnormalities, as well as multisensory temporal processing dysfunction that extended beyond that attributable to unisensory processing dysfunction. Most importantly, these multisensory temporal deficits were associated with the severity of hallucinations. This link between atypical multisensory temporal perception and clinical symptomatology suggests that clinical symptoms of schizophrenia may be at least partly a result of cascading effects from (multi)sensory disturbances. These results are discussed in terms of underlying neural bases and the possible implications for remediation. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. The Construct of the Multisensory Temporal Binding Window and its Dysregulation in Developmental Disabilities

    PubMed Central

    Wallace, Mark T.; Stevenson, Ryan A.

    2014-01-01

    Behavior, perception and cognition are strongly shaped by the synthesis of information across the different sensory modalities. Such multisensory integration often results in performance and perceptual benefits that reflect the additional information conferred by having cues from multiple senses providing redundant or complementary information. The spatial and temporal relationships of these cues provide powerful statistical information about how these cues should be integrated or “bound” in order to create a unified perceptual representation. Much recent work has examined the temporal factors that are integral to multisensory processing, with much of it focused on the construct of the multisensory temporal binding window – the epoch of time within which stimuli from different modalities are likely to be integrated and perceptually bound. Emerging evidence suggests that this temporal window is altered in a series of neurodevelopmental disorders, including autism, dyslexia and schizophrenia. In addition to their role in sensory processing, these deficits in multisensory temporal function may play an important role in the perceptual and cognitive weaknesses that characterize these clinical disorders. Within this context, a focus on improving the acuity of multisensory temporal function may have important implications for the amelioration of the “higher-order” deficits that serve as the defining features of these disorders. PMID:25128432
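
    The temporal binding window discussed in this record is typically estimated by fitting a psychometric curve to simultaneity-judgment data. Below is a minimal sketch of that procedure in Python; the SOAs and response proportions are illustrative (not data from the studies cited here), and the Gaussian shape with a half-height criterion is one common convention among several.

      # Minimal sketch: estimating a temporal binding window (TBW) from
      # simultaneity-judgment data. SOAs and proportions are illustrative.
      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian(soa, amp, mu, sigma):
          """P('synchronous') modeled as a Gaussian over SOA (ms)."""
          return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

      soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])
      p_sync = np.array([0.10, 0.25, 0.55, 0.85, 0.95, 0.90, 0.70, 0.40, 0.15])

      (amp, mu, sigma), _ = curve_fit(gaussian, soas, p_sync, p0=[1.0, 0.0, 150.0])

      # One convention: window width where the fitted curve crosses half its peak.
      half_width = sigma * np.sqrt(2 * np.log(2.0))
      print(f"center: {mu:.0f} ms, width at half height: {2 * half_width:.0f} ms")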

  3. Combined diffusion-weighted and functional magnetic resonance imaging reveals a temporal-occipital network involved in auditory-visual object processing

    PubMed Central

    Beer, Anton L.; Plank, Tina; Meyer, Georg; Greenlee, Mark W.

    2013-01-01

    Functional magnetic resonance imaging (MRI) showed that the superior temporal and occipital cortex are involved in multisensory integration. Probabilistic fiber tracking based on diffusion-weighted MRI suggests that multisensory processing is supported by white matter connections between auditory cortex and the temporal and occipital lobe. Here, we present a combined functional MRI and probabilistic fiber tracking study that reveals multisensory processing mechanisms that remained undetected by either technique alone. Ten healthy participants passively observed visually presented lip or body movements, heard speech or body action sounds, or were exposed to a combination of both. Bimodal stimulation engaged a temporal-occipital brain network including the multisensory superior temporal sulcus (msSTS), the lateral superior temporal gyrus (lSTG), and the extrastriate body area (EBA). A region-of-interest (ROI) analysis showed multisensory interactions (e.g., subadditive responses to bimodal compared to unimodal stimuli) in the msSTS, the lSTG, and the EBA region. Moreover, sounds elicited responses in the medial occipital cortex. Probabilistic tracking revealed white matter tracts between the auditory cortex and the medial occipital cortex, the inferior occipital cortex (IOC), and the superior temporal sulcus (STS). However, STS terminations of auditory cortex tracts showed limited overlap with the msSTS region. Instead, msSTS was connected to primary sensory regions via intermediate nodes in the temporal and occipital cortex. Similarly, the lSTG and EBA regions showed limited direct white matter connections but instead were connected via intermediate nodes. Our results suggest that multisensory processing in the STS is mediated by separate brain areas that form a distinct network in the lateral temporal and inferior occipital cortex. PMID:23407860

  4. Binding of Sights and Sounds: Age-Related Changes in Multisensory Temporal Processing

    ERIC Educational Resources Information Center

    Hillock, Andrea R.; Powers, Albert R.; Wallace, Mark T.

    2011-01-01

    We live in a multisensory world and one of the challenges the brain is faced with is deciding what information belongs together. Our ability to make assumptions about the relatedness of multisensory stimuli is partly based on their temporal and spatial relationships. Stimuli that are proximal in time and space are likely to be bound together by…

  5. The role of multisensory interplay in enabling temporal expectations.

    PubMed

    Ball, Felix; Michels, Lara E; Thiele, Carsten; Noesselt, Toemme

    2018-01-01

    Temporal regularities can guide our attention to focus on a particular moment in time and to be especially vigilant just then. Previous research provided evidence for the influence of temporal expectation on perceptual processing in unisensory auditory, visual, and tactile contexts. However, in real life we are often exposed to a complex and continuous stream of multisensory events. Here we tested, in a series of experiments, whether temporal expectations can enhance perception in multisensory contexts and whether this enhancement differs from enhancements in unisensory contexts. Our discrimination paradigm contained near-threshold targets (subject-specific 75% discrimination accuracy) embedded in a sequence of distractors. The likelihood of target occurrence (early or late) was manipulated block-wise. Furthermore, we tested whether spatial and modality-specific target uncertainty (i.e. predictable vs. unpredictable target position or modality) would affect temporal expectation (TE) measured with perceptual sensitivity (d′) and response times (RT). In all our experiments, hidden temporal regularities improved performance for expected multisensory targets. Moreover, multisensory performance was unaffected by spatial and modality-specific uncertainty, whereas unisensory TE effects on d′ but not RT were modulated by spatial and modality-specific uncertainty. Additionally, the size of the temporal expectation effect, i.e. the increase in perceptual sensitivity and decrease in RT, scaled linearly with the likelihood of expected targets. Finally, temporal expectation effects were unaffected by varying target position within the stream. Together, our results strongly suggest that participants quickly adapt to novel temporal contexts, that they benefit from multisensory (relative to unisensory) stimulation and that multisensory benefits are maximal if the stimulus-driven uncertainty is highest. We propose that enhanced informational content (i.e. multisensory stimulation) enables the robust extraction of temporal regularities which in turn boost (uni-)sensory representations. Copyright © 2017 Elsevier B.V. All rights reserved.
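
    For readers unfamiliar with the sensitivity measure used here: d′ contrasts the z-transformed hit and false-alarm rates. The sketch below is a generic implementation with hypothetical counts and a standard log-linear correction; it is not the authors' analysis code.

      # Minimal sketch: perceptual sensitivity d' from hit/false-alarm counts.
      from scipy.stats import norm

      def d_prime(hits, misses, false_alarms, correct_rejections):
          """d' = z(hit rate) - z(false-alarm rate); the +0.5/+1 terms are a
          log-linear correction that avoids infinite z-scores at rates of 0 or 1."""
          hr = (hits + 0.5) / (hits + misses + 1.0)
          far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
          return norm.ppf(hr) - norm.ppf(far)

      # Hypothetical counts for temporally expected vs. unexpected targets.
      print("expected target:   d' =", round(d_prime(78, 22, 12, 88), 2))
      print("unexpected target: d' =", round(d_prime(64, 36, 15, 85), 2))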

  6. Temporal processing deficit leads to impaired multisensory binding in schizophrenia.

    PubMed

    Zvyagintsev, Mikhail; Parisi, Carmen; Mathiak, Klaus

    2017-09-01

    Schizophrenia has been characterised by neurodevelopmental dysconnectivity resulting in cognitive and perceptual dysmetria. Hence, patients with schizophrenia may be impaired in detecting the temporal relationship between stimuli in different sensory modalities. However, only a few studies have described deficits in the perception of temporally asynchronous multisensory stimuli in schizophrenia. We examined the perceptual bias and the processing time of synchronous and delayed sounds in the streaming-bouncing illusion in 16 patients with schizophrenia and a matched control group of 18 participants. In both patients and controls, the synchronous sound biased the percept of two moving squares towards bouncing, as opposed to the more frequent streaming percept in the condition without sound. In healthy controls, a delay of the sound presentation significantly reduced the bias and led to prolonged processing time, whereas patients with schizophrenia did not differentiate between this condition and the condition with synchronous sound. Schizophrenia leads to a prolonged window of simultaneity for audiovisual stimuli. Therefore, the temporal processing deficit in schizophrenia can lead to hyperintegration of temporally unmatched multisensory stimuli.

  7. Individual Differences in the Multisensory Temporal Binding Window Predict Susceptibility to Audiovisual Illusions

    ERIC Educational Resources Information Center

    Stevenson, Ryan A.; Zemtsov, Raquel K.; Wallace, Mark T.

    2012-01-01

    Human multisensory systems are known to bind inputs from the different sensory modalities into a unified percept, a process that leads to measurable behavioral benefits. This integrative process can be observed through multisensory illusions, including the McGurk effect and the sound-induced flash illusion, both of which demonstrate the ability of…

  8. Multisensory temporal function and EEG complexity in patients with epilepsy and psychogenic nonepileptic events.

    PubMed

    Noel, Jean-Paul; Kurela, LeAnne; Baum, Sarah H; Yu, Hong; Neimat, Joseph S; Gallagher, Martin J; Wallace, Mark

    2017-05-01

    Cognitive and perceptual comorbidities frequently accompany epilepsy and psychogenic nonepileptic events (PNEE). However, and despite the fact that perceptual function is built upon a multisensory foundation, little knowledge exists concerning multisensory function in these populations. Here, we characterized facets of multisensory processing abilities in patients with epilepsy and PNEE, and probed the relationship between individual resting-state EEG complexity and these psychophysical measures in each patient. We prospectively studied a cohort of patients with epilepsy (N=18) and patients with PNEE (N=20) who were admitted to Vanderbilt's Epilepsy Monitoring Unit (EMU) and weaned off anticonvulsant drugs. Unaffected age-matched persons staying with the patients in the EMU (N=15) were also recruited as controls. All participants performed two tests of multisensory function: an audio-visual simultaneity judgment and an audio-visual redundant target task. Further, in the cohort of patients with epilepsy and PNEE we quantified resting-state EEG gamma power and complexity. Compared with both patients with epilepsy and control subjects, patients with PNEE exhibited significantly poorer acuity in audiovisual temporal function, as evidenced in significantly larger temporal binding windows (i.e., they perceived larger stimulus asynchronies as being presented simultaneously). These differences appeared to be specific to temporal function, as there was no difference among the three groups in a non-temporally based measure of multisensory function, the redundant target task. Further, patients with PNEE exhibited more complex resting-state EEG patterns as compared to patients with epilepsy, and EEG complexity correlated with multisensory temporal performance in a subject-by-subject manner. Taken together, the findings seem to indicate that patients with PNEE bind information from audition and vision over larger temporal intervals when compared with control subjects as well as patients with epilepsy. This difference in multisensory function appears to be specific to the temporal domain, and may be a contributing factor to the behavioral and perceptual alterations seen in this population. Published by Elsevier Inc.

  9. Audiovisual integration in depth: multisensory binding and gain as a function of distance.

    PubMed

    Noel, Jean-Paul; Modi, Kahan; Wallace, Mark T; Van der Stoep, Nathan

    2018-07-01

    The integration of information across sensory modalities is dependent on the spatiotemporal characteristics of the stimuli that are paired. Despite large variation in the distance over which events occur in our environment, relatively little is known regarding how stimulus-observer distance affects multisensory integration. Prior work has suggested that exteroceptive stimuli are integrated over larger temporal intervals in near relative to far space, and that larger multisensory facilitations are evident in far relative to near space. Here, we sought to examine the interrelationship between these previously established distance-related features of multisensory processing. Participants performed an audiovisual simultaneity judgment and redundant target task in near and far space, while audiovisual stimuli were presented at a range of temporal delays (i.e., stimulus onset asynchronies). In line with the previous findings, temporal acuity was poorer in near relative to far space. Furthermore, reaction times to asynchronously presented audiovisual targets suggested a temporal window for fast detection, a range of stimulus asynchronies that was also larger in near as compared to far space. However, the range of reaction times over which multisensory response enhancement was observed was limited to a restricted range of relatively small (i.e., 150 ms) asynchronies, and did not differ significantly between near and far space. Furthermore, for synchronous presentations, these distance-related (i.e., near vs. far) modulations in temporal acuity and multisensory gain correlated negatively at an individual subject level. Thus, the findings support the conclusion that multisensory temporal binding and gain are asymmetrically modulated as a function of distance from the observer, and specify that this relationship is specific to temporally synchronous audiovisual stimulus presentations.

  10. Intracranial Cortical Responses during Visual–Tactile Integration in Humans

    PubMed Central

    Quinn, Brian T.; Carlson, Chad; Doyle, Werner; Cash, Sydney S.; Devinsky, Orrin; Spence, Charles; Halgren, Eric

    2014-01-01

    Sensory integration of touch and sight is crucial to perceiving and navigating the environment. While recent evidence from other sensory modality combinations suggests that low-level sensory areas integrate multisensory information at early processing stages, little is known about how the brain combines visual and tactile information. We investigated the dynamics of multisensory integration between vision and touch using the high spatial and temporal resolution of intracranial electrocorticography in humans. We present a novel, two-step metric for defining multisensory integration. The first step compares the sum of the unisensory responses with the bimodal response to identify multisensory interactions. The second step eliminates the possibility that double addition of sensory responses could be misinterpreted as interactions. Using these criteria, averaged local field potentials and high-gamma-band power demonstrate a functional processing cascade whereby sensory integration occurs late, both anatomically and temporally, in the temporo–parieto–occipital junction (TPOJ) and dorsolateral prefrontal cortex. Results further suggest two neurophysiologically distinct and temporally separated integration mechanisms in TPOJ, while providing direct evidence for local suppression as a dominant mechanism for synthesizing visual and tactile input. These results tend to support earlier concepts of multisensory integration as relatively late and centered in tertiary multimodal association cortices. PMID:24381279
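
    The additive-model logic in the first step can be made concrete with a short sketch: compare the bimodal response against the sum of the unisensory responses at each time point. The signals below are synthetic Gaussians standing in for trial-averaged field potentials; this illustrates the comparison only, not the study's full two-step criterion.

      # Minimal sketch: additive-model comparison for multisensory integration.
      # Synthetic trial-averaged responses; negative interaction = suppression.
      import numpy as np

      t = np.arange(0, 500)                        # time (ms)
      resp_v = np.exp(-((t - 150) ** 2) / 2e3)     # visual-only response
      resp_t = np.exp(-((t - 180) ** 2) / 2e3)     # tactile-only response
      resp_vt = 0.8 * (resp_v + resp_t)            # bimodal response (subadditive)

      interaction = resp_vt - (resp_v + resp_t)
      i_min = int(interaction.argmin())
      print(f"peak suppression {interaction[i_min]:.3f} at {t[i_min]} ms")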

  11. Audio-visual temporal perception in children with restored hearing.

    PubMed

    Gori, Monica; Chilosi, Anna; Forli, Francesca; Burr, David

    2017-05-01

    It is not clear how audio-visual temporal perception develops in children with restored hearing. In this study we measured temporal discrimination thresholds with an audio-visual temporal bisection task in 9 deaf children with restored audition, and 22 typically hearing children. In typically hearing children, audition was more precise than vision, with no gain in multisensory conditions (as previously reported in Gori et al. (2012b)). However, deaf children with restored audition showed similar auditory and visual thresholds, and some evidence of gain in audio-visual temporal multisensory conditions. Interestingly, we found a strong correlation between auditory weighting of multisensory signals and quality of language: patients who gave more weight to audition had better language skills. Similarly, auditory thresholds for the temporal bisection task were also a good predictor of language skills. This result supports the idea that temporal auditory processing is associated with language development. Copyright © 2017. Published by Elsevier Ltd.

  12. Multisensory perceptual learning is dependent upon task difficulty.

    PubMed

    De Niear, Matthew A; Koo, Bonhwang; Wallace, Mark T

    2016-11-01

    There has been a growing interest in developing behavioral tasks to enhance temporal acuity as recent findings have demonstrated changes in temporal processing in a number of clinical conditions. Prior research has demonstrated that perceptual training can enhance temporal acuity both within and across different sensory modalities. Although certain forms of unisensory perceptual learning have been shown to be dependent upon task difficulty, this relationship has not been explored for multisensory learning. The present study sought to determine the effects of task difficulty on multisensory perceptual learning. Prior to and following a single training session, participants completed a simultaneity judgment (SJ) task, which required them to judge whether a visual stimulus (flash) and auditory stimulus (beep) presented in synchrony or at various stimulus onset asynchronies (SOAs) occurred synchronously or asynchronously. During the training session, participants completed the same SJ task but received feedback regarding the accuracy of their responses. Participants were randomly assigned to one of three levels of difficulty during training: easy, moderate, and hard, which were distinguished based on the SOAs used during training. We report that only the most difficult (i.e., hard) training protocol enhanced temporal acuity. We conclude that perceptual training protocols for enhancing multisensory temporal acuity may be optimized by employing audiovisual stimuli for which it is difficult to discriminate temporal synchrony from asynchrony.

  13. "Walking" through the sensory, cognitive, and temporal degradations of healthy aging.

    PubMed

    Paraskevoudi, Nadia; Balcı, Fuat; Vatakis, Argiro

    2018-05-09

    As we age, there is a wide range of changes in motor, sensory, cognitive, and temporal processing due to alterations in the functioning of the central nervous and musculoskeletal systems. Specifically, aging is associated with degradations in gait; altered processing of the individual sensory systems; modifications in executive control, memory, and attention; and changes in temporal processing. These age-related alterations are often inter-related and have been suggested to result from shared neural substrates. Additionally, the overlap between these brain areas and those controlling walking raises the possibility of facilitating performance in several tasks by introducing protocols that can efficiently target all four domains. Attempts to counteract these negative effects of normal aging have been focusing on research to prevent falls and/or enhance cognitive processes, while ignoring the potential multisensory benefits accompanying old age. Research shows that the aging brain tends to increasingly rely on multisensory integration to compensate for degradations in individual sensory systems and for altered neural functioning. This review covers the age-related changes in the above-mentioned domains and the potential to exploit the benefits associated with multisensory integration in aging so as to improve one's mobility and enhance sensory, cognitive, and temporal processing. © 2018 New York Academy of Sciences.

  14. Spatial heterogeneity of cortical receptive fields and its impact on multisensory interactions.

    PubMed

    Carriere, Brian N; Royal, David W; Wallace, Mark T

    2008-05-01

    Investigations of multisensory processing at the level of the single neuron have illustrated the importance of the spatial and temporal relationship of the paired stimuli and their relative effectiveness in determining the product of the resultant interaction. Although these principles provide a good first-order description of the interactive process, they were derived by treating space, time, and effectiveness as independent factors. In the anterior ectosylvian sulcus (AES) of the cat, previous work hinted that the spatial receptive field (SRF) architecture of multisensory neurons might play an important role in multisensory processing due to differences in the vigor of responses to identical stimuli placed at different locations within the SRF. In this study the impact of SRF architecture on cortical multisensory processing was investigated using semichronic single-unit electrophysiological experiments targeting a multisensory domain of the cat AES. The visual and auditory SRFs of AES multisensory neurons exhibited striking response heterogeneity, with SRF architecture appearing to play a major role in the multisensory interactions. The deterministic role of SRF architecture was tightly coupled to the manner in which stimulus location modulated the responsiveness of the neuron. Thus multisensory stimulus combinations at weakly effective locations within the SRF resulted in large (often superadditive) response enhancements, whereas combinations at more effective spatial locations resulted in smaller (additive/subadditive) interactions. These results provide important insights into the spatial organization and processing capabilities of cortical multisensory neurons, features that may provide important clues as to the functional roles played by this area in spatially directed perceptual processes.
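
    The additive/superadditive vocabulary used in this record has a simple arithmetic core: compare the multisensory response with the largest unisensory response (enhancement) and with their sum (the interaction regime). The sketch below uses hypothetical spike counts and a tolerance band of our own choosing; it follows common single-unit conventions rather than this study's exact statistics.

      # Minimal sketch: classifying multisensory interactions from mean responses.
      def classify_interaction(resp_v, resp_a, resp_av, tol=0.05):
          """Enhancement relative to the best unisensory response, and the
          interaction regime relative to the additive prediction (+/- tol)."""
          additive = resp_v + resp_a
          best = max(resp_v, resp_a)
          enhancement = 100.0 * (resp_av - best) / best
          if resp_av > additive * (1 + tol):
              regime = "superadditive"
          elif resp_av < additive * (1 - tol):
              regime = "subadditive"
          else:
              regime = "additive"
          return round(enhancement, 1), regime

      # Weakly effective SRF location: large, superadditive enhancement.
      print(classify_interaction(resp_v=2.0, resp_a=1.5, resp_av=6.0))
      # Strongly effective SRF location: smaller, roughly additive interaction.
      print(classify_interaction(resp_v=8.0, resp_a=6.0, resp_av=14.2))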

  15. The sense of body ownership relaxes temporal constraints for multisensory integration.

    PubMed

    Maselli, Antonella; Kilteni, Konstantina; López-Moliner, Joan; Slater, Mel

    2016-08-03

    Experimental work on body ownership illusions has shown how simple multisensory manipulation can generate the illusory experience of an artificial limb as being part of one's own body. This work highlighted how own-body perception relies on a plastic brain representation emerging from multisensory integration. The flexibility of this representation is reflected in the short-term modulations of physiological states and perceptual processing observed during these illusions. Here, we explore the impact of ownership illusions on the temporal dimension of multisensory integration. We show that, during the illusion, the temporal window for integrating touch on the physical body with touch seen on a virtual body representation increases relative to integration with visual events seen close to, but separate from, the virtual body. We show that this effect is mediated by the ownership illusion. Crucially, the temporal window for visuotactile integration was positively correlated with participants' scores rating the illusory experience of owning the virtual body and touching the object seen in contact with it. Our results corroborate the recently proposed causal inference mechanism for illusory body ownership. As a novelty, they show that the ensuing illusory causal binding between stimuli from the real and fake body relaxes constraints for the integration of bodily signals.

  16. Keeping time in the brain: Autism spectrum disorder and audiovisual temporal processing.

    PubMed

    Stevenson, Ryan A; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Camarata, Stephen; Wallace, Mark T

    2016-07-01

    A growing area of interest and relevance in the study of autism spectrum disorder (ASD) focuses on the relationship between multisensory temporal function and the behavioral, perceptual, and cognitive impairments observed in ASD. Atypical sensory processing is becoming increasingly recognized as a core component of autism, with evidence of atypical processing across a number of sensory modalities. These deviations from typical processing underscore the value of interpreting ASD within a multisensory framework. Furthermore, converging evidence illustrates that these differences in audiovisual processing may be specifically related to temporal processing. This review seeks to bridge the connection between temporal processing and audiovisual perception, and to elaborate on emerging data showing differences in audiovisual temporal function in autism. We also discuss the consequence of such changes, the specific impact on the processing of different classes of audiovisual stimuli (e.g. speech vs. nonspeech, etc.), and the presumptive brain processes and networks underlying audiovisual temporal integration. Finally, possible downstream behavioral implications and possible remediation strategies are outlined. Autism Res 2016, 9: 720-738. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  17. Multisensory processing of naturalistic objects in motion: a high-density electrical mapping and source estimation study.

    PubMed

    Senkowski, Daniel; Saint-Amour, Dave; Kelly, Simon P; Foxe, John J

    2007-07-01

    In everyday life, we continuously and effortlessly integrate the multiple sensory inputs from objects in motion. For instance, the sound and the visual percept of vehicles in traffic provide us with complementary information about the location and motion of vehicles. Here, we used high-density electrical mapping and local auto-regressive average (LAURA) source estimation to study the integration of multisensory objects in motion as reflected in event-related potentials (ERPs). A randomized stream of naturalistic multisensory-audiovisual (AV), unisensory-auditory (A), and unisensory-visual (V) "splash" clips (i.e., a drop falling and hitting a water surface) was presented among non-naturalistic abstract motion stimuli. The visual clip onset preceded the "splash" onset by 100 ms for multisensory stimuli. For naturalistic objects, early multisensory integration effects beginning 120-140 ms after sound onset were observed over the posterior scalp, with distributed sources localized to the occipital cortex, temporal lobe, insula, and medial frontal gyrus (MFG). These effects, together with longer-latency interactions (210-250 and 300-350 ms) found in a widespread network of occipital, temporal, and frontal areas, suggest that naturalistic objects in motion are processed at multiple stages of multisensory integration. The pattern of integration effects differed considerably for non-naturalistic stimuli. Unlike naturalistic objects, no early interactions were found for non-naturalistic objects. The earliest integration effects for non-naturalistic stimuli were observed 210-250 ms after sound onset, with sources including large portions of the inferior parietal cortex (IPC). As such, there were clear differences in the cortical networks activated by multisensory motion stimuli as a consequence of the semantic relatedness (or lack thereof) of the constituent sensory elements.

  18. A model of the temporal dynamics of multisensory enhancement

    PubMed Central

    Rowland, Benjamin A.; Stein, Barry E.

    2014-01-01

    The senses transduce different forms of environmental energy, and the brain synthesizes information across them to enhance responses to salient biological events. We hypothesize that the potency of multisensory integration is attributable to the convergence of independent and temporally aligned signals derived from cross-modal stimulus configurations onto multisensory neurons. The temporal profile of multisensory integration in neurons of the deep superior colliculus (SC) is consistent with this hypothesis. The responses of these neurons to visual, auditory, and combinations of visual–auditory stimuli reveal that multisensory integration takes place in real-time; that is, the input signals are integrated as soon as they arrive at the target neuron. Interactions between cross-modal signals may appear to reflect linear or nonlinear computations on a moment-by-moment basis, the aggregate of which determines the net product of multisensory integration. Modeling observations presented here suggest that the early nonlinear components of the temporal profile of multisensory integration can be explained with a simple spiking neuron model, and do not require more sophisticated assumptions about the underlying biology. A transition from nonlinear “super-additive” computation to linear, additive computation can be accomplished via scaled inhibition. The findings provide a set of design constraints for artificial implementations seeking to exploit the basic principles and potency of biological multisensory integration in contexts of sensory substitution or augmentation. PMID:24374382
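
    The claim that a simple spiking model plus scaled inhibition can move a unit from superadditive toward additive combination can be illustrated with a toy leaky integrate-and-fire neuron. Everything below (drive values, thresholds, the inhibition rule) is an illustrative assumption, not the authors' model.

      # Minimal sketch: LIF unit with inhibition scaled to drive above a threshold.
      def lif_spike_count(drive, inh_gain=0.0, inh_theta=1.3,
                          t_steps=500, dt=1.0, tau=20.0, v_thresh=1.0):
          """Spike count for a constant drive; inhibition grows with the part
          of the drive exceeding inh_theta, compressing strong combined inputs."""
          net = drive - inh_gain * max(0.0, drive - inh_theta)
          v, spikes = 0.0, 0
          for _ in range(t_steps):
              v += dt / tau * (net - v)
              if v >= v_thresh:
                  spikes += 1
                  v = 0.0
          return spikes

      for gain in (0.0, 0.7):    # without vs. with scaled inhibition
          a = lif_spike_count(1.2, gain)
          v = lif_spike_count(1.1, gain)
          av = lif_spike_count(1.2 + 1.1, gain)
          print(f"gain {gain}: A={a}, V={v}, AV={av}, A+V={a + v}")

    Without inhibition the combined drive yields far more spikes than the sum of the unisensory counts (superadditive); with inhibition scaled to the suprathreshold drive, the combined response falls near the additive prediction.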

  19. Perceptual learning shapes multisensory causal inference via two distinct mechanisms

    PubMed Central

    McGovern, David P.; Roudaia, Eugenie; Newell, Fiona N.; Roach, Neil W.

    2016-01-01

    To accurately represent the environment, our brains must integrate sensory signals from a common source while segregating those from independent sources. A reasonable strategy for performing this task is to restrict integration to cues that coincide in space and time. However, because multisensory signals are subject to differential transmission and processing delays, the brain must retain a degree of tolerance for temporal discrepancies. Recent research suggests that the width of this ‘temporal binding window’ can be reduced through perceptual learning, however, little is known about the mechanisms underlying these experience-dependent effects. Here, in separate experiments, we measure the temporal and spatial binding windows of human participants before and after training on an audiovisual temporal discrimination task. We show that training leads to two distinct effects on multisensory integration in the form of (i) a specific narrowing of the temporal binding window that does not transfer to spatial binding and (ii) a general reduction in the magnitude of crossmodal interactions across all spatiotemporal disparities. These effects arise naturally from a Bayesian model of causal inference in which learning improves the precision of audiovisual timing estimation, whilst concomitantly decreasing the prior expectation that stimuli emanate from a common source. PMID:27091411

  20. Perceptual learning shapes multisensory causal inference via two distinct mechanisms.

    PubMed

    McGovern, David P; Roudaia, Eugenie; Newell, Fiona N; Roach, Neil W

    2016-04-19

    To accurately represent the environment, our brains must integrate sensory signals from a common source while segregating those from independent sources. A reasonable strategy for performing this task is to restrict integration to cues that coincide in space and time. However, because multisensory signals are subject to differential transmission and processing delays, the brain must retain a degree of tolerance for temporal discrepancies. Recent research suggests that the width of this 'temporal binding window' can be reduced through perceptual learning, however, little is known about the mechanisms underlying these experience-dependent effects. Here, in separate experiments, we measure the temporal and spatial binding windows of human participants before and after training on an audiovisual temporal discrimination task. We show that training leads to two distinct effects on multisensory integration in the form of (i) a specific narrowing of the temporal binding window that does not transfer to spatial binding and (ii) a general reduction in the magnitude of crossmodal interactions across all spatiotemporal disparities. These effects arise naturally from a Bayesian model of causal inference in which learning improves the precision of audiovisual timing estimation, whilst concomitantly decreasing the prior expectation that stimuli emanate from a common source.
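
    The Bayesian causal-inference account in this abstract reduces to a few lines: the posterior probability of a common cause falls off with the measured asynchrony, and training corresponds to a smaller sensory noise term together with a lower common-cause prior. All parameter values below are illustrative assumptions, not the authors' fitted estimates.

      # Minimal sketch: posterior probability of a common audiovisual cause.
      import numpy as np

      def p_common(delta_t, sigma, prior_common, sigma_indep=200.0):
          """P(common cause | measured asynchrony delta_t in ms). Under a common
          cause the true lag is 0; under independent causes lags are broad."""
          like_c1 = np.exp(-delta_t**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
          s2 = sigma**2 + sigma_indep**2
          like_c2 = np.exp(-delta_t**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
          post_c1 = like_c1 * prior_common
          return post_c1 / (post_c1 + like_c2 * (1 - prior_common))

      for label, sigma, prior in [("pre-training ", 80.0, 0.8),
                                  ("post-training", 50.0, 0.6)]:
          probs = [p_common(dt, sigma, prior) for dt in (0.0, 100.0, 300.0)]
          print(label, [round(float(p), 2) for p in probs])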

  21. Crossmodal association of auditory and visual material properties in infants.

    PubMed

    Ujiie, Yuta; Yamashita, Wakayo; Fujisaki, Waka; Kanazawa, So; Yamaguchi, Masami K

    2018-06-18

    The human perceptual system enables us to extract visual properties of an object's material from auditory information. In monkeys, the neural basis underlying such multisensory association develops through experience of exposure to a material; material information could be processed in the posterior inferior temporal cortex, progressively from the high-order visual areas. In humans, however, the development of this neural representation remains poorly understood. Here, using near-infrared spectroscopy (NIRS), we demonstrated for the first time a mapping between an auditory material property and visual material ("Metal" and "Wood") in the right temporal region of preverbal 4- to 8-month-old infants. Furthermore, we found that infants acquired the audio-visual mapping for a property of the "Metal" material later than for the "Wood" material, since infants form the visual property of the "Metal" material only after approximately 6 months of age. These findings indicate that multisensory processing of material information induces the activation of brain areas related to sound symbolism. Our findings also indicate that the material's familiarity might facilitate the development of multisensory processing during the first year of life.

  22. GABA concentration in superior temporal sulcus predicts gamma power and perception in the sound-induced flash illusion.

    PubMed

    Balz, Johanna; Keil, Julian; Roa Romero, Yadira; Mekle, Ralf; Schubert, Florian; Aydin, Semiha; Ittermann, Bernd; Gallinat, Jürgen; Senkowski, Daniel

    2016-01-15

    In everyday life we are confronted with inputs of multisensory stimuli that need to be integrated across our senses. Individuals vary considerably in how they integrate multisensory information, yet the neurochemical foundations underlying this variability are not well understood. Neural oscillations, especially in the gamma band (>30 Hz), play an important role in multisensory processing. Furthermore, gamma-aminobutyric acid (GABA) neurotransmission contributes to the generation of gamma band oscillations (GBO), which can be sustained by activation of metabotropic glutamate receptors. Hence, differences in the GABA and glutamate systems might contribute to individual differences in multisensory processing. In this combined magnetic resonance spectroscopy and electroencephalography study, we examined the relationships between GABA and glutamate concentrations in the superior temporal sulcus (STS), source-localized GBO, and illusion rate in the sound-induced flash illusion (SIFI). In 39 human volunteers we found robust relationships between GABA concentration, GBO power, and the SIFI perception rate (r values = 0.44 to 0.53). The correlation between GBO power and SIFI perception rate was about twofold higher when the modulating influence of the GABA level was included in the analysis as compared to when it was excluded. No significant effects were obtained for glutamate concentration. Our study suggests that the GABA level shapes individual differences in audiovisual perception through its modulating influence on GBO. GABA neurotransmission could be a promising target for treatment interventions of multisensory processing deficits in clinical populations, such as schizophrenia or autism. Copyright © 2015 Elsevier Inc. All rights reserved.

  23. fMR-adaptation indicates selectivity to audiovisual content congruency in distributed clusters in human superior temporal cortex.

    PubMed

    van Atteveldt, Nienke M; Blau, Vera C; Blomert, Leo; Goebel, Rainer

    2010-02-02

    Efficient multisensory integration is of vital importance for adequate interaction with the environment. In addition to basic binding cues like temporal and spatial coherence, meaningful multisensory information is also bound together by content-based associations. Many functional magnetic resonance imaging (fMRI) studies propose the (posterior) superior temporal cortex (STC) as the key structure for integrating meaningful multisensory information. However, a still unanswered question is how superior temporal cortex encodes content-based associations, especially in light of inconsistent results from studies comparing brain activation to semantically matching (congruent) versus nonmatching (incongruent) multisensory inputs. Here, we used fMR-adaptation (fMR-A) in order to circumvent potential problems with standard fMRI approaches, including spatial averaging and amplitude saturation confounds. We presented repetitions of audiovisual stimuli (letter-speech sound pairs) and manipulated the associative relation between the auditory and visual inputs (congruent/incongruent pairs). We predicted that if multisensory neuronal populations exist in STC and encode audiovisual content relatedness, adaptation should be affected by the manipulated audiovisual relation. The results revealed an occipital-temporal network that adapted independently of the audiovisual relation. Interestingly, several smaller clusters distributed over superior temporal cortex within that network adapted more strongly to congruent than to incongruent audiovisual repetitions, indicating sensitivity to content congruency. These results suggest that the revealed clusters contain multisensory neuronal populations that encode content relatedness by selectively responding to congruent audiovisual inputs, since unisensory neuronal populations are assumed to be insensitive to the audiovisual relation. These findings extend our previously revealed mechanism for the integration of letters and speech sounds and demonstrate that fMR-A is sensitive to multisensory congruency effects that may not be revealed in BOLD amplitude per se.

  24. Multisensory integration and ADHD-like traits: Evidence for an abnormal temporal integration window in ADHD.

    PubMed

    Panagiotidi, Maria; Overton, Paul G; Stafford, Tom

    2017-11-01

    Abnormalities in multimodal processing have been found in many developmental disorders such as autism and dyslexia. However, surprisingly little empirical work has been conducted to test the integrity of multisensory integration in Attention Deficit Hyperactivity Disorder (ADHD). The main aim of the present study was to examine links between symptoms of ADHD (as measured using a self-report scale in a healthy adult population) and the temporal aspects of multisensory processing. More specifically, a Simultaneity Judgement (SJ) and a Temporal Order Judgement (TOJ) task were used in participants with low and high levels of ADHD-like traits to measure, respectively, the temporal integration window and the Just-Noticeable Difference (JND) between the timing of an auditory beep and a visual pattern presented over a broad range of stimulus onset asynchronies. The Point of Subjective Simultaneity (PSS) was also measured in both cases. In the SJ task, participants with high levels of ADHD-like traits considered significantly fewer stimuli to be simultaneous than participants with low levels of ADHD-like traits, and the former were found to have significantly smaller temporal windows of integration (although no difference was found in the PSS in the SJ or TOJ tasks, or in the JND in the latter). This is the first study to identify an abnormal temporal integration window in individuals with ADHD-like traits. Perceived temporal misalignment of two or more modalities can lead to distractibility (e.g., when the stimulus components from different modalities occur separated by too large a temporal gap). Hence, an abnormality in the perception of simultaneity could lead to the increased distractibility seen in ADHD. Copyright © 2017 Elsevier B.V. All rights reserved.
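
    For readers unfamiliar with the PSS/JND terminology, both are usually read off a cumulative Gaussian fitted to temporal-order-judgment data. The sketch below does this with made-up response proportions (positive SOA = visual leading); the 75% point used for the JND is one common convention.

      # Minimal sketch: PSS and JND from a cumulative-Gaussian fit to TOJ data.
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import norm

      def cum_gauss(soa, pss, sigma):
          """P('visual first') as a function of SOA (ms, + = visual leads)."""
          return norm.cdf((soa - pss) / sigma)

      soas = np.array([-200, -120, -60, 0, 60, 120, 200])
      p_visual_first = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

      (pss, sigma), _ = curve_fit(cum_gauss, soas, p_visual_first, p0=[0.0, 80.0])
      jnd = sigma * norm.ppf(0.75)   # SOA step from 50% to 75% correct ordering
      print(f"PSS = {pss:.0f} ms, JND = {jnd:.0f} ms")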

  25. Recalibration of the Multisensory Temporal Window of Integration Results from Changing Task Demands

    PubMed Central

    Mégevand, Pierre; Molholm, Sophie; Nayak, Ashabari; Foxe, John J.

    2013-01-01

    The notion of the temporal window of integration, when applied in a multisensory context, refers to the breadth of the interval across which the brain perceives two stimuli from different sensory modalities as synchronous. It maintains a unitary perception of multisensory events despite physical and biophysical timing differences between the senses. The boundaries of the window can be influenced by attention and past sensory experience. Here we examined whether task demands could also influence the multisensory temporal window of integration. We varied the stimulus onset asynchrony between simple, short-lasting auditory and visual stimuli while participants performed two tasks in separate blocks: a temporal order judgment task that required the discrimination of subtle auditory-visual asynchronies, and a reaction time task to the first incoming stimulus irrespective of its sensory modality. We defined the temporal window of integration as the range of stimulus onset asynchronies where performance was below 75% in the temporal order judgment task, as well as the range of stimulus onset asynchronies where responses showed multisensory facilitation (race model violation) in the reaction time task. In 5 of 11 participants, we observed audio-visual stimulus onset asynchronies where reaction time was significantly accelerated (indicating successful integration in this task) while performance was accurate in the temporal order judgment task (indicating successful segregation in that task). This dissociation suggests that in some participants, the boundaries of the temporal window of integration can adaptively recalibrate in order to optimize performance according to specific task demands. PMID:23951203
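
    The race-model violation used in this study as a marker of integration is straightforward to test: where the CDF of redundant-target reaction times exceeds the summed unisensory CDFs (Miller's bound), a race between independent channels cannot explain the speed-up. The sketch below runs the check on simulated reaction times, not the study's data.

      # Minimal sketch: Miller's race-model inequality on simulated RTs (ms).
      import numpy as np

      rng = np.random.default_rng(1)
      rt_a = rng.normal(320, 40, 200)    # auditory-only
      rt_v = rng.normal(340, 40, 200)    # visual-only
      rt_av = rng.normal(280, 35, 200)   # redundant audiovisual targets

      def ecdf(sample, t):
          """Empirical CDF of `sample` evaluated at each time in `t`."""
          return np.mean(sample[:, None] <= t, axis=0)

      t = np.arange(200, 500, 5)
      bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
      violated = t[ecdf(rt_av, t) > bound]
      if violated.size:
          print(f"race-model violation for t in [{violated[0]}, {violated[-1]}] ms")
      else:
          print("no race-model violation")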

  26. Multisensory connections of monkey auditory cerebral cortex

    PubMed Central

    Smiley, John F.; Falchier, Arnaud

    2009-01-01

    Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources. PMID:19619628

  27. On the relative contributions of multisensory integration and crossmodal exogenous spatial attention to multisensory response enhancement.

    PubMed

    Van der Stoep, N; Spence, C; Nijboer, T C W; Van der Stigchel, S

    2015-11-01

    Two processes that can give rise to multisensory response enhancement (MRE) are multisensory integration (MSI) and crossmodal exogenous spatial attention. It is, however, currently unclear what the relative contribution of each of these is to MRE. We investigated this issue using two tasks that are generally assumed to measure MSI (a redundant target effect task) and crossmodal exogenous spatial attention (a spatial cueing task). One block of trials consisted of unimodal auditory and visual targets designed to provide a unimodal baseline. In two other blocks of trials, the participants were presented with spatially and temporally aligned and misaligned audiovisual (AV) targets (0, 50, 100, and 200 ms SOAs). In the integration block, the participants were instructed to respond to the onset of the first target stimulus that they detected (A or V). The instruction for the cueing block was to respond only to the onset of the visual targets. The targets could appear at one of three locations: left, center, and right. The participants were instructed to respond only to lateral targets. The results indicated that MRE was caused by MSI at the 0 ms SOA. At the 50 ms SOA, both crossmodal exogenous spatial attention and MSI contributed to the observed MRE, whereas the MRE observed at the 100 and 200 ms SOAs was attributable to crossmodal exogenous spatial attention, alerting, and temporal preparation. These results therefore suggest that there may be a temporal window in which both MSI and exogenous crossmodal spatial attention can contribute to multisensory response enhancement. Copyright © 2015 Elsevier B.V. All rights reserved.

  28. Auditory and visual modulation of temporal lobe neurons in voice-sensitive and association cortices.

    PubMed

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K; Petkov, Christopher I

    2014-02-12

    Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies.

  29. Auditory and Visual Modulation of Temporal Lobe Neurons in Voice-Sensitive and Association Cortices

    PubMed Central

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K.

    2014-01-01

    Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies. PMID:24523543

  30. Natural asynchronies in audiovisual communication signals regulate neuronal multisensory interactions in voice-sensitive cortex.

    PubMed

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K; Petkov, Christopher I

    2015-01-06

    When social animals communicate, the onset of informative content in one modality varies considerably relative to the other, such as when visual orofacial movements precede a vocalization. These naturally occurring asynchronies do not disrupt intelligibility or perceptual coherence. However, they occur on time scales where they likely affect integrative neuronal activity in ways that have remained unclear, especially for hierarchically downstream regions in which neurons exhibit temporally imprecise but highly selective responses to communication signals. To address this, we exploited naturally occurring face- and voice-onset asynchronies in primate vocalizations. Using these as stimuli we recorded cortical oscillations and neuronal spiking responses from functional MRI (fMRI)-localized voice-sensitive cortex in the anterior temporal lobe of macaques. We show that the onset of the visual face stimulus resets the phase of low-frequency oscillations, and that the face-voice asynchrony affects the prominence of two key types of neuronal multisensory responses: enhancement or suppression. Our findings show a three-way association between temporal delays in audiovisual communication signals, phase-resetting of ongoing oscillations, and the sign of multisensory responses. The results reveal how natural onset asynchronies in cross-sensory inputs regulate network oscillations and neuronal excitability in the voice-sensitive cortex of macaques, a suggested animal model for human voice areas. These findings also advance predictions on the impact of multisensory input on neuronal processes in face areas and other brain regions.

  31. Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection

    PubMed Central

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067
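
    The core computation implied here, detecting correspondence from shared temporal structure at nonzero lags, amounts to correlating the two event streams across a range of crossmodal offsets. The sketch below builds a synthetic pair of streams with a 120 ms lag and recovers it; the stream statistics and the 10 ms bin size are arbitrary choices for illustration.

      # Minimal sketch: lagged correlation between auditory and visual streams.
      import numpy as np

      rng = np.random.default_rng(2)
      dt_ms = 10                                      # bin size
      visual = (rng.random(500) < 0.1).astype(float)  # stochastic event stream
      auditory = np.roll(visual, 12)                  # same pattern, 120 ms later

      def lagged_corr(a, v, max_lag):
          """Correlation of the two streams at each lag (in bins)."""
          return {lag * dt_ms: np.corrcoef(np.roll(a, -lag), v)[0, 1]
                  for lag in range(-max_lag, max_lag + 1)}

      corr = lagged_corr(auditory, visual, max_lag=25)
      best = max(corr, key=corr.get)
      print(f"best match at lag {best} ms (r = {corr[best]:.2f})")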

  32. Musicians have enhanced audiovisual multisensory binding: experience-dependent effects in the double-flash illusion.

    PubMed

    Bidelman, Gavin M

    2016-10-01

    Musical training is associated with behavioral and neurophysiological enhancements in auditory processing for both musical and nonmusical sounds (e.g., speech). Yet, whether the benefits of musicianship extend beyond enhancements to auditory-specific skills and impact multisensory (e.g., audiovisual) processing has yet to be fully validated. Here, we investigated multisensory integration of auditory and visual information in musicians and nonmusicians using a double-flash illusion, whereby the presentation of multiple auditory stimuli (beeps) concurrent with a single visual object (flash) induces an illusory perception of multiple flashes. We parametrically varied the onset asynchrony between auditory and visual events (leads and lags of ±300 ms) to quantify participants' "temporal window" of integration, i.e., stimuli in which auditory and visual cues were fused into a single percept. Results show that musically trained individuals were both faster and more accurate at processing concurrent audiovisual cues than their nonmusician peers; nonmusicians had a higher susceptibility for responding to audiovisual illusions and perceived double flashes over an extended range of onset asynchronies compared to trained musicians. Moreover, temporal window estimates indicated that musicians' windows (<100 ms) were ~2-3× shorter than nonmusicians' (~200 ms), suggesting more refined multisensory integration and audiovisual binding. Collectively, findings indicate a more refined binding of auditory and visual cues in musically trained individuals. We conclude that experience-dependent plasticity of intensive musical experience extends beyond simple listening skills, improving multimodal processing and the integration of multiple sensory systems in a domain-general manner.

  33. Shifts in Audiovisual Processing in Healthy Aging.

    PubMed

    Baum, Sarah H; Stevenson, Ryan

    2017-09-01

    The integration of information across sensory modalities into unified percepts is a fundamental sensory process upon which a multitude of cognitive processes are based. We review the body of literature exploring aging-related changes in audiovisual integration published over the last five years. Specifically, we review the impact of changes in temporal processing, the influence of the effectiveness of sensory inputs, the role of working memory, and the newer studies of intra-individual variability during these processes. Work in the last five years on bottom-up influences of sensory perception has garnered significant attention. Temporal processing, a driving factor of multisensory integration, has now been shown to decouple from multisensory integration in aging, despite the fact that both decline with age. The impact of stimulus effectiveness also changes with age, with older adults showing maximal benefit from multisensory gain at high signal-to-noise ratios. Following sensory decline, high working memory capacity has now been shown to be somewhat of a protective factor against age-related declines in audiovisual speech perception, particularly in noise. Finally, newer research is emerging that focuses on the general intra-individual variability observed with aging. Overall, the studies of the past five years have replicated and expanded on previous work that highlights the role of bottom-up sensory changes with aging and their influence on audiovisual integration, as well as the top-down influence of working memory.

  14. Audio-tactile integration and the influence of musical training.

    PubMed

    Kuchenbuch, Anja; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Pantev, Christo

    2014-01-01

    Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.

  15. Modality-specific selective attention attenuates multisensory integration.

    PubMed

    Mozolic, Jennifer L; Hugenschmidt, Christina E; Peiffer, Ann M; Laurienti, Paul J

    2008-01-01

    Stimuli occurring in multiple sensory modalities that are temporally synchronous or spatially coincident can be integrated together to enhance perception. Additionally, the semantic content or meaning of a stimulus can influence cross-modal interactions, improving task performance when these stimuli convey semantically congruent or matching information, but impairing performance when they contain non-matching or distracting information. Attention is one mechanism that is known to alter processing of sensory stimuli by enhancing perception of task-relevant information and suppressing perception of task-irrelevant stimuli. It is not known, however, to what extent attention to a single sensory modality can minimize the impact of stimuli in the unattended sensory modality and reduce the integration of stimuli across multiple sensory modalities. Our hypothesis was that modality-specific selective attention would limit processing of stimuli in the unattended sensory modality, resulting in a reduction of performance enhancements produced by semantically matching multisensory stimuli, and a reduction in performance decrements produced by semantically non-matching multisensory stimuli. The results from two experiments utilizing a cued discrimination task demonstrate that selective attention to a single sensory modality prevents the integration of matching multisensory stimuli that is normally observed when attention is divided between sensory modalities. Attention did not reliably alter the amount of distraction caused by non-matching multisensory stimuli on this task; however, these findings highlight a critical role for modality-specific selective attention in modulating multisensory integration.

  16. Natural asynchronies in audiovisual communication signals regulate neuronal multisensory interactions in voice-sensitive cortex

    PubMed Central

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K.; Petkov, Christopher I.

    2015-01-01

    When social animals communicate, the onset of informative content in one modality varies considerably relative to the other, such as when visual orofacial movements precede a vocalization. These naturally occurring asynchronies do not disrupt intelligibility or perceptual coherence. However, they occur on time scales where they likely affect integrative neuronal activity in ways that have remained unclear, especially for hierarchically downstream regions in which neurons exhibit temporally imprecise but highly selective responses to communication signals. To address this, we exploited naturally occurring face- and voice-onset asynchronies in primate vocalizations. Using these as stimuli we recorded cortical oscillations and neuronal spiking responses from functional MRI (fMRI)-localized voice-sensitive cortex in the anterior temporal lobe of macaques. We show that the onset of the visual face stimulus resets the phase of low-frequency oscillations, and that the face–voice asynchrony affects the prominence of two key types of neuronal multisensory responses: enhancement or suppression. Our findings show a three-way association between temporal delays in audiovisual communication signals, phase-resetting of ongoing oscillations, and the sign of multisensory responses. The results reveal how natural onset asynchronies in cross-sensory inputs regulate network oscillations and neuronal excitability in the voice-sensitive cortex of macaques, a suggested animal model for human voice areas. These findings also advance predictions on the impact of multisensory input on neuronal processes in face areas and other brain regions. PMID:25535356
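
    Phase resetting of the kind reported here is commonly quantified with inter-trial phase coherence (ITC). A minimal sketch on synthetic data (not the study's recordings), assuming a theta-band filter and Hilbert-transform phase:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(0)
fs, n_trials, n_samp = 1000, 60, 1000            # 1-s epochs at 1 kHz
t = np.arange(n_samp) / fs

# Synthetic trials: random 5-Hz phase before "face onset" at 0.2 s, common
# phase afterwards (a phase reset), plus noise.
trials = []
for _ in range(n_trials):
    sig = np.sin(2 * np.pi * 5 * t + rng.uniform(0, 2 * np.pi))
    post = t >= 0.2
    sig[post] = np.sin(2 * np.pi * 5 * (t[post] - 0.2))   # reset phase
    trials.append(sig + 0.5 * rng.standard_normal(n_samp))
lfp = np.array(trials)

b, a = butter(3, [3 / (fs / 2), 8 / (fs / 2)], btype="band")   # 3-8 Hz
phase = np.angle(hilbert(filtfilt(b, a, lfp, axis=1), axis=1))

# Inter-trial phase coherence: 0 = random phase, 1 = perfectly aligned.
itc = np.abs(np.exp(1j * phase).mean(axis=0))
print("ITC before onset:", itc[:150].mean().round(2),
      "after onset:", itc[250:600].mean().round(2))
```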

  17. Temporal Ventriloquism Reveals Intact Audiovisual Temporal Integration in Amblyopia.

    PubMed

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-02-01

    We have shown previously that amblyopia involves impaired detection of asynchrony between auditory and visual events. To distinguish whether this impairment represents a defect in temporal integration or nonintegrative multisensory processing (e.g., cross-modal matching), we used the temporal ventriloquism effect, in which visual temporal order judgment (TOJ) is normally enhanced by a lagging auditory click. Participants with amblyopia (n = 9) and normally sighted controls (n = 9) performed a visual TOJ task. Pairs of clicks accompanied the two lights such that the first click preceded the first light, or the second click lagged the second light, by 100, 200, or 450 ms. Baseline audiovisual synchrony and visual-only conditions were also tested. Within both groups, just noticeable differences for the visual TOJ task were significantly reduced compared with baseline in the 100- and 200-ms click lag conditions. Within the amblyopia group, poorer stereo acuity and poorer visual acuity in the amblyopic eye were significantly associated with greater enhancement in visual TOJ performance in the 200-ms click lag condition. Audiovisual temporal integration is intact in amblyopia, as indicated by perceptual enhancement in the temporal ventriloquism effect. Furthermore, poorer stereo acuity and poorer visual acuity in the amblyopic eye are associated with a widened temporal binding window for the effect. These findings suggest that previously reported abnormalities in audiovisual multisensory processing may result from impaired cross-modal matching rather than a diminished capacity for temporal audiovisual integration.
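
    The just noticeable difference (JND) in a TOJ task is conventionally derived from a psychometric fit. A minimal sketch, assuming a cumulative-Gaussian fit; the asynchronies and response rates are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Illustrative visual TOJ data: asynchrony between the two lights (ms) and
# the proportion of "second light first" responses at each asynchrony.
soa = np.array([-90, -60, -30, -15, 15, 30, 60, 90], dtype=float)
p_resp = np.array([0.08, 0.18, 0.35, 0.46, 0.58, 0.70, 0.86, 0.95])

def cum_gauss(x, pss, sigma):
    # psychometric function: cumulative Gaussian
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa, p_resp, p0=[0.0, 40.0])

# JND = half the 25%-75% span of the fit; a smaller JND in the click-lag
# conditions is the temporal-ventriloquism enhancement.
jnd = 0.6745 * sigma
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```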

  18. Being First Matters: Topographical Representational Similarity Analysis of ERP Signals Reveals Separate Networks for Audiovisual Temporal Binding Depending on the Leading Sense.

    PubMed

    Cecere, Roberto; Gross, Joachim; Willis, Ashleigh; Thut, Gregor

    2017-05-24

    In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory-visual (AV) or visual-auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV-VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV-VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV-VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AV maps = VA maps versus AV maps ≠ VA maps. The tRSA results favored the AV maps ≠ VA maps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems. SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA). Copyright © 2017 Cecere et al.
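
    A drastically simplified, per-time-point variant of the map-comparison logic (the study built full spatial cross-correlation matrices and compared them with model matrices); all arrays below are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
n_chan, n_time = 128, 250        # 128 electrodes, 500 ms at 500 Hz

# Random stand-ins for the multisensory components, i.e.
# ERP(AV) - [ERP(A) + ERP(V)] and ERP(VA) - [ERP(V) + ERP(A)].
av_maps = rng.standard_normal((n_chan, n_time))
va_maps = rng.standard_normal((n_chan, n_time))

# Spatial correlation of the AV and VA topographies at each time point.
sim = np.array([np.corrcoef(av_maps[:, i], va_maps[:, i])[0, 1]
                for i in range(n_time)])

# Under "AV maps = VA maps" this profile should stay high at every time
# point; persistently low values favor "AV maps != VA maps".
print("mean spatial correlation:", sim.mean().round(3))
```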

  19. Being First Matters: Topographical Representational Similarity Analysis of ERP Signals Reveals Separate Networks for Audiovisual Temporal Binding Depending on the Leading Sense

    PubMed Central

    2017-01-01

    In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory–visual (AV) or visual–auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV–VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV–VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV–VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AVmaps = VAmaps versus AVmaps ≠ VAmaps. The tRSA results favored the AVmaps ≠ VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems. SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA). PMID:28450537

  20. Superadditive responses in superior temporal sulcus predict audiovisual benefits in object categorization.

    PubMed

    Werner, Sebastian; Noppeney, Uta

    2010-08-01

    Merging information from multiple senses provides a more reliable percept of our environment. Yet, little is known about where and how various sensory features are combined within the cortical hierarchy. Combining functional magnetic resonance imaging and psychophysics, we investigated the neural mechanisms underlying integration of audiovisual object features. Subjects categorized or passively perceived audiovisual object stimuli with the informativeness (i.e., degradation) of the auditory and visual modalities being manipulated factorially. Controlling for low-level integration processes, we show higher level audiovisual integration selectively in the superior temporal sulci (STS) bilaterally. The multisensory interactions were primarily subadditive and even suppressive for intact stimuli but turned into additive effects for degraded stimuli. Consistent with the inverse effectiveness principle, auditory and visual informativeness determine the profile of audiovisual integration in STS similarly to the influence of physical stimulus intensity in the superior colliculus. Importantly, when holding stimulus degradation constant, subjects' audiovisual behavioral benefit predicts their multisensory integration profile in STS: only subjects that benefit from multisensory integration exhibit superadditive interactions, while those that do not benefit show suppressive interactions. In conclusion, superadditive and subadditive integration profiles in STS are functionally relevant and related to behavioral indices of multisensory integration with superadditive interactions mediating successful audiovisual object categorization.
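
    The super/subadditivity labels follow from comparing the bimodal response with the sum of the unimodal ones. A minimal sketch with illustrative ROI estimates (not the study's data):

```python
# Additivity criterion in an STS region of interest; the parameter
# estimates below are illustrative.
resp_a, resp_v, resp_av = 0.6, 0.5, 1.4   # BOLD estimates (a.u.)

interaction = resp_av - (resp_a + resp_v)
label = ("superadditive" if interaction > 0 else
         "subadditive" if interaction < 0 else "additive")
print(f"AV - (A + V) = {interaction:+.2f} -> {label}")
```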

  1. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    PubMed

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we combined psychophysical and neurophysiological measurements to investigate the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesized that long-term multisensory experience alters temporal audiovisual processing even for non-musical stimuli. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results provide strong evidence that long-term musical training alters basic audiovisual temporal processing at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of estimating the timing of audiovisual events.

  2. The Multisensory Nature of Verbal Discourse in Parent-Toddler Interactions.

    PubMed

    Suanda, Sumarga H; Smith, Linda B; Yu, Chen

    Toddlers learn object names in sensory-rich contexts. Many argue that this multisensory experience facilitates learning. Here, we examine how toddlers' multisensory experience is linked to another aspect of their experience associated with better learning: the temporally extended nature of verbal discourse. We observed parent-toddler dyads as they played with, and as parents talked about, a set of objects. Analyses revealed links between the multisensory and extended nature of speech, highlighting interconnections and redundancies in the environment. We discuss the implications of these results for our understanding of early discourse, multisensory communication, and how the learning environment shapes language development.

  3. Temporal factors affecting somatosensory–auditory interactions in speech processing

    PubMed Central

    Ito, Takayuki; Gracco, Vincent L.; Ostry, David J.

    2014-01-01

    Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has been shown also to influence speech perceptual processing (Ito et al., 2009). In the present study, we further addressed the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory–auditory interaction in speech perception. We examined the changes in event-related potentials (ERPs) in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the ERP was reliably different from the two unisensory potentials. More importantly, the magnitude of the ERP difference varied as a function of the relative timing of the somatosensory–auditory stimulation. Event-related activity change due to stimulus timing was seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory–auditory convergence and suggest that the contribution of somatosensory information to speech perceptual processing depends on the specific temporal order of sensory inputs in speech production. PMID:25452733
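
    The multisensory interaction here is the additive-model residue, ERP(bimodal) - [ERP(somatosensory) + ERP(auditory)]. A sketch on synthetic grand averages, using the 160-220 ms window quoted in the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n = 500, 300                       # 600-ms epochs at 500 Hz
erp_som = rng.standard_normal(n)       # stand-ins for grand-average ERPs
erp_aud = rng.standard_normal(n)
erp_bi = rng.standard_normal(n)        # bimodal (somatosensory + auditory)

# Additive-model residue: nonzero values mark a multisensory interaction.
interaction = erp_bi - (erp_som + erp_aud)

# Window reported in the abstract: 160-220 ms after somatosensory onset.
w = slice(int(0.160 * fs), int(0.220 * fs))
print("mean interaction, 160-220 ms:", interaction[w].mean().round(3))
```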

  4. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution

    PubMed Central

    Hertz, Uri; Amedi, Amir

    2015-01-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Second, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in the associative area of the Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756

  5. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution.

    PubMed

    Hertz, Uri; Amedi, Amir

    2015-08-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Second, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in the associative area of the Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. © The Author 2014. Published by Oxford University Press.

  6. Temporal Processing of Audiovisual Stimuli Is Enhanced in Musicians: Evidence from Magnetoencephalography (MEG)

    PubMed Central

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C.; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we combined psychophysical and neurophysiological measurements to investigate the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesized that long-term multisensory experience alters temporal audiovisual processing even for non-musical stimuli. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results provide strong evidence that long-term musical training alters basic audiovisual temporal processing at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of estimating the timing of audiovisual events. PMID:24595014

  7. Neural Correlates of Multisensory Perceptual Learning

    PubMed Central

    Powers, Albert R.; Hevey, Matthew A.; Wallace, Mark T.

    2012-01-01

    The brain’s ability to bind incoming auditory and visual stimuli depends critically on the temporal structure of this information. Specifically, there exists a temporal window of audiovisual integration within which stimuli are highly likely to be perceived as part of the same environmental event. Several studies have described the temporal bounds of this window, but few have investigated its malleability. Recently, our laboratory has demonstrated that a perceptual training paradigm is capable of eliciting a 40% narrowing in the width of this window that is stable for at least one week after cessation of training. In the current study we sought to reveal the neural substrates of these changes. Eleven human subjects completed an audiovisual simultaneity judgment training paradigm, immediately before and after which they performed the same task during an event-related 3T fMRI session. The posterior superior temporal sulcus (pSTS) and areas of auditory and visual cortex exhibited robust BOLD decreases following training, and resting state and effective connectivity analyses revealed significant increases in coupling among these cortices after training. These results provide the first evidence of the neural correlates underlying changes in multisensory temporal binding and that likely represent the substrate for a multisensory temporal binding window. PMID:22553032

  8. Aging-related changes in auditory and visual integration measured with MEG

    PubMed Central

    Stephen, Julia M.; Knoefel, Janice E.; Adair, John; Hart, Blaine; Aine, Cheryl J.

    2010-01-01

    As noted in the aging literature, processing delays often occur in the central nervous system with increasing age, which is often attributable in part to demyelination. In addition, differential slowing between sensory systems has been shown to be most discrepant between visual (up to 20 ms) and auditory systems (< 5 ms). Therefore, we used MEG to measure the multisensory integration response in auditory association cortex in young and elderly participants to better understand the effects of aging on multisensory integration abilities. Results show a main effect for reaction times (RTs); the mean RTs of the elderly were significantly slower than the young. In addition, in the young we found significant facilitation of RTs to the multisensory stimuli relative to both unisensory stimuli, when comparing the cumulative distribution functions, which was not evident for the elderly. We also identified a significant interaction between age and condition in the superior temporal gyrus. In particular, the elderly had larger amplitude responses (~100 ms) to auditory stimuli relative to the young when auditory stimuli alone were presented, whereas the amplitude of responses to the multisensory stimuli was reduced in the elderly, relative to the young. This suppressed cortical multisensory integration response in the elderly, which corresponded with slower RTs and reduced RT facilitation effects in the elderly, has not been reported previously and may be related to poor cortical integration based on timing changes in unisensory processing in the elderly. PMID:20713130
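
    RT facilitation beyond what independent unisensory channels predict is conventionally tested with Miller's race-model inequality on the cumulative distribution functions. A sketch with simulated RTs (the study's data are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated reaction times (ms); distributions are illustrative.
rt_a = rng.normal(420, 60, 200)        # auditory-only trials
rt_v = rng.normal(450, 60, 200)        # visual-only trials
rt_av = rng.normal(380, 55, 200)       # multisensory trials

# Miller's race-model inequality: P(RT_av <= t) <= P(RT_a <= t) + P(RT_v <= t).
# CDF values above the bound indicate genuine integration rather than
# statistical facilitation between independent unisensory channels.
t_grid = np.arange(250, 701, 10)
cdf = lambda rt: np.array([(rt <= t).mean() for t in t_grid])
violation = cdf(rt_av) - np.minimum(cdf(rt_a) + cdf(rt_v), 1.0)
print("max race-model violation:", violation.max().round(3))
```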

  9. Aging-related changes in auditory and visual integration measured with MEG.

    PubMed

    Stephen, Julia M; Knoefel, Janice E; Adair, John; Hart, Blaine; Aine, Cheryl J

    2010-10-22

    As noted in the aging literature, processing delays often occur in the central nervous system with increasing age, which is often attributable in part to demyelination. In addition, differential slowing between sensory systems has been shown to be most discrepant between visual (up to 20 ms) and auditory systems (< 5 ms). Therefore, we used MEG to measure the multisensory integration response in auditory association cortex in young and elderly participants to better understand the effects of aging on multisensory integration abilities. Results show a main effect for reaction times (RTs); the mean RTs of the elderly were significantly slower than the young. In addition, in the young we found significant facilitation of RTs to the multisensory stimuli relative to both unisensory stimuli, when comparing the cumulative distribution functions, which was not evident for the elderly. We also identified a significant interaction between age and condition in the superior temporal gyrus. In particular, the elderly had larger amplitude responses (∼100 ms) to auditory stimuli relative to the young when auditory stimuli alone were presented, whereas the amplitude of responses to the multisensory stimuli was reduced in the elderly, relative to the young. This suppressed cortical multisensory integration response in the elderly, which corresponded with slower RTs and reduced RT facilitation effects, has not been reported previously and may be related to poor cortical integration based on timing changes in unisensory processing in the elderly. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  10. Heterogeneity in the spatial receptive field architecture of multisensory neurons of the superior colliculus and its effects on multisensory integration

    PubMed Central

    Ghose, Dipanwita; Wallace, Mark T.

    2013-01-01

    Multisensory integration has been widely studied in neurons of the mammalian superior colliculus (SC). This has led to the description of various determinants of multisensory integration, including those based on stimulus- and neuron-specific factors. The most widely characterized of these illustrate the importance of the spatial and temporal relationships of the paired stimuli as well as their relative effectiveness in eliciting a response in determining the final integrated output. Although these stimulus-specific factors have generally been considered in isolation (i.e., manipulating stimulus location while holding all other factors constant), they have an intrinsic interdependency that has yet to be fully elucidated. For example, changes in stimulus location will likely also impact both the temporal profile of response and the effectiveness of the stimulus. The importance of better describing this interdependency is further reinforced by the fact that SC neurons have large receptive fields, and that responses at different locations within these receptive fields are far from equivalent. To address these issues, the current study was designed to examine the interdependency between the stimulus factors of space and effectiveness in dictating the multisensory responses of SC neurons. The results show that neuronal responsiveness changes dramatically with changes in stimulus location – highlighting a marked heterogeneity in the spatial receptive fields of SC neurons. More importantly, this receptive field heterogeneity played a major role in the integrative product exhibited by stimulus pairings, such that pairings at weakly responsive locations of the receptive fields resulted in the largest multisensory interactions. Together these results provide greater insight into the interrelationship of the factors underlying multisensory integration in SC neurons, and may have important mechanistic implications for multisensory integration and the role it plays in shaping SC-mediated behaviors. PMID:24183964
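
    The "integrative product" in SC work is usually expressed with the classic interactive index of Meredith and Stein: the percentage change of the multisensory response relative to the best unisensory response. A minimal sketch; the spike counts are illustrative:

```python
# Classic interactive index; spike counts are illustrative, not recorded data.
resp_vis, resp_aud, resp_multi = 4.0, 3.0, 12.0

best_uni = max(resp_vis, resp_aud)
enhancement = 100 * (resp_multi - best_uni) / best_uni
print(f"multisensory enhancement = {enhancement:.0f}%")
# Rerunning this at weakly responsive RF locations (small unisensory
# counts) yields the larger interactions the study reports.
```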

  11. Multisensory speech perception without the left superior temporal sulcus.

    PubMed

    Baum, Sarah H; Martin, Randi C; Hamilton, A Cris; Beauchamp, Michael S

    2012-09-01

    Converging evidence suggests that the left superior temporal sulcus (STS) is a critical site for multisensory integration of auditory and visual information during speech perception. We report a patient, SJ, who suffered a stroke that damaged the left temporo-parietal area, resulting in mild anomic aphasia. Structural MRI showed complete destruction of the left middle and posterior STS, as well as damage to adjacent areas in the temporal and parietal lobes. Surprisingly, SJ demonstrated preserved multisensory integration measured with two independent tests. First, she perceived the McGurk effect, an illusion that requires integration of auditory and visual speech. Second, her perception of morphed audiovisual speech with ambiguous auditory or visual information was significantly influenced by the opposing modality. To understand the neural basis for this preserved multisensory integration, blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) was used to examine brain responses to audiovisual speech in SJ and 23 healthy age-matched controls. In controls, bilateral STS activity was observed. In SJ, no activity was observed in the damaged left STS but in the right STS, more cortex was active in SJ than in any of the normal controls. Further, the amplitude of the BOLD response to McGurk stimuli in the right STS was significantly greater in SJ than in controls. The simplest explanation of these results is a reorganization of SJ's cortical language networks such that the right STS now subserves multisensory integration of speech. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Multisensory Speech Perception Without the Left Superior Temporal Sulcus

    PubMed Central

    Baum, Sarah H.; Martin, Randi C.; Hamilton, A. Cris; Beauchamp, Michael S.

    2012-01-01

    Converging evidence suggests that the left superior temporal sulcus (STS) is a critical site for multisensory integration of auditory and visual information during speech perception. We report a patient, SJ, who suffered a stroke that damaged the left temporo-parietal area, resulting in mild anomic aphasia. Structural MRI showed complete destruction of the left middle and posterior STS, as well as damage to adjacent areas in the temporal and parietal lobes. Surprisingly, SJ demonstrated preserved multisensory integration measured with two independent tests. First, she perceived the McGurk effect, an illusion that requires integration of auditory and visual speech. Second, her perception of morphed audiovisual speech with ambiguous auditory or visual information was significantly influenced by the opposing modality. To understand the neural basis for this preserved multisensory integration, blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) was used to examine brain responses to audiovisual speech in SJ and 23 healthy age-matched controls. In controls, bilateral STS activity was observed. In SJ, no activity was observed in the damaged left STS but in the right STS, more cortex was active in SJ than in any of the normal controls. Further, the amplitude of the BOLD response to McGurk stimuli in the right STS was significantly greater in SJ than in controls. The simplest explanation of these results is a reorganization of SJ's cortical language networks such that the right STS now subserves multisensory integration of speech. PMID:22634292

  13. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan.

    PubMed

    Noel, Jean-Paul; De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.
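
    Rapid recalibration is typically indexed as a shift in the point of subjective simultaneity (PSS) conditioned on the previous trial's leading modality. A crude sketch on simulated judgments; the synchrony-weighted PSS proxy below is a deliberate simplification, not the study's method:

```python
import numpy as np

rng = np.random.default_rng(4)
# Simulated simultaneity judgments: SOA (ms; positive = auditory lags) and
# a noisy synchronous/asynchronous response on each trial.
soa = rng.choice([-300.0, -150.0, 0.0, 150.0, 300.0], size=400)
judged_sync = np.abs(soa + rng.normal(0, 80, soa.size)) < 160

# Rapid recalibration: estimate the PSS separately according to the
# leading modality of the *previous* trial.
prev_aud_lead = np.r_[False, soa[:-1] < 0]

def pss(mask):
    # deliberately crude PSS proxy: synchrony-weighted mean SOA
    return np.average(soa[mask], weights=judged_sync[mask] + 1e-9)

shift = pss(~prev_aud_lead) - pss(prev_aud_lead)
print(f"PSS shift (visual- vs auditory-leading history): {shift:.1f} ms")
```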

  14. Vestibular signals in macaque extrastriate visual cortex are functionally appropriate for heading perception

    PubMed Central

    Liu, Sheng; Angelaki, Dora E.

    2009-01-01

    Visual and vestibular signals converge onto the dorsal medial superior temporal area (MSTd) of the macaque extrastriate visual cortex, which is thought to be involved in multisensory heading perception for spatial navigation. Peripheral otolith information, however, is ambiguous and cannot distinguish linear accelerations experienced during self-motion from those due to changes in spatial orientation relative to gravity. Here we show that, unlike peripheral vestibular sensors but similar to lobules 9 and 10 of the cerebellar vermis (nodulus and uvula), MSTd neurons respond selectively to heading and not to changes in orientation relative to gravity. In support of a role in heading perception, MSTd vestibular responses are also dominated by velocity-like temporal dynamics, which might optimize sensory integration with visual motion information. Unlike the cerebellar vermis, however, MSTd neurons also carry a spatial orientation-independent rotation signal from the semicircular canals, which could be useful in compensating for the effects of head rotation on the processing of optic flow. These findings show that vestibular signals in MSTd are appropriately processed to support a functional role in multisensory heading perception. PMID:19605631

  15. The COGs (context, object, and goals) in multisensory processing.

    PubMed

    ten Oever, Sanne; Romei, Vincenzo; van Atteveldt, Nienke; Soto-Faraco, Salvador; Murray, Micah M; Matusz, Pawel J

    2016-05-01

    Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and "top-down" control processes influencing sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have been traditionally studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer's goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and the cognitive level. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and categorical attributes of naturalistic objects (e.g. tools, animals). In this review we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature review with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.

  16. The interactions of multisensory integration with endogenous and exogenous attention

    PubMed Central

    Tang, Xiaoyu; Wu, Jinglong; Shen, Yong

    2016-01-01

    Stimuli from multiple sensory organs can be integrated into a coherent representation through multiple phases of multisensory processing; this phenomenon is called multisensory integration. Multisensory integration can interact with attention. Here, we propose a framework in which attention modulates multisensory processing in both endogenous (goal-driven) and exogenous (stimulus-driven) ways. Moreover, multisensory integration exerts not only bottom-up but also top-down control over attention. Specifically, we propose the following: (1) endogenous attentional selectivity acts on multiple levels of multisensory processing to determine the extent to which simultaneous stimuli from different modalities can be integrated; (2) integrated multisensory events exert top-down control on attentional capture via multisensory search templates that are stored in the brain; (3) integrated multisensory events can capture attention efficiently, even in quite complex circumstances, due to their increased salience compared to unimodal events and can thus improve search accuracy; and (4) within a multisensory object, endogenous attention can spread from one modality to another in an exogenous manner. PMID:26546734

  17. The interactions of multisensory integration with endogenous and exogenous attention.

    PubMed

    Tang, Xiaoyu; Wu, Jinglong; Shen, Yong

    2016-02-01

    Stimuli from multiple sensory organs can be integrated into a coherent representation through multiple phases of multisensory processing; this phenomenon is called multisensory integration. Multisensory integration can interact with attention. Here, we propose a framework in which attention modulates multisensory processing in both endogenous (goal-driven) and exogenous (stimulus-driven) ways. Moreover, multisensory integration exerts not only bottom-up but also top-down control over attention. Specifically, we propose the following: (1) endogenous attentional selectivity acts on multiple levels of multisensory processing to determine the extent to which simultaneous stimuli from different modalities can be integrated; (2) integrated multisensory events exert top-down control on attentional capture via multisensory search templates that are stored in the brain; (3) integrated multisensory events can capture attention efficiently, even in quite complex circumstances, due to their increased salience compared to unimodal events and can thus improve search accuracy; and (4) within a multisensory object, endogenous attention can spread from one modality to another in an exogenous manner. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. The 4-D approach to visual control of autonomous systems

    NASA Technical Reports Server (NTRS)

    Dickmanns, Ernst D.

    1994-01-01

    Development of a 4-D approach to dynamic machine vision is described. Core elements of this method are spatio-temporal models oriented towards objects and the laws of perspective projection in a forward mode. Integration of multi-sensory measurement data was achieved through spatio-temporal models serving as invariants for object recognition. Situation assessment and long-term predictions were enabled by maintaining a symbolic 4-D image of processes involving objects. Behavioral capabilities were easily realized by state feedback and feed-forward control.
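
    The spatio-temporal models of the 4-D approach amount to recursive state estimation: a dynamical model propagates an object's state forward through time (the fourth dimension), and perspective-projected image measurements correct it. A generic predict/update sketch under that reading, not Dickmanns' implementation; all matrices are illustrative:

```python
import numpy as np

dt = 0.04                                    # 25-Hz video (illustrative)
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity dynamics
H = np.array([[1.0, 0.0]])                   # only position is measured
Q, R = np.eye(2) * 1e-3, np.array([[0.05]])  # process / measurement noise

x, P = np.zeros(2), np.eye(2)                # state estimate and covariance
for z in [0.10, 0.22, 0.29, 0.41]:           # illustrative image positions
    x, P = F @ x, F @ P @ F.T + Q            # predict state forward in time
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x, P = x + K @ (np.array([z]) - H @ x), (np.eye(2) - K @ H) @ P
print("estimated position, velocity:", x.round(3))
```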

  19. Distinct functional contributions of primary sensory and association areas to audiovisual integration in object categorization.

    PubMed

    Werner, Sebastian; Noppeney, Uta

    2010-02-17

    Multisensory interactions have been demonstrated in a distributed neural system encompassing primary sensory and higher-order association areas. However, their distinct functional roles in multisensory integration remain unclear. This functional magnetic resonance imaging study dissociated the functional contributions of three cortical levels to multisensory integration in object categorization. Subjects actively categorized or passively perceived noisy auditory and visual signals emanating from everyday actions with objects. The experiment included two 2 x 2 factorial designs that manipulated either (1) the presence/absence or (2) the informativeness of the sensory inputs. These experimental manipulations revealed three patterns of audiovisual interactions. (1) In primary auditory cortices (PACs), a concurrent visual input increased the stimulus salience by amplifying the auditory response regardless of task-context. Effective connectivity analyses demonstrated that this automatic response amplification is mediated via both direct and indirect [via superior temporal sulcus (STS)] connectivity to visual cortices. (2) In STS and intraparietal sulcus (IPS), audiovisual interactions sustained the integration of higher-order object features and predicted subjects' audiovisual benefits in object categorization. (3) In the left ventrolateral prefrontal cortex (vlPFC), explicit semantic categorization resulted in suppressive audiovisual interactions as an index for multisensory facilitation of semantic retrieval and response selection. In conclusion, multisensory integration emerges at multiple processing stages within the cortical hierarchy. The distinct profiles of audiovisual interactions dissociate audiovisual salience effects in PACs, formation of object representations in STS/IPS and audiovisual facilitation of semantic categorization in vlPFC. Furthermore, in STS/IPS, the profiles of audiovisual interactions were behaviorally relevant and predicted subjects' multisensory benefits in performance accuracy.

  20. A multisensory perspective of working memory

    PubMed Central

    Quak, Michel; London, Raquel Elea; Talsma, Durk

    2015-01-01

    Although our sensory experience is mostly multisensory in nature, research on working memory representations has focused mainly on examining the senses in isolation. Results from the multisensory processing literature make it clear that the senses interact on a more intimate manner than previously assumed. These interactions raise questions regarding the manner in which multisensory information is maintained in working memory. We discuss the current status of research on multisensory processing and the implications of these findings on our theoretical understanding of working memory. To do so, we focus on reviewing working memory research conducted from a multisensory perspective, and discuss the relation between working memory, attention, and multisensory processing in the context of the predictive coding framework. We argue that a multisensory approach to the study of working memory is indispensable to achieve a realistic understanding of how working memory processes maintain and manipulate information. PMID:25954176

  1. Heterogeneity in the spatial receptive field architecture of multisensory neurons of the superior colliculus and its effects on multisensory integration.

    PubMed

    Ghose, D; Wallace, M T

    2014-01-03

    Multisensory integration has been widely studied in neurons of the mammalian superior colliculus (SC). This has led to the description of various determinants of multisensory integration, including those based on stimulus- and neuron-specific factors. The most widely characterized of these illustrate the importance of the spatial and temporal relationships of the paired stimuli as well as their relative effectiveness in eliciting a response in determining the final integrated output. Although these stimulus-specific factors have generally been considered in isolation (i.e., manipulating stimulus location while holding all other factors constant), they have an intrinsic interdependency that has yet to be fully elucidated. For example, changes in stimulus location will likely also impact both the temporal profile of response and the effectiveness of the stimulus. The importance of better describing this interdependency is further reinforced by the fact that SC neurons have large receptive fields, and that responses at different locations within these receptive fields are far from equivalent. To address these issues, the current study was designed to examine the interdependency between the stimulus factors of space and effectiveness in dictating the multisensory responses of SC neurons. The results show that neuronal responsiveness changes dramatically with changes in stimulus location - highlighting a marked heterogeneity in the spatial receptive fields of SC neurons. More importantly, this receptive field heterogeneity played a major role in the integrative product exhibited by stimulus pairings, such that pairings at weakly responsive locations of the receptive fields resulted in the largest multisensory interactions. Together these results provide greater insight into the interrelationship of the factors underlying multisensory integration in SC neurons, and may have important mechanistic implications for multisensory integration and the role it plays in shaping SC-mediated behaviors. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  2. Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration.

    PubMed

    Ikumi, Nara; Soto-Faraco, Salvador

    2016-01-01

    Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping than when they promoted segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands.

  3. Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration

    PubMed Central

    Ikumi, Nara; Soto-Faraco, Salvador

    2017-01-01

    Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping than when they promoted segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands. PMID:28154529

  4. The rapid distraction of attentional resources toward the source of incongruent stimulus input during multisensory conflict.

    PubMed

    Donohue, Sarah E; Todisco, Alexandra E; Woldorff, Marty G

    2013-04-01

    Neuroimaging work on multisensory conflict suggests that the relevant modality receives enhanced processing in the face of incongruency. However, the degree of stimulus processing in the irrelevant modality and the temporal cascade of the attentional modulations in either the relevant or irrelevant modalities are unknown. Here, we employed an audiovisual conflict paradigm with a sensory probe in the task-irrelevant modality (vision) to gauge the attentional allocation to that modality. ERPs were recorded as participants attended to and discriminated spoken auditory letters while ignoring simultaneous bilateral visual letter stimuli that were either fully congruent, fully incongruent, or partially incongruent (one side incongruent, one congruent) with the auditory stimulation. Half of the audiovisual letter stimuli were followed 500-700 msec later by a bilateral visual probe stimulus. As expected, ERPs to the audiovisual stimuli showed an incongruency ERP effect (fully incongruent versus fully congruent) of an enhanced, centrally distributed, negative-polarity wave starting ∼250 msec. More critically here, the sensory ERP components to the visual probes were larger when they followed fully incongruent versus fully congruent multisensory stimuli, with these enhancements greatest on fully incongruent trials with the slowest RTs. In addition, on the slowest-response partially incongruent trials, the P2 sensory component to the visual probes was larger contralateral to the preceding incongruent visual stimulus. These data suggest that, in response to conflicting multisensory stimulus input, the initial cognitive effect is a capture of attention by the incongruent irrelevant-modality input, pulling neural processing resources toward that modality, resulting in rapid enhancement, rather than rapid suppression, of that input.

  5. Representation of vestibular and visual cues to self-motion in ventral intraparietal (VIP) cortex

    PubMed Central

    Chen, Aihua; Deangelis, Gregory C.; Angelaki, Dora E.

    2011-01-01

    Convergence of vestibular and visual motion information is important for self-motion perception. One cortical area that combines vestibular and optic flow signals is the ventral intraparietal area (VIP). We characterized unisensory and multisensory responses of macaque VIP neurons to translations and rotations in three dimensions. Approximately half of VIP cells show significant directional selectivity in response to optic flow, half show tuning to vestibular stimuli, and one-third show multisensory responses. Visual and vestibular direction preferences of multisensory VIP neurons could be congruent or opposite. When visual and vestibular stimuli were combined, VIP responses could be dominated by either input, unlike medial superior temporal area (MSTd) where optic flow tuning typically dominates or the visual posterior sylvian area (VPS) where vestibular tuning dominates. Optic flow selectivity in VIP was weaker than in MSTd but stronger than in VPS. In contrast, vestibular tuning for translation was strongest in VPS, intermediate in VIP, and weakest in MSTd. To characterize response dynamics, direction-time data were fit with a spatiotemporal model in which temporal responses were modeled as weighted sums of velocity, acceleration, and position components. Vestibular responses in VIP reflected balanced contributions of velocity and acceleration, whereas visual responses were dominated by velocity. Timing of vestibular responses in VIP was significantly faster than in MSTd, whereas timing of optic flow responses did not differ significantly among areas. These findings suggest that VIP may be proximal to MSTd in terms of vestibular processing but hierarchically similar to MSTd in terms of optic flow processing. PMID:21849564
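
    The temporal model described here expresses a neuron's response as a weighted sum, r(t) = w_v·v(t) + w_a·a(t) + w_p·p(t). A least-squares sketch on a synthetic motion profile; the weights and data are illustrative:

```python
import numpy as np

# Synthetic 2-s motion profile sampled at 100 Hz.
t = np.linspace(0, 2, 200)
pos = 0.5 * (1 - np.cos(np.pi * t / 2))      # smooth displacement
vel = np.gradient(pos, t)
acc = np.gradient(vel, t)

# Synthetic response dominated by velocity, as reported for MSTd/VIP.
rng = np.random.default_rng(5)
rate = 1.8 * vel + 0.6 * acc + 0.1 * rng.standard_normal(t.size)

# Fit rate(t) ~ w_v*velocity + w_a*acceleration + w_p*position.
X = np.column_stack([vel, acc, pos])
w_v, w_a, w_p = np.linalg.lstsq(X, rate, rcond=None)[0]
print(f"velocity={w_v:.2f}, acceleration={w_a:.2f}, position={w_p:.2f}")
```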

  6. Audiovisual perception in amblyopia: A review and synthesis.

    PubMed

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-05-17

    Amblyopia is a common developmental sensory disorder that has been extensively and systematically investigated as a unisensory visual impairment. However, its effects are increasingly recognized to extend beyond vision to the multisensory domain. Indeed, amblyopia is associated with altered cross-modal interactions in audiovisual temporal perception, audiovisual spatial perception, and audiovisual speech perception. Furthermore, although the visual impairment in amblyopia is typically unilateral, the multisensory abnormalities tend to persist even when viewing with both eyes. Knowledge of the extent and mechanisms of the audiovisual impairments in amblyopia, however, remains in its infancy. This work aims to review our current understanding of audiovisual processing and integration deficits in amblyopia, and considers the possible mechanisms underlying these abnormalities. Copyright © 2018. Published by Elsevier Ltd.

  7. Disintegration of Multisensory Signals from the Real Hand Reduces Default Limb Self-Attribution: An fMRI Study

    PubMed Central

    Guterstam, Arvid; Brozzoli, Claudio; Ehrsson, H. Henrik

    2013-01-01

    The perception of our limbs in space is built upon the integration of visual, tactile, and proprioceptive signals. Accumulating evidence suggests that these signals are combined in areas of premotor, parietal, and cerebellar cortices. However, it remains to be determined whether neuronal populations in these areas integrate hand signals according to basic temporal and spatial congruence principles of multisensory integration. Here, we developed a setup based on advanced 3D video technology that allowed us to manipulate the spatiotemporal relationships of visuotactile (VT) stimuli delivered on a healthy human participant's real hand during fMRI and investigate the ensuing neural and perceptual correlates. Our experiments revealed two novel findings. First, we found responses in premotor, parietal, and cerebellar regions that were dependent upon the spatial and temporal congruence of VT stimuli. This multisensory integration effect required a simultaneous match between the seen and felt postures of the hand, which suggests that congruent visuoproprioceptive signals from the upper limb are essential for successful VT integration. Second, we observed that multisensory conflicts significantly disrupted the default feeling of ownership of the seen real limb, as indexed by complementary subjective, psychophysiological, and BOLD measures. The degree to which self-attribution was impaired could be predicted from the attenuation of neural responses in key multisensory areas. These results elucidate the neural bases of the integration of multisensory hand signals according to basic spatiotemporal principles and demonstrate that the disintegration of these signals leads to “disownership” of the seen real hand. PMID:23946393

  8. Statistical learning of multisensory regularities is enhanced in musicians: An MEG study.

    PubMed

    Paraskevopoulos, Evangelos; Chalas, Nikolas; Kartsidis, Panagiotis; Wollbrink, Andreas; Bamidis, Panagiotis

    2018-07-15

    The present study used magnetoencephalography (MEG) to identify the neural correlates of audiovisual statistical learning, while disentangling the differential contributions of uni- and multi-modal statistical mismatch responses in humans. The applied paradigm was based on a combination of a statistical learning paradigm and a multisensory oddball one, combining an audiovisual, an auditory and a visual stimulation stream, along with the corresponding deviances. Plasticity effects due to musical expertise were investigated by comparing the behavioral and MEG responses of musicians with those of non-musicians. The behavioral results indicated that the learning was successful for both musicians and non-musicians. The unimodal MEG responses were consistent with previous studies, revealing the contribution of Heschl's gyrus for the identification of auditory statistical mismatches and the contribution of medial temporal and visual association areas for the visual modality. The cortical network underlying audiovisual statistical learning was found to be partly shared with and partly distinct from the corresponding unimodal networks, comprising right temporal and left inferior frontal sources. Musicians showed enhanced activation in superior temporal and superior frontal gyrus. Connectivity and information processing flow amongst the sources comprising the cortical network of audiovisual statistical learning, as estimated by transfer entropy, were reorganized in musicians, indicating enhanced top-down processing. This neuroplastic effect showed a cross-modal stability between the auditory and audiovisual modalities. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Time perception impairs sensory-motor integration in Parkinson’s disease

    PubMed Central

    2013-01-01

    It is well known that perception and estimation of time are fundamental for the relationship between humans and their environment. However, this temporal information processing is inefficient in patients with Parkinson's disease (PD), resulting in temporal judgment deficits. In general, the pathophysiology of PD has been described as a dysfunction in the basal ganglia, which is a multisensory integration station. Thus, a deficit in the sensorimotor integration process could explain many of the symptoms of PD, such as changes in time perception. This physiological distortion may be better understood if we analyze the neurobiological model of interval timing, expressed within the conceptual framework of a traditional information-processing model called “Scalar Expectancy Theory”. Therefore, in this review we discuss the pathophysiology and sensorimotor integration process in PD, the theories and basic neural mechanisms involved in temporal processing, and the main clinical findings about the impact of time perception in PD. PMID:24131660

  10. Using time to investigate space: a review of tactile temporal order judgments as a window onto spatial processing in touch

    PubMed Central

    Heed, Tobias; Azañón, Elena

    2014-01-01

    To respond to a touch, it is often necessary to localize it in space, and not just on the skin. The computation of this external spatial location involves the integration of somatosensation with visual and proprioceptive information about current body posture. In recent years, the study of touch localization has received substantial attention and has become a central topic in the research field of multisensory integration. In this review, we will explore important findings from this research, zooming in on one specific experimental paradigm, the temporal order judgment (TOJ) task, which has proven particularly fruitful for the investigation of tactile spatial processing. In a typical TOJ task, participants perform non-speeded judgments about the order of two tactile stimuli presented in rapid succession to different skin sites. This task could, in principle, be solved without relying on external spatial coordinates. However, postural manipulations affect TOJ performance, indicating that external coordinates are in fact computed automatically. We show that this makes the TOJ task a reliable indicator of spatial remapping, and provide an overview of the versatile analysis options for TOJ. We introduce current theories of TOJ and touch localization, and then relate TOJ to behavioral and electrophysiological evidence from other paradigms, probing the benefit of TOJ for the study of spatial processing as well as related topics such as multisensory plasticity, body processing, and pain. PMID:24596561
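
    As a concrete illustration of the standard TOJ analysis, the sketch below fits a cumulative Gaussian to the proportion of "right first" responses as a function of stimulus onset asynchrony, yielding the point of subjective simultaneity (PSS) and the just-noticeable difference (JND). The response proportions are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# SOAs (ms; negative = left site stimulated first) and invented proportions
# of "right first" responses from a hypothetical TOJ block.
soa = np.array([-200.0, -90.0, -55.0, -30.0, 30.0, 55.0, 90.0, 200.0])
p_right_first = np.array([0.05, 0.20, 0.35, 0.45, 0.60, 0.70, 0.85, 0.95])

def cum_gauss(x, pss, sigma):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa, p_right_first, p0=(0.0, 50.0))

# JND: half the SOA span between the 25% and 75% response points.
jnd = sigma * norm.ppf(0.75)
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```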

  11. Unisensory processing and multisensory integration in schizophrenia: A high-density electrical mapping study

    PubMed Central

    Stone, David B.; Urrea, Laura J.; Aine, Cheryl J.; Bustillo, Juan R.; Clark, Vincent P.; Stephen, Julia M.

    2011-01-01

    In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. PMID:21807011
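
    The facilitation test described here follows the classic additive model: the evoked response to the audiovisual stimulus is compared against the sum of the unisensory responses. A minimal sketch with placeholder arrays (not the study's recordings) follows.

```python
import numpy as np

# Hypothetical grand-average ERPs (channels x timepoints) for the auditory,
# visual, and audiovisual conditions; placeholders, not the study's data.
rng = np.random.default_rng(2)
erp_a, erp_v, erp_av = (rng.normal(size=(64, 400)) for _ in range(3))

# Additive-model comparison: AV versus (A + V). A nonzero residual indicates
# a multisensory interaction; a larger absolute magnitude for AV than for the
# summed unisensory responses indicates multisensory facilitation.
summed = erp_a + erp_v
interaction = erp_av - summed
facilitation = np.abs(erp_av).mean() - np.abs(summed).mean()
print(f"Mean absolute-magnitude facilitation: {facilitation:.3f}")
```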

  12. Spatio-temporal processing of tactile stimuli in autistic children

    PubMed Central

    Wada, Makoto; Suzuki, Mayuko; Takaki, Akiko; Miyao, Masutomo; Spence, Charles; Kansaku, Kenji

    2014-01-01

    Altered multisensory integration has been reported in autism; however, little is known about how the autistic brain processes spatio-temporal information concerning tactile stimuli. We report a study in which a crossed-hands illusion was investigated in autistic children. Neurotypical individuals often experience a subjective reversal of temporal order judgments when their hands are stimulated while crossed, and the illusion is known to be acquired in early childhood. However, under those conditions where the somatotopic representation is given priority over the actual spatial location of the hands, such reversals may not occur. Here, we showed that the illusory reversal was significantly smaller in autistic children than in neurotypical children. Furthermore, in an additional experiment, young boys with higher Autism Spectrum Quotient (AQ) scores generally showed a smaller crossed-hands deficit. These results suggest that rudimentary spatio-temporal processing of tactile stimuli exists in autistic children, and that this altered processing may interfere with the development of an external frame of reference in real-life situations. PMID:25100146

  13. Atypical rapid audio-visual temporal recalibration in autism spectrum disorders.

    PubMed

    Noel, Jean-Paul; De Niear, Matthew A; Stevenson, Ryan; Alais, David; Wallace, Mark T

    2017-01-01

    Changes in sensory and multisensory function are increasingly recognized as a common phenotypic characteristic of Autism Spectrum Disorders (ASD). Furthermore, much recent evidence suggests that sensory disturbances likely play an important role in contributing to social communication weaknesses, one of the core diagnostic features of ASD. An established sensory disturbance observed in ASD is reduced audiovisual temporal acuity. In the current study, we substantially extend these explorations of multisensory temporal function within the framework that an inability to rapidly recalibrate to changes in audiovisual temporal relations may play an important and under-recognized role in ASD. In the paradigm, we present ASD and typically developing (TD) children and adolescents with asynchronous audiovisual stimuli of varying levels of complexity and ask them to perform a simultaneity judgment (SJ). In the critical analysis, we test audiovisual temporal processing on trial t as a function of trial t - 1. The results demonstrate that individuals with ASD fail to rapidly recalibrate to audiovisual asynchronies in an equivalent manner to their TD counterparts for simple and non-linguistic stimuli (i.e., flashes and beeps, hand-held tools), but exhibit comparable rapid recalibration for speech stimuli. These results are discussed in terms of prior work showing a speech-specific deficit in audiovisual temporal function in ASD, and in light of current theories of autism focusing on sensory noise and stability of perceptual representations. Autism Res 2017, 10: 121-129. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  14. Unisensory processing and multisensory integration in schizophrenia: a high-density electrical mapping study.

    PubMed

    Stone, David B; Urrea, Laura J; Aine, Cheryl J; Bustillo, Juan R; Clark, Vincent P; Stephen, Julia M

    2011-10-01

    In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss

    PubMed Central

    Brooks, Cassandra J.; Chan, Yu Man; Anderson, Andrew J.; McKendrick, Allison M.

    2018-01-01

    Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information. PMID:29867415

  16. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss.

    PubMed

    Brooks, Cassandra J; Chan, Yu Man; Anderson, Andrew J; McKendrick, Allison M

    2018-01-01

    Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information.

  17. The fMRI BOLD response to unisensory and multisensory smoking cues in nicotine-dependent adults

    PubMed Central

    Cortese, Bernadette M.; Uhde, Thomas W.; Brady, Kathleen T.; McClernon, F. Joseph; Yang, Qing X.; Collins, Heather R.; LeMatty, Todd; Hartwell, Karen J.

    2015-01-01

    Given that the vast majority of functional magnetic resonance imaging (fMRI) studies of drug cue reactivity use unisensory visual cues, but that multisensory cues may elicit greater craving-related brain responses, the current study sought to compare the fMRI BOLD response to unisensory visual and multisensory (visual plus odor) smoking cues in 17 nicotine-dependent adult cigarette smokers. Brain activation to smoking-related, compared to neutral, pictures was assessed under cigarette smoke and odorless odor conditions. While smoking pictures elicited a pattern of activation consistent with the addiction literature, the multisensory (odor + picture) smoking cues elicited significantly greater and more widespread activation in mainly frontal and temporal regions. BOLD signal elicited by the multisensory, but not the unisensory, cues was also significantly related to participants’ level of control over craving. Results demonstrated that the co-presentation of cigarette smoke odor with smoking-related visual cues, compared to the visual cues alone, elicited greater levels of craving-related brain activation in key regions implicated in reward. These preliminary findings support future research aimed at a better understanding of multisensory integration of drug cues and craving. PMID:26475784

  18. The fMRI BOLD response to unisensory and multisensory smoking cues in nicotine-dependent adults.

    PubMed

    Cortese, Bernadette M; Uhde, Thomas W; Brady, Kathleen T; McClernon, F Joseph; Yang, Qing X; Collins, Heather R; LeMatty, Todd; Hartwell, Karen J

    2015-12-30

    Given that the vast majority of functional magnetic resonance imaging (fMRI) studies of drug cue reactivity use unisensory visual cues, but that multisensory cues may elicit greater craving-related brain responses, the current study sought to compare the fMRI BOLD response to unisensory visual and multisensory (visual plus odor) smoking cues in 17 nicotine-dependent adult cigarette smokers. Brain activation to smoking-related, compared to neutral, pictures was assessed under cigarette smoke and odorless odor conditions. While smoking pictures elicited a pattern of activation consistent with the addiction literature, the multisensory (odor+picture) smoking cues elicited significantly greater and more widespread activation in mainly frontal and temporal regions. BOLD signal elicited by the multisensory, but not the unisensory, cues was also significantly related to participants' level of control over craving. Results demonstrated that the co-presentation of cigarette smoke odor with smoking-related visual cues, compared to the visual cues alone, elicited greater levels of craving-related brain activation in key regions implicated in reward. These preliminary findings support future research aimed at a better understanding of multisensory integration of drug cues and craving. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. Multisensory integration in the basal ganglia.

    PubMed

    Nagy, Attila; Eördegh, Gabriella; Paróczy, Zsuzsanna; Márkus, Zita; Benedek, György

    2006-08-01

    Sensorimotor co-ordination in mammals is achieved predominantly via the activity of the basal ganglia. To investigate the underlying multisensory information processing, we recorded the neuronal responses in the caudate nucleus (CN) and substantia nigra (SN) of anaesthetized cats to visual, auditory or somatosensory stimulation alone and also to their combinations, i.e. multisensory stimuli. The main goal of the study was to ascertain whether multisensory stimulation provides more information to the neurons than do the individual sensory components. A majority of the investigated SN and CN multisensory units exhibited significant cross-modal interactions. The multisensory response enhancements were either additive or superadditive; multisensory response depressions were also detected. CN and SN cells with facilitatory and inhibitory interactions were found in each multisensory combination. The strengths of the multisensory interactions did not differ in the two structures. A significant inverse correlation was found between the strengths of the best unimodal responses and the magnitudes of the multisensory response enhancements, i.e. the neurons with the weakest net unimodal responses exhibited the strongest enhancement effects. The onset latencies of the responses of the integrative CN and SN neurons to the multisensory stimuli were significantly shorter than those to the unimodal stimuli. These results provide evidence that the multisensory CN and SN neurons, similarly to those in the superior colliculus and related structures, have the ability to integrate multisensory information. Multisensory integration may help in the effective processing of sensory events and the changes in the environment during motor actions controlled by the basal ganglia.
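
    The additive/superadditive distinction is conventionally quantified by comparing the combined-stimulus response both with the most effective unisensory response (the enhancement index of Meredith and Stein) and with the unisensory sum; a minimal sketch with invented firing rates follows.

```python
def enhancement_index(cm, sm_max):
    """Percent multisensory enhancement, (CM - SMmax) / SMmax * 100, where CM
    is the combined-stimulus response and SMmax the best unisensory response."""
    return (cm - sm_max) / sm_max * 100.0

# Invented example rates (spikes/s) for one multisensory unit.
visual, auditory, combined = 8.0, 5.0, 14.0

print(f"Enhancement: {enhancement_index(combined, max(visual, auditory)):.0f}%")
# The response is superadditive if it exceeds the sum of the unisensory rates.
print("superadditive" if combined > visual + auditory else "additive/subadditive")
```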

  20. Temporal event structure and timing in schizophrenia: preserved binding in a longer "now".

    PubMed

    Martin, Brice; Giersch, Anne; Huron, Caroline; van Wassenhove, Virginie

    2013-01-01

    Patients with schizophrenia experience a loss of temporal continuity or subjective fragmentation along the temporal dimension. Here, we develop the hypothesis that impaired temporal awareness results from a perturbed structuring of events in time, i.e., canonical neural dynamics. To address this, 26 patients and their matched controls took part in two psychophysical studies using desynchronized audiovisual speech. Two tasks were used and compared: first, an identification task testing for multisensory binding impairments in which participants reported what they heard while looking at a speaker's face; in a second task, we tested the perceived simultaneity of the same audiovisual speech stimuli. In both tasks, we used McGurk fusion and combination stimuli, which are classic, ecologically valid multisensory illusions. First, and contrary to previous reports, our results show that patients do not significantly differ from controls in their rate of illusory reports. Second, the illusory reports of patients in the identification task were more sensitive to audiovisual speech desynchronies than those of controls. Third, and surprisingly, patients considered audiovisual speech to be synchronized for longer delays than controls. As such, the temporal tolerance profile observed in a temporal judgment task was less of a predictor for sensory binding in schizophrenia than it was in controls. We interpret our results as an impairment of temporal event structuring in schizophrenia which does not specifically affect sensory binding operations but rather the explicit access to timing information associated here with audiovisual speech processing. Our findings are discussed in the context of current neurophysiological frameworks for the binding and the structuring of sensory events in time. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Interactions between the spatial and temporal stimulus factors that influence multisensory integration in human performance.

    PubMed

    Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T

    2012-05-01

    In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.

  2. Impairments in multisensory processing are not universal to the autism spectrum: no evidence for crossmodal priming deficits in Asperger syndrome.

    PubMed

    David, Nicole; R Schneider, Till; Vogeley, Kai; Engel, Andreas K

    2011-10-01

    Individuals suffering from autism spectrum disorders (ASD) often show a tendency for detail- or feature-based perception (also referred to as "local processing bias") instead of the more holistic stimulus processing typical for unaffected people. This local processing bias has been demonstrated for the visual and auditory domains and there is evidence that multisensory processing may also be affected in ASD. Most multisensory processing paradigms have used social-communicative stimuli, such as human speech or faces, probing the processing of simultaneously occurring sensory signals. Multisensory processing, however, is not limited to simultaneous stimulation. In this study, we investigated whether multisensory processing deficits in ASD persist when semantically complex but nonsocial stimuli are presented in succession. Fifteen adult individuals with Asperger syndrome and 15 control persons participated in a visual-audio priming task, which required the classification of sounds that were either primed by semantically congruent or incongruent preceding pictures of objects. As expected, performance on congruent trials was faster and more accurate compared with incongruent trials (crossmodal priming effect). The Asperger group, however, did not differ significantly from the control group. Our results do not support a general multisensory processing deficit that is universal to the entire autism spectrum. Copyright © 2011, International Society for Autism Research, Wiley-Liss, Inc.

  3. Incidental Category Learning and Cognitive Load in a Multisensory Environment across Childhood

    ERIC Educational Resources Information Center

    Broadbent, H. J.; Osborne, T.; Rea, M.; Peng, A.; Mareschal, D.; Kirkham, N. Z.

    2018-01-01

    Multisensory information has been shown to facilitate learning (Bahrick & Lickliter, 2000; Broadbent, White, Mareschal, & Kirkham, 2017; Jordan & Baker, 2011; Shams & Seitz, 2008). However, although research has examined the modulating effect of unisensory and multisensory distractors on multisensory processing, the extent to which…

  4. The Audiovisual Temporal Binding Window Narrows in Early Childhood

    ERIC Educational Resources Information Center

    Lewkowicz, David J.; Flom, Ross

    2014-01-01

    Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked…

  5. The multisensory function of the human primary visual cortex.

    PubMed

    Murray, Micah M; Thelen, Antonia; Thut, Gregor; Romei, Vincenzo; Martuzzi, Roberto; Matusz, Pawel J

    2016-03-01

    It has been nearly 10 years since Ghazanfar and Schroeder (2006) proposed that the neocortex is essentially multisensory in nature. However, it is only recently that sufficient hard evidence supporting this proposal has accrued. We review evidence that activity within the human primary visual cortex plays an active role in multisensory processes and directly impacts behavioural outcome. This evidence emerges from a full palette of human brain imaging and brain mapping methods with which multisensory processes are quantitatively assessed by taking advantage of the particular strengths of each technique as well as advances in signal analyses. Several general conclusions about multisensory processes in primary visual cortex of humans are supported relatively solidly. First, haemodynamic methods (fMRI/PET) show that there is both convergence and integration occurring within primary visual cortex. Second, primary visual cortex is involved in multisensory processes during early post-stimulus stages (as revealed by EEG/ERP/ERFs as well as TMS). Third, multisensory effects in primary visual cortex directly impact behaviour and perception, as revealed by correlational (EEG/ERPs/ERFs) as well as more causal measures (TMS/tACS). While the provocative claim of Ghazanfar and Schroeder (2006) that the whole of neocortex is multisensory in function has yet to be demonstrated, this can now be considered established in the case of the human primary visual cortex. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Stepping to phase-perturbed metronome cues: multisensory advantage in movement synchrony but not correction

    PubMed Central

    Wright, Rachel L.; Spurgeon, Laura C.; Elliott, Mark T.

    2014-01-01

    Humans can synchronize movements with auditory beats or rhythms without apparent effort. This ability to entrain to the beat is considered automatic, such that any perturbations are corrected for, even if the perturbation was not consciously noted. Temporal correction of upper limb (e.g., finger tapping) and lower limb (e.g., stepping) movements to a phase perturbed auditory beat usually results in individuals being back in phase after just a few beats. When a metronome is presented in more than one sensory modality, a multisensory advantage is observed, with reduced temporal variability in finger tapping movements compared to unimodal conditions. Here, we investigate synchronization of lower limb movements (stepping in place) to auditory, visual and combined auditory-visual (AV) metronome cues. In addition, we compare movement corrections to phase advance and phase delay perturbations in the metronome for the three sensory modality conditions. We hypothesized that, as with upper limb movements, there would be a multisensory advantage, with stepping variability being lowest in the bimodal condition. As such, we further expected correction to the phase perturbation to be quickest in the bimodal condition. Our results revealed lower variability in the asynchronies between foot strikes and the metronome beats in the bimodal condition, compared to unimodal conditions. However, while participants corrected substantially quicker to perturbations in auditory compared to visual metronomes, there was no multisensory advantage in the phase correction task—correction under the bimodal condition was almost identical to the auditory-only (AO) condition. On the whole, we noted that corrections in the stepping task were smaller than those previously reported for finger tapping studies. We conclude that temporal corrections are not only affected by the reliability of the sensory information, but also the complexity of the movement itself. PMID:25309397
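
    Correction of this kind is often summarized with a linear phase-correction model, in which a fixed fraction alpha of each asynchrony is corrected on the next movement cycle. The simulation below is our illustration of that model, not the authors' analysis; a larger alpha (as observed here for auditory cues) returns to phase faster than a smaller one (as observed for visual cues).

```python
import numpy as np

def simulate_correction(alpha, perturbation_ms=50.0, n_steps=10, noise_sd=5.0):
    """Linear phase correction: A[n + 1] = (1 - alpha) * A[n] + noise."""
    rng = np.random.default_rng(3)
    asynchrony = np.empty(n_steps)
    asynchrony[0] = perturbation_ms          # the phase perturbation
    for n in range(n_steps - 1):
        asynchrony[n + 1] = (1 - alpha) * asynchrony[n] + rng.normal(0, noise_sd)
    return asynchrony

print(simulate_correction(alpha=0.7).round(1))   # fast correction
print(simulate_correction(alpha=0.3).round(1))   # slow correction
```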

  7. Stepping to phase-perturbed metronome cues: multisensory advantage in movement synchrony but not correction.

    PubMed

    Wright, Rachel L; Elliott, Mark T

    2014-01-01

    Humans can synchronize movements with auditory beats or rhythms without apparent effort. This ability to entrain to the beat is considered automatic, such that any perturbations are corrected for, even if the perturbation was not consciously noted. Temporal correction of upper limb (e.g., finger tapping) and lower limb (e.g., stepping) movements to a phase perturbed auditory beat usually results in individuals being back in phase after just a few beats. When a metronome is presented in more than one sensory modality, a multisensory advantage is observed, with reduced temporal variability in finger tapping movements compared to unimodal conditions. Here, we investigate synchronization of lower limb movements (stepping in place) to auditory, visual and combined auditory-visual (AV) metronome cues. In addition, we compare movement corrections to phase advance and phase delay perturbations in the metronome for the three sensory modality conditions. We hypothesized that, as with upper limb movements, there would be a multisensory advantage, with stepping variability being lowest in the bimodal condition. As such, we further expected correction to the phase perturbation to be quickest in the bimodal condition. Our results revealed lower variability in the asynchronies between foot strikes and the metronome beats in the bimodal condition, compared to unimodal conditions. However, while participants corrected substantially quicker to perturbations in auditory compared to visual metronomes, there was no multisensory advantage in the phase correction task: correction under the bimodal condition was almost identical to the auditory-only (AO) condition. On the whole, we noted that corrections in the stepping task were smaller than those previously reported for finger tapping studies. We conclude that temporal corrections are not only affected by the reliability of the sensory information, but also the complexity of the movement itself.

  8. Audiovisual integration supports face-name associative memory formation.

    PubMed

    Lee, Hweeling; Stirnberg, Rüdiger; Stöcker, Tony; Axmacher, Nikolai

    2017-10-01

    Prior multisensory experience influences how we perceive our environment, and hence how memories are encoded for subsequent retrieval. This study investigated if audiovisual (AV) integration and associative memory formation rely on overlapping or distinct processes. Our functional magnetic resonance imaging results demonstrate that the neural mechanisms underlying AV integration and associative memory overlap substantially. In particular, activity in anterior superior temporal sulcus (STS) is increased during AV integration and also determines the success of novel AV face-name association formation. Dynamic causal modeling results further demonstrate how the anterior STS interacts with the associative memory system to facilitate successful memory formation for AV face-name associations. Specifically, the connection of fusiform gyrus to anterior STS is enhanced while the reverse connection is reduced when participants subsequently remembered both face and name. Collectively, our results demonstrate how multisensory associative memories can be formed for subsequent retrieval.

  9. An autism-associated serotonin transporter variant disrupts multisensory processing.

    PubMed

    Siemann, J K; Muller, C L; Forsberg, C G; Blakely, R D; Veenstra-VanderWeele, J; Wallace, M T

    2017-03-21

    Altered sensory processing is observed in many children with autism spectrum disorder (ASD), with growing evidence that these impairments extend to the integration of information across the different senses (that is, multisensory function). The serotonin system has an important role in sensory development and function, and alterations of serotonergic signaling have been suggested to have a role in ASD. A gain-of-function coding variant in the serotonin transporter (SERT) associates with sensory aversion in humans, and when expressed in mice produces traits associated with ASD, including disruptions in social and communicative function and repetitive behaviors. The current study set out to test whether these mice also exhibit changes in multisensory function when compared with wild-type (WT) animals on the same genetic background. Mice were trained to respond to auditory and visual stimuli independently before being tested under visual, auditory and paired audiovisual (multisensory) conditions. WT mice exhibited significant gains in response accuracy under audiovisual conditions. In contrast, although the SERT mutant animals learned the auditory and visual tasks comparably to WT littermates, they failed to show behavioral gains under multisensory conditions. We believe these results provide the first behavioral evidence of multisensory deficits in a genetic mouse model related to ASD and implicate the serotonin system in multisensory processing and in the multisensory changes seen in ASD.

  10. Sensory processes modulate differences in multi-component behavior and cognitive control between childhood and adulthood.

    PubMed

    Gohil, Krutika; Bluschke, Annet; Roessner, Veit; Stock, Ann-Kathrin; Beste, Christian

    2017-10-01

    Many everyday tasks require executive functions to achieve a certain goal. Quite often, this requires the integration of information derived from different sensory modalities. Children are less likely to integrate information from different modalities and, at the same time, also do not command fully developed executive functions, as compared to adults. Yet still, the role of developmental age-related effects on multisensory integration processes has not been examined within the context of multicomponent behavior (i.e., the concatenation of different executive subprocesses) until now. This is problematic because differences in multisensory integration might actually explain a significant amount of the developmental effects that have traditionally been attributed to changes in executive functioning. We therefore examined this question using a systems neurophysiological approach combining electroencephalogram (EEG) recordings and source localization analyses. The results show that differences in how children and adults accomplish multicomponent behavior do not solely depend on developmental differences in executive functioning. Instead, the observed developmental differences in response selection processes (reflected by the P3 ERP) were largely dependent on the complexity of integrating temporally separated stimuli from different modalities. This effect was related to activation differences in medial frontal and inferior parietal cortices. Primary perceptual gating or attentional selection processes (P1 and N1 ERPs) were not affected. The results show that differences in multisensory integration explain parts of the transformations in cognitive processes between childhood and adulthood that have traditionally been attributed to changes in executive functioning, especially when these require the integration of multiple modalities during response selection. Hum Brain Mapp 38:4933-4945, 2017. © 2017 Wiley Periodicals, Inc.

  11. A Computational Analysis of Neural Mechanisms Underlying the Maturation of Multisensory Speech Integration in Neurotypical Children and Those on the Autism Spectrum

    PubMed Central

    Cuppini, Cristiano; Ursino, Mauro; Magosso, Elisa; Ross, Lars A.; Foxe, John J.; Molholm, Sophie

    2017-01-01

    Failure to appropriately develop multisensory integration (MSI) of audiovisual speech may affect a child's ability to attain optimal communication. Studies have shown protracted development of MSI into late-childhood and identified deficits in MSI in children with an autism spectrum disorder (ASD). Currently, the neural basis of acquisition of this ability is not well understood. Here, we developed a computational model informed by neurophysiology to analyze possible mechanisms underlying MSI maturation, and its delayed development in ASD. The model posits that strengthening of feedforward and cross-sensory connections, responsible for the alignment of auditory and visual speech sound representations in posterior superior temporal gyrus/sulcus, can explain behavioral data on the acquisition of MSI. This was simulated by a training phase during which the network was exposed to unisensory and multisensory stimuli, and projections were crafted by Hebbian rules of potentiation and depression. In its mature architecture, the network also reproduced the well-known multisensory McGurk speech effect. Deficits in audiovisual speech perception in ASD were well accounted for by fewer multisensory exposures, compatible with a lack of attention, but not by reduced synaptic connectivity or synaptic plasticity. PMID:29163099
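
    A minimal sketch of the kind of Hebbian rule the model describes: a cross-sensory connection is potentiated when auditory and visual units are co-active during multisensory exposure and slowly depressed otherwise, so fewer multisensory exposures leave the connection weaker. The rates and training loop are illustrative assumptions, not the published model.

```python
import numpy as np

rng = np.random.default_rng(4)
w = 0.1                      # cross-sensory connection weight
lr_pot, lr_dep = 0.05, 0.01  # potentiation and depression rates (assumed)

for trial in range(500):
    paired = rng.random() < 0.5                          # multisensory exposure?
    pre = 1.0 if paired else 0.0                         # auditory unit activity
    post = 1.0 if paired else float(rng.random() < 0.1)  # visual unit activity

    if pre * post > 0:
        w += lr_pot * (1.0 - w)        # Hebbian potentiation, bounded at 1
    else:
        w -= lr_dep * w                # slow depression toward 0

print(f"Cross-sensory weight after training: {w:.2f}")
```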

  12. Modality distribution of sensory neurons in the feline caudate nucleus and the substantia nigra.

    PubMed

    Márkus, Zita; Eördegh, Gabriella; Paróczy, Zsuzsanna; Benedek, G; Nagy, A

    2008-09-01

    Despite extensive analysis of the motor functions of the basal ganglia and the fact that multisensory information processing appears critical for the execution of their behavioral action, little is known concerning the sensory functions of the caudate nucleus (CN) and the substantia nigra (SN). In the present study, we set out to describe the sensory modality distribution and to determine the proportions of multisensory units within the CN and the SN. The separate single sensory modality tests demonstrated that a majority of the neurons responded to only one modality, so that they seemed to be unimodal. In contrast with these findings, a large proportion of these neurons exhibited significant multisensory cross-modal interactions. Thus, these neurons should also be classified as multisensory. Our results suggest that a surprisingly high proportion of sensory neurons in the basal ganglia are multisensory, and demonstrate that an analysis without a consideration of multisensory cross-modal interactions may strongly underrepresent the number of multisensory units. We conclude that a majority of the sensory neurons in the CN and SN process multisensory information and only a minority of these units are clearly unimodal.

  13. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding

    PubMed Central

    Desantis, Andrea; Haggard, Patrick

    2016-01-01

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063

  14. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-12-16

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events.

  15. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex.

    PubMed

    Scott, Gregory D; Karns, Christina M; Dow, Mark W; Stevens, Courtney; Neville, Helen J

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf adults.
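
    The region-of-interest logic (contrast peripheral against perifoveal signal change within an individually defined Heschl's gyrus mask) can be sketched in a few lines; the maps and mask below are random placeholders, not the study's data.

```python
import numpy as np

# Hypothetical voxelwise signal-change maps for the two eccentricity
# conditions, plus a binary Heschl's gyrus mask (all invented placeholders).
rng = np.random.default_rng(6)
shape = (40, 48, 36)
peripheral = rng.normal(0.2, 1.0, shape)
perifoveal = rng.normal(0.0, 1.0, shape)
heschl_mask = rng.random(shape) < 0.01       # stand-in anatomical ROI

# ROI analysis: mean peripheral-minus-perifoveal contrast within the mask.
contrast = (peripheral - perifoveal)[heschl_mask].mean()
print(f"Peripheral > perifoveal contrast in ROI: {contrast:.3f}")
```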

  16. Multisensory constraints on awareness

    PubMed Central

    Deroy, Ophelia; Chen, Yi-Chuan; Spence, Charles

    2014-01-01

    Given that multiple senses are often stimulated at the same time, perceptual awareness is most likely to take place in multisensory situations. However, theories of awareness are based on studies and models established for a single sense (mostly vision). Here, we consider the methodological and theoretical challenges raised by taking a multisensory perspective on perceptual awareness. First, we consider how well tasks designed to study unisensory awareness perform when used in multisensory settings, stressing that studies using binocular rivalry, bistable figure perception, continuous flash suppression, the attentional blink, repetition blindness and backward masking can demonstrate multisensory influences on unisensory awareness, but fall short of tackling multisensory awareness directly. Studies interested in the latter phenomenon rely on a method of subjective contrast and can, at best, delineate conditions under which individuals report experiencing a multisensory object or two unisensory objects. As there is not a perfect match between these conditions and those in which multisensory integration and binding occur, the link between awareness and binding advocated for visual information processing needs to be revised for multisensory cases. These challenges point at the need to question the very idea of multisensory awareness. PMID:24639579

  17. Prediction and constraint in audiovisual speech perception

    PubMed Central

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. PMID:25890390

  18. Seeing voices: High-density electrical mapping and source-analysis of the multisensory mismatch negativity evoked during the McGurk illusion.

    PubMed

    Saint-Amour, Dave; De Sanctis, Pierfilippo; Molholm, Sophie; Ritter, Walter; Foxe, John J

    2007-02-01

    Seeing a speaker's facial articulatory gestures powerfully affects speech perception, helping us overcome noisy acoustical environments. One particularly dramatic illustration of visual influences on speech perception is the "McGurk illusion", where dubbing an auditory phoneme onto video of an incongruent articulatory movement can often lead to illusory auditory percepts. This illusion is so strong that even in the absence of any real change in auditory stimulation, it activates the automatic auditory change-detection system, as indexed by the mismatch negativity (MMN) component of the auditory event-related potential (ERP). We investigated the putative left hemispheric dominance of McGurk-MMN using high-density ERPs in an oddball paradigm. Topographic mapping of the initial McGurk-MMN response showed a highly lateralized left hemisphere distribution, beginning at 175 ms. Subsequently, scalp activity was also observed over bilateral fronto-central scalp with a maximal amplitude at approximately 290 ms, suggesting later recruitment of right temporal cortices. Strong left hemisphere dominance was again observed during the last phase of the McGurk-MMN waveform (350-400 ms). Source analysis indicated bilateral sources in the temporal lobe just posterior to primary auditory cortex. While a single source in the right superior temporal gyrus (STG) accounted for the right hemisphere activity, two separate sources were required, one in the left transverse gyrus and the other in STG, to account for left hemisphere activity. These findings support the notion that visually driven multisensory illusory phonetic percepts produce an auditory-MMN cortical response and that left hemisphere temporal cortex plays a crucial role in this process.

  19. Seeing voices: High-density electrical mapping and source-analysis of the multisensory mismatch negativity evoked during the McGurk illusion

    PubMed Central

    Saint-Amour, Dave; De Sanctis, Pierfilippo; Molholm, Sophie; Ritter, Walter; Foxe, John J.

    2006-01-01

    Seeing a speaker’s facial articulatory gestures powerfully affects speech perception, helping us overcome noisy acoustical environments. One particularly dramatic illustration of visual influences on speech perception is the “McGurk illusion”, where dubbing an auditory phoneme onto video of an incongruent articulatory movement can often lead to illusory auditory percepts. This illusion is so strong that even in the absence of any real change in auditory stimulation, it activates the automatic auditory change-detection system, as indexed by the mismatch negativity (MMN) component of the auditory event-related potential (ERP). We investigated the putative left hemispheric dominance of McGurk-MMN using high-density ERPs in an oddball paradigm. Topographic mapping of the initial McGurk-MMN response showed a highly lateralized left hemisphere distribution, beginning at 175 ms. Subsequently, scalp activity was also observed over bilateral fronto-central scalp with a maximal amplitude at ~290 ms, suggesting later recruitment of right temporal cortices. Strong left hemisphere dominance was again observed during the last phase of the McGurk-MMN waveform (350–400 ms). Source analysis indicated bilateral sources in the temporal lobe just posterior to primary auditory cortex. While a single source in the right superior temporal gyrus (STG) accounted for the right hemisphere activity, two separate sources were required, one in the left transverse gyrus and the other in STG, to account for left hemisphere activity. These findings support the notion that visually driven multisensory illusory phonetic percepts produce an auditory-MMN cortical response and that left hemisphere temporal cortex plays a crucial role in this process. PMID:16757004

  20. A General Audiovisual Temporal Processing Deficit in Adult Readers With Dyslexia.

    PubMed

    Francisco, Ana A; Jesse, Alexandra; Groen, Margriet A; McQueen, James M

    2017-01-01

    Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories. Adult readers with dyslexia showed less sensitivity to audiovisual simultaneity than typical readers for both speech and nonspeech events. We found no differences between readers with dyslexia and typical readers in the temporal window of integration for audiovisual speech or in the audiovisual perception of phonetic categories. The results suggest an audiovisual temporal deficit in dyslexia that is not specific to speech-related events. But the differences found for audiovisual temporal sensitivity did not translate into a deficit in audiovisual speech perception. Hence, there seems to be a hiatus between simultaneity judgment and perception, suggesting a multisensory system that uses different mechanisms across tasks. Alternatively, it is possible that the audiovisual deficit in dyslexia is only observable when explicit judgments about audiovisual simultaneity are required.
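
    For readers unfamiliar with how sensitivity to audiovisual simultaneity is typically quantified in tasks like the one above, the following sketch fits a Gaussian to the proportion of "simultaneous" responses across stimulus onset asynchronies and derives a window width from the fit. The data, the Gaussian parameterization, and the full-width-at-half-maximum convention are illustrative assumptions, not the authors' pipeline.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical simultaneity-judgment data: SOA in ms (negative =
        # auditory leading) and proportion of "simultaneous" responses.
        soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], float)
        p_simul = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.90, 0.65, 0.30, 0.10])

        def gaussian(soa, amp, mu, sigma):
            """Scaled Gaussian peaking at the point of subjective simultaneity."""
            return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

        (amp, mu, sigma), _ = curve_fit(gaussian, soas, p_simul, p0=[1.0, 0.0, 150.0])

        # One common convention: window = SOA range where the fitted curve
        # exceeds half its peak, i.e. the full width at half maximum (FWHM).
        fwhm = 2 * np.sqrt(2 * np.log(2)) * sigma
        print(f"PSS = {mu:.0f} ms, binding window (FWHM) = {fwhm:.0f} ms")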

  1. Sensory processing during viewing of cinematographic material: Computational modeling and functional neuroimaging

    PubMed Central

    Bordier, Cecile; Puja, Francesco; Macaluso, Emiliano

    2013-01-01

    The investigation of brain activity using naturalistic, ecologically-valid stimuli is becoming an important challenge for neuroscience research. Several approaches have been proposed, primarily relying on data-driven methods (e.g. independent component analysis, ICA). However, data-driven methods often require some post-hoc interpretation of the imaging results to draw inferences about the underlying sensory, motor or cognitive functions. Here, we propose using a biologically-plausible computational model to extract (multi-)sensory stimulus statistics that can be used for standard hypothesis-driven analyses (general linear model, GLM). We ran two separate fMRI experiments, which both involved subjects watching an episode of a TV-series. In Exp 1, we manipulated the presentation by switching color, motion, and/or sound on and off at variable intervals, whereas in Exp 2, the video was played in the original version, with all the consequent continuous changes of the different sensory features intact. Both for vision and audition, we extracted stimulus statistics corresponding to spatial and temporal discontinuities of low-level features, as well as a combined measure related to the overall stimulus saliency. Results showed that activity in occipital visual cortex and the superior temporal auditory cortex co-varied with changes of low-level features. Visual saliency was found to further boost activity in extra-striate visual cortex plus posterior parietal cortex, while auditory saliency was found to enhance activity in the superior temporal cortex. Data-driven ICA analyses of the same datasets also identified “sensory” networks comprising visual and auditory areas, but without providing specific information about the possible underlying processes, e.g., these processes could relate to modality, stimulus features and/or saliency. We conclude that the combination of computational modeling and GLM enables the tracking of the impact of bottom–up signals on brain activity during viewing of complex and dynamic multisensory stimuli, beyond the capability of purely data-driven approaches. PMID:23202431
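
    The modeling-plus-GLM logic of this record can be made concrete with a minimal sketch: model-derived stimulus statistics are convolved with a hemodynamic response function and entered as regressors in an ordinary least-squares GLM. The feature time courses, the single-gamma HRF, and the scan parameters below are simplified assumptions, not the authors' actual model.

        import numpy as np

        rng = np.random.default_rng(1)
        tr, n_vols = 2.0, 300                       # hypothetical scan parameters

        # Hypothetical per-volume stimulus statistics from a computational model:
        # visual discontinuity, auditory discontinuity, overall saliency.
        features = rng.random((n_vols, 3))

        # Simple single-gamma HRF (a crude stand-in for a canonical HRF).
        t = np.arange(0, 30, tr)
        hrf = (t ** 5) * np.exp(-t)
        hrf /= hrf.sum()

        # Convolve each feature with the HRF to build the design matrix,
        # plus an intercept column.
        X = np.column_stack([np.convolve(features[:, j], hrf)[:n_vols]
                             for j in range(features.shape[1])])
        X = np.column_stack([X, np.ones(n_vols)])

        # One voxel's (simulated) time series and its OLS beta estimates.
        y = X @ np.array([1.5, 0.2, 0.8, 10.0]) + rng.normal(scale=0.5, size=n_vols)
        betas, *_ = np.linalg.lstsq(X, y, rcond=None)
        print("estimated betas:", np.round(betas, 2))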

  2. How music alters a kiss: superior temporal gyrus controls fusiform-amygdalar effective connectivity.

    PubMed

    Pehrs, Corinna; Deserno, Lorenz; Bakels, Jan-Hendrik; Schlochtermeier, Lorna H; Kappelhoff, Hermann; Jacobs, Arthur M; Fritz, Thomas Hans; Koelsch, Stefan; Kuchinke, Lars

    2014-11-01

    While watching movies, the brain integrates the visual information and the musical soundtrack into a coherent percept. Multisensory integration can lead to emotion elicitation on which soundtrack valences may have a modulatory impact. Here, dynamic kissing scenes from romantic comedies were presented to 22 participants (13 females) during functional magnetic resonance imaging scanning. The kissing scenes were either accompanied by happy music, sad music or no music. Evidence from cross-modal studies motivated a predefined three-region network for multisensory integration of emotion, consisting of fusiform gyrus (FG), amygdala (AMY) and anterior superior temporal gyrus (aSTG). The interactions in this network were investigated using dynamic causal models of effective connectivity. This revealed bilinear modulations by happy and sad music with suppression effects on the connectivity from FG and AMY to aSTG. Non-linear dynamic causal modeling showed a suppressive gating effect of aSTG on fusiform-amygdalar connectivity. In conclusion, fusiform to amygdala coupling strength is modulated via feedback through aSTG as region for multisensory integration of emotional material. This mechanism was emotion-specific and more pronounced for sad music. Therefore, soundtrack valences may modulate emotion elicitation in movies by differentially changing preprocessed visual information to the amygdala. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  3. How music alters a kiss: superior temporal gyrus controls fusiform–amygdalar effective connectivity

    PubMed Central

    Deserno, Lorenz; Bakels, Jan-Hendrik; Schlochtermeier, Lorna H.; Kappelhoff, Hermann; Jacobs, Arthur M.; Fritz, Thomas Hans; Koelsch, Stefan; Kuchinke, Lars

    2014-01-01

    While watching movies, the brain integrates the visual information and the musical soundtrack into a coherent percept. Multisensory integration can lead to emotion elicitation on which soundtrack valences may have a modulatory impact. Here, dynamic kissing scenes from romantic comedies were presented to 22 participants (13 females) during functional magnetic resonance imaging scanning. The kissing scenes were either accompanied by happy music, sad music or no music. Evidence from cross-modal studies motivated a predefined three-region network for multisensory integration of emotion, consisting of fusiform gyrus (FG), amygdala (AMY) and anterior superior temporal gyrus (aSTG). The interactions in this network were investigated using dynamic causal models of effective connectivity. This revealed bilinear modulations by happy and sad music with suppression effects on the connectivity from FG and AMY to aSTG. Non-linear dynamic causal modeling showed a suppressive gating effect of aSTG on fusiform–amygdalar connectivity. In conclusion, fusiform to amygdala coupling strength is modulated via feedback through aSTG as region for multisensory integration of emotional material. This mechanism was emotion-specific and more pronounced for sad music. Therefore, soundtrack valences may modulate emotion elicitation in movies by differentially changing preprocessed visual information to the amygdala. PMID:24298171
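
    For context, the "bilinear modulations" and non-linear gating reported in the two records above refer to the standard dynamic causal modeling state equation, reproduced here in generic form (the specific matrices estimated by the authors are not given in the abstract):

        \dot{z} = \Big( A + \sum_j u_j B^{(j)} + \sum_k z_k D^{(k)} \Big) z + C u

    Here z is the neural state of the FG/AMY/aSTG nodes, A encodes fixed endogenous connectivity, each B^{(j)} encodes how experimental input u_j (here, happy or sad music) modulates specific connections, C encodes driving inputs, and the D^{(k)} terms of the non-linear extension allow activity in one region (here, aSTG) to gate connections among the others.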

  4. The Race that Precedes Coactivation: Development of Multisensory Facilitation in Children

    ERIC Educational Resources Information Center

    Barutchu, Ayla; Crewther, David P.; Crewther, Sheila G.

    2009-01-01

    Rationale: The facilitating effect of multisensory integration on motor responses in adults is much larger than predicted by race-models and is in accordance with the idea of coactivation. However, the development of multisensory facilitation of endogenously driven motor processes and its relationship to the development of complex cognitive skills…

  5. Developmental trends in the facilitation of multisensory objects with distractors

    PubMed Central

    Downing, Harriet C.; Barutchu, Ayla; Crewther, Sheila G.

    2015-01-01

    Sensory integration and the ability to discriminate target objects from distractors are critical to survival, yet the developmental trajectories of these abilities are unknown. This study investigated developmental changes in 9- (n = 18) and 11-year-old (n = 20) children, adolescents (n = 19) and adults (n = 22) using an audiovisual object discrimination task with uni- and multisensory distractors. Reaction times (RTs) were slower with visual/audiovisual distractors, and although all groups demonstrated facilitation of multisensory RTs in these conditions, children's and adolescents' responses corresponded to fewer race model violations than adults', suggesting protracted maturation of multisensory processes. Multisensory facilitation could not be explained by changes in RT variability, suggesting that tests of race model violations may still have theoretical value at least for familiar multisensory stimuli. PMID:25653630
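
    Race-model violations like those referenced in this and the preceding record are conventionally tested with Miller's (1982) inequality, P(RT <= t | AV) <= P(RT <= t | A) + P(RT <= t | V). A minimal sketch with synthetic reaction times follows; the percentile grid and the area summary (the "geometric" measure that records 7 and 8 below also use) are a simplified illustration, not any of these authors' exact procedures.

        import numpy as np

        rng = np.random.default_rng(2)
        # Hypothetical reaction times (ms) for auditory, visual, audiovisual trials.
        rt_a = rng.normal(380, 60, 200)
        rt_v = rng.normal(420, 70, 200)
        rt_av = rng.normal(330, 50, 200)

        def ecdf(rts, ts):
            """Empirical cumulative distribution of RTs evaluated at times ts."""
            return np.searchsorted(np.sort(rts), ts, side="right") / rts.size

        ts = np.linspace(200, 600, 81)
        bound = np.minimum(ecdf(rt_a, ts) + ecdf(rt_v, ts), 1.0)  # Miller's bound
        violation = ecdf(rt_av, ts) - bound

        # Positive values indicate race-model violations; their area over time
        # is one geometric summary of multisensory coactivation.
        area = np.trapz(np.clip(violation, 0, None), ts)
        print(f"max violation = {violation.max():.3f}, area = {area:.1f}")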

  6. Incidental category learning and cognitive load in a multisensory environment across childhood.

    PubMed

    Broadbent, H J; Osborne, T; Rea, M; Peng, A; Mareschal, D; Kirkham, N Z

    2018-06-01

    Multisensory information has been shown to facilitate learning (Bahrick & Lickliter, 2000; Broadbent, White, Mareschal, & Kirkham, 2017; Jordan & Baker, 2011; Shams & Seitz, 2008). However, although research has examined the modulating effect of unisensory and multisensory distractors on multisensory processing, the extent to which a concurrent unisensory or multisensory cognitive load task would interfere with or support multisensory learning remains unclear. This study examined the role of concurrent task modality on incidental category learning in 6- to 10-year-olds. Participants were engaged in a multisensory learning task while also performing either a unisensory (visual or auditory only) or multisensory (audiovisual) concurrent task (CT). We found that engaging in an auditory CT led to poorer performance on incidental category learning compared with an audiovisual or visual CT, across groups. In 6-year-olds, category test performance was at chance in the auditory-only CT condition, suggesting auditory concurrent tasks may interfere with learning in younger children, but the addition of visual information may serve to focus attention. These findings provide novel insight into the use of multisensory concurrent information on incidental learning. Implications for the deployment of multisensory learning tasks within education across development and developmental changes in modality dominance and ability to switch flexibly across modalities are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity

    PubMed Central

    Gibney, Kyla D.; Aligbe, Enimielen; Eggleston, Brady A.; Nunes, Sarah R.; Kerkhoff, Willa G.; Dean, Cassandra L.; Kwakye, Leslie D.

    2017-01-01

    The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information. PMID:28163675

  8. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity.

    PubMed

    Gibney, Kyla D; Aligbe, Enimielen; Eggleston, Brady A; Nunes, Sarah R; Kerkhoff, Willa G; Dean, Cassandra L; Kwakye, Leslie D

    2017-01-01

    The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller's inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information.

  9. Assessing the Role of the 'Unity Assumption' on Multisensory Integration: A Review.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2017-01-01

    There has been longstanding interest from both experimental psychologists and cognitive neuroscientists in the potential modulatory role of various top-down factors on multisensory integration/perception in humans. One such top-down influence, often referred to in the literature as the 'unity assumption,' is thought to occur in those situations in which an observer considers that various of the unisensory stimuli that they have been presented with belong to one and the same object or event (Welch and Warren, 1980). Here, we review the possible factors that may lead to the emergence of the unity assumption. We then critically evaluate the evidence concerning the consequences of the unity assumption from studies of the spatial and temporal ventriloquism effects, from the McGurk effect, and from the Colavita visual dominance paradigm. The research that has been published to date using these tasks provides support for the claim that the unity assumption influences multisensory perception under at least a subset of experimental conditions. We then consider whether the notion has been superseded in recent years by the introduction of priors in Bayesian causal inference models of human multisensory perception. We suggest that the prior of common cause (that is, the prior concerning whether multisensory signals originate from the same source or not) offers the most useful way to quantify the unity assumption as a continuous cognitive variable.
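
    The "prior of common cause" mentioned at the end of this record can be stated concretely. In the standard Gaussian causal-inference model (e.g. Körding et al., 2007), the posterior probability that auditory and visual cues share a single source has a closed form; the sketch below implements it, with arbitrary illustrative noise and prior parameters that are assumptions of this example rather than values from the review.

        import numpy as np

        def p_common(x_v, x_a, sigma_v, sigma_a, sigma_p, mu_p=0.0, prior_c1=0.5):
            """Posterior probability of a common cause for one visual/auditory
            cue pair, under the Gaussian causal-inference model."""
            # Likelihood of the cue pair given a single shared source.
            var1 = (sigma_v**2 * sigma_a**2 + sigma_v**2 * sigma_p**2
                    + sigma_a**2 * sigma_p**2)
            num1 = ((x_v - x_a)**2 * sigma_p**2 + (x_v - mu_p)**2 * sigma_a**2
                    + (x_a - mu_p)**2 * sigma_v**2)
            like_c1 = np.exp(-0.5 * num1 / var1) / (2 * np.pi * np.sqrt(var1))
            # Likelihood given two independent sources.
            like_c2 = (np.exp(-0.5 * (x_v - mu_p)**2 / (sigma_v**2 + sigma_p**2))
                       * np.exp(-0.5 * (x_a - mu_p)**2 / (sigma_a**2 + sigma_p**2))
                       / (2 * np.pi * np.sqrt((sigma_v**2 + sigma_p**2)
                                              * (sigma_a**2 + sigma_p**2))))
            return (like_c1 * prior_c1
                    / (like_c1 * prior_c1 + like_c2 * (1 - prior_c1)))

        # Nearby cues favor a common cause; discrepant cues favor separate causes.
        print(p_common(x_v=2.0, x_a=3.0, sigma_v=2.0, sigma_a=4.0, sigma_p=10.0))
        print(p_common(x_v=2.0, x_a=20.0, sigma_v=2.0, sigma_a=4.0, sigma_p=10.0))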

  10. The vestibular system: a spatial reference for bodily self-consciousness

    PubMed Central

    Pfeiffer, Christian; Serino, Andrea; Blanke, Olaf

    2014-01-01

    Self-consciousness is the remarkable human experience of being a subject: the “I”. Self-consciousness is typically bound to a body, and particularly to the spatial dimensions of the body, as well as to its location and displacement in the gravitational field. Because the vestibular system encodes head position and movement in three-dimensional space, vestibular cortical processing likely contributes to spatial aspects of bodily self-consciousness. We review here recent data showing vestibular effects on first-person perspective (the feeling from where “I” experience the world) and self-location (the feeling where “I” am located in space). We compare these findings to data showing vestibular effects on mental spatial transformation, self-motion perception, and body representation showing vestibular contributions to various spatial representations of the body with respect to the external world. Finally, we discuss the role for four posterior brain regions that process vestibular and other multisensory signals to encode spatial aspects of bodily self-consciousness: temporoparietal junction, parietoinsular vestibular cortex, ventral intraparietal region, and medial superior temporal region. We propose that vestibular processing in these cortical regions is critical in linking multisensory signals from the body (personal and peripersonal space) with external (extrapersonal) space. Therefore, the vestibular system plays a critical role for neural representations of spatial aspects of bodily self-consciousness. PMID:24860446

  11. Prediction and constraint in audiovisual speech perception.

    PubMed

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Multisensory integration mechanisms during aging

    PubMed Central

    Freiherr, Jessica; Lundström, Johan N.; Habel, Ute; Reetz, Kathrin

    2013-01-01

    The rapid demographic shift occurring in our society implies that understanding healthy aging and age-related diseases is one of our major future challenges. Sensory impairments have an enormous impact on our lives and are closely linked to cognitive functioning. Because sensory perception is inherently complex, we are commonly presented with complex multisensory stimulation, and the brain integrates the information from the individual sensory channels into a unique and holistic percept. The cerebral processes involved are essential for our perception of sensory stimuli and become especially important during the perception of emotional content. Despite ongoing deterioration of the individual sensory systems during aging, there is evidence for an increase in, or maintenance of, multisensory integration processing in aging individuals. In this comprehensive literature review on multisensory integration, we aim to highlight basic mechanisms and potential compensatory strategies the human brain utilizes to maintain multisensory integration capabilities during healthy aging, so as to facilitate a broader understanding of age-related pathological conditions. A further goal is to identify where more research is needed. PMID:24379773

  13. Ventral and dorsal streams processing visual motion perception (FDG-PET study)

    PubMed Central

    2012-01-01

    Background: Earlier functional imaging studies on visually induced self-motion perception (vection) disclosed a bilateral network of activations within primary and secondary visual cortex areas which was combined with signal decreases, i.e., deactivations, in multisensory vestibular cortex areas. This finding led to the concept of a reciprocal inhibitory interaction between the visual and vestibular systems. In order to define areas involved in special aspects of self-motion perception such as intensity and duration of the perceived circular vection (CV) or the amount of head tilt, correlation analyses of the regional cerebral glucose metabolism, rCGM (measured by fluorodeoxyglucose positron-emission tomography, FDG-PET), and these perceptual covariates were performed in 14 healthy volunteers. For analyses of the visual-vestibular interaction, the CV data were compared to a random dot motion stimulation condition (not inducing vection) and a control group at rest (no stimulation at all).

    Results: Group subtraction analyses showed that the visual-vestibular interaction was modified during CV, i.e., the activations within the cerebellar vermis and parieto-occipital areas were enhanced. The correlation analysis between the rCGM and the intensity of visually induced vection, experienced as body tilt, showed a relationship for areas of the multisensory vestibular cortical network (inferior parietal lobule bilaterally, anterior cingulate gyrus), the medial parieto-occipital cortex, the frontal eye fields and the cerebellar vermis. The “earlier” multisensory vestibular areas like the parieto-insular vestibular cortex and the superior temporal gyrus did not appear in the latter analysis. The duration of perceived vection after stimulus stop was positively correlated with rCGM in medial temporal lobe areas bilaterally, which included the (para-)hippocampus, known to be involved in various aspects of memory processing. The amount of head tilt was found to be positively correlated with the rCGM of bilateral basal ganglia regions responsible for the control of motor function of the head.

    Conclusions: Our data gave further insights into subfunctions within the complex cortical network involved in the processing of visual-vestibular interaction during CV. Specific areas of this cortical network could be attributed to the ventral stream (“what” pathway) responsible for the duration after stimulus stop and to the dorsal stream (“where/how” pathway) responsible for intensity aspects. PMID:22800430

  14. Visual and Haptic Shape Processing in the Human Brain: Unisensory Processing, Multisensory Convergence, and Top-Down Influences.

    PubMed

    Lee Masson, Haemy; Bulthé, Jessica; Op de Beeck, Hans P; Wallraven, Christian

    2016-08-01

    Humans are highly adept at multisensory processing of object shape in both vision and touch. Previous studies have mostly focused on where visually perceived object-shape information can be decoded, with haptic shape processing receiving less attention. Here, we investigate visuo-haptic shape processing in the human brain using multivoxel correlation analyses. Importantly, we use tangible, parametrically defined novel objects as stimuli. Two groups of participants first performed either a visual or haptic similarity-judgment task. The resulting perceptual object-shape spaces were highly similar and matched the physical parameter space. In a subsequent fMRI experiment, objects were first compared within the learned modality and then in the other modality in a one-back task. When correlating neural similarity spaces with perceptual spaces, visually perceived shape was decoded well in the occipital lobe along with the ventral pathway, whereas haptically perceived shape information was mainly found in the parietal lobe, including frontal cortex. Interestingly, ventrolateral occipito-temporal cortex decoded shape in both modalities, highlighting this as an area capable of detailed visuo-haptic shape processing. Finally, we found haptic shape representations in early visual cortex (in the absence of visual input), when participants switched from visual to haptic exploration, suggesting top-down involvement of visual imagery on haptic shape processing. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. Nonvisual influences on visual-information processing in the superior colliculus.

    PubMed

    Stein, B E; Jiang, W; Wallace, M T; Stanford, T R

    2001-01-01

    Although visually responsive neurons predominate in the deep layers of the superior colliculus (SC), the majority of them also receive sensory inputs from nonvisual sources (i.e. auditory and/or somatosensory). Most of these 'multisensory' neurons are able to synthesize their cross-modal inputs and, as a consequence, their responses to visual stimuli can be profoundly enhanced or depressed in the presence of a nonvisual cue. Whether response enhancement or response depression is produced by this multisensory interaction is predictable based on several factors. These include: the organization of a neuron's visual and nonvisual receptive fields; the relative spatial relationships of the different stimuli (to their respective receptive fields and to one another); and whether or not the neuron is innervated by a select population of cortical neurons. The response enhancement or depression of SC neurons via multisensory integration has significant survival value via its profound impact on overt attentive/orientation behaviors. Nevertheless, these multisensory processes are not present at birth, and require an extensive period of postnatal maturation. It seems likely that the sensory experiences obtained during this period play an important role in crafting the processes underlying these multisensory interactions.

  16. Hemispheric asymmetry: Looking for a novel signature of the modulation of spatial attention in multisensory processing.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2017-06-01

    The extent to which attention modulates multisensory processing in a top-down fashion is still a subject of debate among researchers. Typically, cognitive psychologists interested in this question have manipulated the participants' attention in terms of single/dual tasking or focal/divided attention between sensory modalities. We suggest an alternative approach, one that builds on the extensive older literature highlighting hemispheric asymmetries in the distribution of spatial attention. Specifically, spatial attention in vision, audition, and touch is typically biased preferentially toward the right hemispace, especially under conditions of high perceptual load. We review the evidence demonstrating such an attentional bias toward the right in extinction patients and healthy adults, along with the evidence of such rightward-biased attention in multisensory experimental settings. We then evaluate those studies that have demonstrated either a more pronounced multisensory effect in right than in left hemispace, or else similar effects in the two hemispaces. The results suggest that the influence of rightward-biased attention is more likely to be observed when the crossmodal signals interact at later stages of information processing and under conditions of higher perceptual load; that is, conditions under which attention is perhaps a compulsory enhancer of information processing. We therefore suggest that the spatial asymmetry in attention may provide a useful signature of top-down attentional modulation in multisensory processing.

  17. Music acquisition: effects of enculturation and formal training on development.

    PubMed

    Hannon, Erin E; Trainor, Laurel J

    2007-11-01

    Musical structure is complex, consisting of a small set of elements that combine to form hierarchical levels of pitch and temporal structure according to grammatical rules. As with language, different systems use different elements and rules for combination. Drawing on recent findings, we propose that music acquisition begins with basic features, such as peripheral frequency-coding mechanisms and multisensory timing connections, and proceeds through enculturation, whereby everyday exposure to a particular music system creates, in a systematic order of acquisition, culture-specific brain structures and representations. Finally, we propose that formal musical training invokes domain-specific processes that affect salience of musical input and the amount of cortical tissue devoted to its processing, as well as domain-general processes of attention and executive functioning.

  18. Early Visual Deprivation Alters Multisensory Processing in Peripersonal Space

    ERIC Educational Resources Information Center

    Collignon, Olivier; Charbonneau, Genevieve; Lassonde, Maryse; Lepore, Franco

    2009-01-01

    Multisensory peripersonal space develops in a maturational process that is thought to be influenced by early sensory experience. We investigated the role of vision in the effective development of audiotactile interactions in peripersonal space. Early blind (EB), late blind (LB) and sighted control (SC) participants were asked to lateralize…

  19. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    PubMed

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  20. Early multisensory interactions affect the competition among multiple visual objects.

    PubMed

    Van der Burg, Erik; Talsma, Durk; Olivers, Christian N L; Hickey, Clayton; Theeuwes, Jan

    2011-04-01

    In dynamic cluttered environments, audition and vision may benefit from each other in determining what deserves further attention and what does not. We investigated the underlying neural mechanisms responsible for attentional guidance by audiovisual stimuli in such an environment. Event-related potentials (ERPs) were measured during visual search through dynamic displays consisting of line elements that randomly changed orientation. Search accuracy improved when a target orientation change was synchronized with an auditory signal as compared to when the auditory signal was absent or synchronized with a distractor orientation change. The ERP data show that behavioral benefits were related to an early multisensory interaction over left parieto-occipital cortex (50-60 ms post-stimulus onset), which was followed by an early positive modulation (80-100 ms) over occipital and temporal areas contralateral to the audiovisual event, an enhanced N2pc (210-250 ms), and a contralateral negative slow wave (CNSW). The early multisensory interaction was correlated with behavioral search benefits, indicating that participants with a strong multisensory interaction benefited the most from the synchronized auditory signal. We suggest that an auditory signal enhances the neural response to a synchronized visual event, which increases the chances of selection in a multiple object environment. Copyright © 2010 Elsevier Inc. All rights reserved.

  1. Beta/Gamma Oscillations and Event-Related Potentials Indicate Aberrant Multisensory Processing in Schizophrenia

    PubMed Central

    Balz, Johanna; Roa Romero, Yadira; Keil, Julian; Krebber, Martin; Niedeggen, Michael; Gallinat, Jürgen; Senkowski, Daniel

    2016-01-01

    Recent behavioral and neuroimaging studies have suggested multisensory processing deficits in patients with schizophrenia (SCZ). Thus far, the neural mechanisms underlying these deficits are not well understood. Previous studies with unisensory stimulation have shown altered neural oscillations in SCZ. As such, altered oscillations could contribute to aberrant multisensory processing in this patient group. To test this assumption, we conducted an electroencephalography (EEG) study in 15 SCZ and 15 control participants in whom we examined neural oscillations and event-related potentials (ERPs) in the sound-induced flash illusion (SIFI). In the SIFI, multiple auditory stimuli presented alongside a single visual stimulus can induce the illusory percept of multiple visual stimuli. In SCZ and control participants, we compared ERPs and neural oscillations between trials that induced an illusion and trials that did not induce an illusion. On the behavioral level, SCZ (55.7%) and control participants (55.4%) did not significantly differ in illusion rates. The analysis of ERPs revealed diminished amplitudes and altered multisensory processing in SCZ compared to controls around 135 ms after stimulus onset. Moreover, the analysis of neural oscillations revealed altered 25–35 Hz power between 100 and 150 ms over occipital scalp for SCZ compared to controls. Our findings extend previous observations of aberrant neural oscillations in unisensory perception paradigms. They suggest that altered ERPs and altered occipital beta/gamma band power reflect aberrant multisensory processing in SCZ. PMID:27999553
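
    The occipital 25-35 Hz power measure reported above can be approximated with a standard band-pass-plus-Hilbert analysis. The sketch below is an illustrative Python stand-in (synthetic single-channel epochs, arbitrary sampling rate and epoch timing); the authors' actual pipeline may have used a different time-frequency decomposition, such as wavelets.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 500.0
        rng = np.random.default_rng(4)
        # Hypothetical occipital-channel epochs: trials x samples, -0.5..1.5 s.
        trials = rng.normal(size=(100, 1000))

        # Band-pass 25-35 Hz, then take instantaneous power via the Hilbert
        # envelope on each trial.
        b, a = butter(4, [25, 35], btype="bandpass", fs=fs)
        filtered = filtfilt(b, a, trials, axis=1)
        power = np.abs(hilbert(filtered, axis=1)) ** 2

        # Average power in the 100-150 ms post-stimulus window.
        times = np.arange(trials.shape[1]) / fs - 0.5
        mask = (times >= 0.100) & (times <= 0.150)
        print("mean 25-35 Hz power, 100-150 ms:", power[:, mask].mean())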

  2. A Double Dissociation between Anterior and Posterior Superior Temporal Gyrus for Processing Audiovisual Speech Demonstrated by Electrocorticography.

    PubMed

    Ozker, Muge; Schepers, Inga M; Magnotti, John F; Yoshor, Daniel; Beauchamp, Michael S

    2017-06-01

    Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl's gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech.
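
    The Bayesian prediction invoked in this record is the standard variance-reduction result for reliability-weighted integration of independent Gaussian cues:

        \sigma_{AV}^{2} = \frac{\sigma_{A}^{2}\,\sigma_{V}^{2}}{\sigma_{A}^{2} + \sigma_{V}^{2}} \le \min\big(\sigma_{A}^{2}, \sigma_{V}^{2}\big)

    so a region that truly integrates the two inputs should respond to audiovisual words with less trial-to-trial variability than a region relying on either input alone, which is the signature the authors tested in posterior STG.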

  3. Severe Cross-Modal Object Recognition Deficits in Rats Treated Sub-Chronically with NMDA Receptor Antagonists are Reversed by Systemic Nicotine: Implications for Abnormal Multisensory Integration in Schizophrenia

    PubMed Central

    Jacklin, Derek L; Goel, Amit; Clementino, Kyle J; Hall, Alexander W M; Talpos, John C; Winters, Boyer D

    2012-01-01

    Schizophrenia is a complex and debilitating disorder, characterized by positive, negative, and cognitive symptoms. Among the cognitive deficits observed in patients with schizophrenia, recent work has indicated abnormalities in multisensory integration, a process that is important for the formation of comprehensive environmental percepts and for the appropriate guidance of behavior. Very little is known about the neural bases of such multisensory integration deficits, partly because of the lack of viable behavioral tasks to assess this process in animal models. In this study, we used our recently developed rodent cross-modal object recognition (CMOR) task to investigate multisensory integration functions in rats treated sub-chronically with one of two N-methyl-D-aspartate receptor (NMDAR) antagonists, MK-801, or ketamine; such treatment is known to produce schizophrenia-like symptoms. Rats treated with the NMDAR antagonists were impaired on the standard spontaneous object recognition (SOR) task, unimodal (tactile or visual only) versions of SOR, and the CMOR task with intermediate to long retention delays between acquisition and testing phases, but they displayed a selective CMOR task deficit when mnemonic demand was minimized. This selective impairment in multisensory information processing was dose-dependently reversed by acute systemic administration of nicotine. These findings suggest that persistent NMDAR hypofunction may contribute to the multisensory integration deficits observed in patients with schizophrenia and highlight the valuable potential of the CMOR task to facilitate further systematic investigation of the neural bases of, and potential treatments for, this hitherto overlooked aspect of cognitive dysfunction in schizophrenia. PMID:22669170

  4. The question of simultaneity in multisensory integration

    NASA Astrophysics Data System (ADS)

    Leone, Lynnette; McCourt, Mark E.

    2012-03-01

    Early reports of audiovisual (AV) multisensory integration (MI) indicated that unisensory stimuli must evoke simultaneous physiological responses to produce decreases in reaction time (RT), such that for unisensory stimuli with unequal RTs the stimulus eliciting the faster RT had to be delayed relative to the stimulus eliciting the slower RT. The "temporal rule" states that MI depends on the temporal proximity of unisensory stimuli, the neural responses to which must fall within a window of integration. Ecological validity demands that MI should occur only for simultaneous events (which may give rise to non-simultaneous neural activations). However, spurious neural response simultaneities which are unrelated to singular environmental multisensory occurrences must somehow be rejected. Using an RT/race-model paradigm, we measured AV MI as a function of stimulus onset asynchrony (SOA: ±200 ms in 50 ms steps) under fully dark-adapted conditions for visual (V) stimuli that were either weak (scotopic 525 nm flashes; 511 ms mean RT) or strong (photopic 630 nm flashes; 356 ms mean RT). Auditory (A) stimulus (1000 Hz pure tone) intensity was constant. Despite the 155 ms slower mean RT to the scotopic versus photopic stimulus, facilitative AV MI in both conditions nevertheless occurred exclusively at an SOA of 0 ms. Thus, facilitative MI demands both physical and physiological simultaneity. We consider the mechanisms by which the nervous system may take account of variations in response latency arising from changes in stimulus intensity in order to selectively integrate only those physiological simultaneities that arise from physical simultaneities.

  5. Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection.

    PubMed

    Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica

    2015-09-01

    Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration.

  6. Parietal disruption alters audiovisual binding in the sound-induced flash illusion.

    PubMed

    Kamke, Marc R; Vieth, Harrison E; Cottrell, David; Mattingley, Jason B

    2012-09-01

    Selective attention and multisensory integration are fundamental to perception, but little is known about whether, or under what circumstances, these processes interact to shape conscious awareness. Here, we used transcranial magnetic stimulation (TMS) to investigate the causal role of attention-related brain networks in multisensory integration between visual and auditory stimuli in the sound-induced flash illusion. The flash illusion is a widely studied multisensory phenomenon in which a single flash of light is falsely perceived as multiple flashes in the presence of irrelevant sounds. We investigated the hypothesis that extrastriate regions involved in selective attention, specifically within the right parietal cortex, exert an influence on the multisensory integrative processes that cause the flash illusion. We found that disruption of the right angular gyrus, but not of the adjacent supramarginal gyrus or of a sensory control site, enhanced participants' veridical perception of the multisensory events, thereby reducing their susceptibility to the illusion. Our findings suggest that the same parietal networks that normally act to enhance perception of attended events also play a role in the binding of auditory and visual stimuli in the sound-induced flash illusion. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. Delayed audiovisual integration of patients with mild cognitive impairment and Alzheimer's disease compared with normal aged controls.

    PubMed

    Wu, Jinglong; Yang, Jiajia; Yu, Yinghua; Li, Qi; Nakamura, Naoya; Shen, Yong; Ohta, Yasuyuki; Yu, Shengyuan; Abe, Koji

    2012-01-01

    The human brain can anatomically combine task-relevant information from different sensory pathways to form a unified perception; this process is called multisensory integration. The aim of the present study was to test whether the multisensory integration abilities of patients with mild cognitive impairment (MCI) and Alzheimer's disease (AD) differed from those of normal aged controls (NC). A total of 64 subjects were divided into three groups: NC individuals (n = 24), MCI patients (n = 19), and probable AD patients (n = 21). All of the subjects were asked to perform three separate audiovisual integration tasks and were instructed to press the response key associated with the auditory, visual, or audiovisual stimuli in the three tasks. The accuracy and response time (RT) of each task were measured, and the RTs were analyzed using cumulative distribution functions to observe the audiovisual integration. Our results suggest that the mean RT of patients with AD was significantly longer than those of patients with MCI and NC individuals. Interestingly, we found that patients with both MCI and AD exhibited adequate audiovisual integration, and a greater peak (time bin with the highest percentage of benefit) and broader temporal window (time duration of benefit) of multisensory enhancement were observed. However, the onset time and peak benefit of audiovisual integration in MCI and AD patients occurred significantly later than did those of the NC. This finding indicates that the cognitive functional deficits of patients with MCI and AD contribute to the differences in performance enhancements of audiovisual integration compared with NC.

  8. Delayed Audiovisual Integration of Patients with Mild Cognitive Impairment and Alzheimer’s Disease Compared with Normal Aged Controls

    PubMed Central

    Wu, Jinglong; Yang, Jiajia; Yu, Yinghua; Li, Qi; Nakamura, Naoya; Shen, Yong; Ohta, Yasuyuki; Yu, Shengyuan; Abe, Koji

    2013-01-01

    The human brain can anatomically combine task-relevant information from different sensory pathways to form a unified perception; this process is called multisensory integration. The aim of the present study was to test whether the multisensory integration abilities of patients with mild cognitive impairment (MCI) and Alzheimer’s disease (AD) differed from those of normal aged controls (NC). A total of 64 subjects were divided into three groups: NC individuals (n = 24), MCI patients (n = 19), and probable AD patients (n = 21). All of the subjects were asked to perform three separate audiovisual integration tasks and were instructed to press the response key associated with the auditory, visual, or audiovisual stimuli in the three tasks. The accuracy and response time (RT) of each task were measured, and the RTs were analyzed using cumulative distribution functions to observe the audiovisual integration. Our results suggest that the mean RT of patients with AD was significantly longer than those of patients with MCI and NC individuals. Interestingly, we found that patients with both MCI and AD exhibited adequate audiovisual integration, and a greater peak (time bin with the highest percentage of benefit) and broader temporal window (time duration of benefit) of multisensory enhancement were observed. However, the onset time and peak benefit of audiovisual integration in MCI and AD patients occurred significantly later than did those of the NC. This finding indicates that the cognitive functional deficits of patients with MCI and AD contribute to the differences in performance enhancements of audiovisual integration compared with NC. PMID:22810093
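
    The "peak" and "temporal window" of enhancement reported in the two records above are derived from cumulative RT distributions. A compact sketch of one common variant follows, using synthetic RTs and an independent-race reference distribution (rather than the summed Miller bound shown earlier in this listing); the details may differ from the authors' exact procedure.

        import numpy as np

        rng = np.random.default_rng(3)
        rt_a = rng.normal(520, 90, 300)   # hypothetical auditory RTs (ms)
        rt_v = rng.normal(560, 95, 300)   # hypothetical visual RTs (ms)
        rt_av = rng.normal(470, 80, 300)  # hypothetical audiovisual RTs (ms)

        def ecdf(rts, ts):
            """Empirical cumulative distribution of RTs evaluated at times ts."""
            return np.searchsorted(np.sort(rts), ts, side="right") / rts.size

        ts = np.arange(300, 801, 10)      # 10 ms time bins
        # Benefit: audiovisual CDF minus the CDF expected from two independent
        # unisensory races, evaluated bin by bin.
        race = ecdf(rt_a, ts) + ecdf(rt_v, ts) - ecdf(rt_a, ts) * ecdf(rt_v, ts)
        benefit = ecdf(rt_av, ts) - race

        peak_bin = ts[np.argmax(benefit)]  # time bin of peak benefit
        pos = ts[benefit > 0]              # bins with any enhancement
        if pos.size:
            print(f"peak benefit at {peak_bin} ms; "
                  f"window {pos.min()}-{pos.max()} ms")
        else:
            print("no multisensory enhancement")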

  9. Audiovisual Temporal Processing and Synchrony Perception in the Rat.

    PubMed

    Schormans, Ashley L; Scott, Kaela E; Vo, Albert M Q; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L

    2016-01-01

    Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer's ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be "visual first" for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20-40 ms. Ultimately, given that our behavioral and electrophysiological results were consistent with studies conducted on human participants and previous recordings made in multisensory brain regions of different species, we suggest that the rat represents an effective model for studying audiovisual temporal synchrony at both the neuronal and perceptual level.

  10. Audiovisual Temporal Processing and Synchrony Perception in the Rat

    PubMed Central

    Schormans, Ashley L.; Scott, Kaela E.; Vo, Albert M. Q.; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L.

    2017-01-01

    Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer’s ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be “visual first” for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20–40 ms. Ultimately, given that our behavioral and electrophysiological results were consistent with studies conducted on human participants and previous recordings made in multisensory brain regions of different species, we suggest that the rat represents an effective model for studying audiovisual temporal synchrony at both the neuronal and perceptual level. PMID:28119580
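
    The JND values reported above (77-122 ms) come from temporal-order-judgment psychometric functions. A minimal sketch of the standard estimation follows: fit a cumulative Gaussian to the proportion of "visual first" reports across SOAs, take the point of subjective simultaneity (PSS) as its mean and the JND as half the 25-75% span. The response data below are fabricated for illustration and do not come from the study.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        # Hypothetical TOJ data: SOA in ms (positive = visual leading) and the
        # proportion of "visual first" responses at each SOA.
        soas = np.array([-200, -100, -50, -20, 0, 20, 50, 100, 200], float)
        p_vfirst = np.array([0.04, 0.10, 0.25, 0.40, 0.52, 0.63, 0.78, 0.92, 0.97])

        def cum_gauss(soa, mu, sigma):
            """Cumulative Gaussian psychometric function."""
            return norm.cdf(soa, loc=mu, scale=sigma)

        (mu, sigma), _ = curve_fit(cum_gauss, soas, p_vfirst, p0=[0.0, 80.0])

        # PSS: SOA at 50% "visual first". JND: half the 25%-75% span, which
        # for a cumulative Gaussian equals sigma * z(0.75).
        jnd = sigma * norm.ppf(0.75)
        print(f"PSS = {mu:.0f} ms, JND = {jnd:.0f} ms")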

  11. Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans.

    PubMed

    Putzar, Lisa; Gondan, Matthias; Röder, Brigitte

    2012-01-01

    People treated for bilateral congenital cataracts offer a model to study the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating an integrated processing of modality-specific information. This finding contrasts with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple short stimuli do not depend on the availability of visual and/or crossmodal input from birth.
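
    The coactivation analysis referred to here is commonly operationalized against Miller's race-model inequality, F_AV(t) <= F_A(t) + F_V(t): if the redundant-target RT distribution exceeds the summed unisensory RT distributions at any latency, a parallel race between modalities cannot explain the redundancy gain. A minimal sketch on simulated RTs (all values hypothetical):

      # Race-model-inequality check (Miller, 1982) on simulated reaction times.
      import numpy as np

      rng = np.random.default_rng(0)
      rt_a = rng.normal(320, 40, 500)   # auditory-only RTs (ms), synthetic
      rt_v = rng.normal(350, 45, 500)   # visual-only RTs
      rt_av = rng.normal(280, 35, 500)  # redundant audiovisual RTs

      def ecdf(sample, t):
          """Empirical CDF of `sample` evaluated at times `t`."""
          return np.searchsorted(np.sort(sample), t, side="right") / sample.size

      t = np.linspace(150, 500, 200)
      bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)  # race-model bound
      violation = ecdf(rt_av, t) - bound
      if np.any(violation > 0):
          print(f"race model violated; max violation = {violation.max():.3f}")
      else:
          print("no violation: a parallel race suffices")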

  12. Supramodal processing optimizes visual perceptual learning and plasticity.

    PubMed

    Zilber, Nicolas; Ciuciu, Philippe; Gramfort, Alexandre; Azizi, Leila; van Wassenhove, Virginie

    2014-06-01

    Multisensory interactions are ubiquitous in cortex and it has been suggested that sensory cortices may be supramodal, i.e., capable of functional selectivity irrespective of the sensory modality of inputs (Pascual-Leone and Hamilton, 2001; Renier et al., 2013; Ricciardi and Pietrini, 2011; Voss and Zatorre, 2012). Here, we asked whether learning to discriminate visual coherence could benefit from supramodal processing. To this end, three groups of participants were briefly trained to discriminate which of a red or green intermixed population of random-dot kinematograms (RDKs) was most coherent in a visual display while being recorded with magnetoencephalography (MEG). During training, participants heard no sound (V), congruent acoustic textures (AV) or auditory noise (AVn); importantly, congruent acoustic textures shared the temporal statistics - i.e., coherence - of the visual RDKs. After training, the AV group significantly outperformed participants trained in V and AVn, although they were not aware of their progress. In pre- and post-training blocks, all participants were tested without sound and with the same set of RDKs. When contrasting MEG data collected in these experimental blocks, selective differences were observed in the dynamic pattern and the cortical loci responsive to visual RDKs. First, and common to all three groups, vlPFC showed selectivity to the learned coherence levels, whereas selectivity in visual motion area hMT+ was only seen for the AV group. Second, and solely for the AV group, activity in multisensory cortices (mSTS, pSTS) correlated with post-training performances; additionally, the latencies of these effects suggested feedback from vlPFC to hMT+, possibly mediated by temporal cortices in the AV and AVn groups. Altogether, we interpret our results in the context of the Reverse Hierarchy Theory of learning (Ahissar and Hochstein, 2004), in which supramodal processing optimizes visual perceptual learning by capitalizing on sensory-invariant representations - here, global coherence levels across sensory modalities. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Perceived Synchrony of Frog Multimodal Signal Components Is Influenced by Content and Order.

    PubMed

    Taylor, Ryan C; Page, Rachel A; Klein, Barrett A; Ryan, Michael J; Hunter, Kimberly L

    2017-10-01

    Multimodal signaling is common in communication systems. Depending on the species, individual signal components may be produced synchronously as a result of physiological constraint (fixed) or each component may be produced independently (fluid) in time. For animals that rely on fixed signals, a basic prediction is that asynchrony between the components should degrade the perception of signal salience, reducing receiver response. Male túngara frogs, Physalaemus pustulosus, produce a fixed multisensory courtship signal by vocalizing with two call components (whines and chucks) and inflating a vocal sac (visual component). Using a robotic frog, we tested female responses to variation in the temporal arrangement between acoustic and visual components. When the visual component lagged a complex call (whine + chuck), females largely rejected this asynchronous multisensory signal in favor of the complex call absent the visual cue. When the chuck component was removed from one call, but the robofrog inflation lagged the complex call, females responded strongly to the asynchronous multimodal signal. When the chuck component was removed from both calls, females reversed preference and responded positively to the asynchronous multisensory signal. When the visual component preceded the call, females responded as often to the multimodal signal as to the call alone. These data show that asynchrony of a normally fixed signal does reduce receiver responsiveness. The magnitude and overall response, however, depend on specific temporal interactions between the acoustic and visual components. The sensitivity of túngara frogs to lagging visual cues, but not leading ones, and the influence of acoustic signal content on the perception of visual asynchrony are similar to effects reported in the human psychophysics literature. Virtually all acoustically communicating animals must conduct auditory scene analyses and identify the source of signals. Our data suggest that some basic audiovisual neural integration processes may be at work in the vertebrate brain. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology 2017. This work is written by US Government employees and is in the public domain in the US.

  14. Multisensory Emplaced Learning: Resituating Situated Learning in a Moving World

    ERIC Educational Resources Information Center

    Fors, Vaike; Backstrom, Asa; Pink, Sarah

    2013-01-01

    This article outlines the implications of a theory of "sensory-emplaced learning" for understanding the interrelationships between the embodied and environmental in learning processes. Understanding learning as multisensory and contingent within everyday place-events, this framework analytically describes how people establish themselves as…

  16. Altered Neural Oscillations During Multisensory Integration in Adolescents with Fetal Alcohol Spectrum Disorder.

    PubMed

    Bolaños, Alfredo D; Coffman, Brian A; Candelaria-Cook, Felicha T; Kodituwakku, Piyadasa; Stephen, Julia M

    2017-12-01

    Children with fetal alcohol spectrum disorder (FASD), who were exposed to alcohol in utero, display a broad range of sensory, cognitive, and behavioral deficits, which are broadly theorized to be rooted in altered brain function and structure. Based on the role of neural oscillations in multisensory integration from past studies, we hypothesized that adolescents with FASD would show a decrease in oscillatory power during event-related gamma oscillatory activity (30 to 100 Hz), when compared to typically developing healthy controls (HC), and that such a decrease in oscillatory power would predict behavioral performance. We measured sensory neurophysiology using magnetoencephalography (MEG) during passive auditory, somatosensory, and multisensory (synchronous) stimulation in 19 adolescents (12 to 21 years) with FASD and 23 age- and gender-matched HC. We employed a cross-hemisphere multisensory paradigm to assess interhemispheric connectivity deficits in children with FASD. Time-frequency analysis of MEG data revealed a significant decrease in gamma oscillatory power for both unisensory and multisensory conditions in the FASD group relative to HC, with group differences assessed by permutation testing. Greater beta oscillatory power (15 to 30 Hz) was also noted in the FASD group compared to HC in both unisensory and multisensory conditions. Regression analysis revealed greater predictive power of multisensory oscillations from unisensory oscillations in the FASD group compared to the HC group. Furthermore, multisensory oscillatory power, for both groups, predicted performance on the Intra-Extradimensional Set Shift Task and the Cambridge Gambling Task. Altered oscillatory power in the FASD group may reflect a restricted ability to process somatosensory and multisensory stimuli during day-to-day interactions. These alterations in neural oscillations may be associated with the neurobehavioral deficits experienced by adolescents with FASD and may carry over to adulthood. Copyright © 2017 by the Research Society on Alcoholism.
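
    Event-related gamma and beta power of the kind contrasted here comes from time-frequency decomposition of the sensor signal. The self-contained sketch below (synthetic single-channel trace; all parameters illustrative, not taken from this study) computes band power by convolution with complex Morlet wavelets:

      # Morlet time-frequency power on a synthetic one-channel trace.
      import numpy as np

      fs = 1000.0                          # sampling rate (Hz), illustrative
      t = np.arange(0, 2, 1 / fs)
      rng = np.random.default_rng(0)
      x = 0.5 * rng.standard_normal(t.size)
      x[t > 1.0] += np.sin(2 * np.pi * 40 * t[t > 1.0])  # 40 Hz "gamma" burst

      def morlet_power(x, fs, freq, n_cycles=7):
          """Power envelope at one frequency via complex Morlet convolution."""
          sigma_t = n_cycles / (2 * np.pi * freq)
          wt = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
          wavelet = (np.exp(2j * np.pi * freq * wt)
                     * np.exp(-wt ** 2 / (2 * sigma_t ** 2)))
          wavelet /= np.abs(wavelet).sum()
          return np.abs(np.convolve(x, wavelet, mode="same")) ** 2

      freqs = np.arange(15, 101, 5)
      tfr = np.array([morlet_power(x, fs, f) for f in freqs])
      gamma = tfr[freqs >= 30].mean(axis=0)   # 30-100 Hz power over time
      beta = tfr[freqs < 30].mean(axis=0)     # 15-30 Hz power over time
      print(gamma[t > 1.0].mean() / gamma[t <= 1.0].mean())  # burst shows up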

  17. Audiovisual synchrony enhances BOLD responses in a brain network including multisensory STS while also enhancing target-detection performance for both modalities

    PubMed Central

    Marchant, Jennifer L; Ruff, Christian C; Driver, Jon

    2012-01-01

    The brain seeks to combine related inputs from different senses (e.g., hearing and vision), via multisensory integration. Temporal information can indicate whether stimuli in different senses are related or not. A recent human fMRI study (Noesselt et al. [2007]: J Neurosci 27:11431–11441) used auditory and visual trains of beeps and flashes with erratic timing, manipulating whether auditory and visual trains were synchronous or unrelated in temporal pattern. A region of superior temporal sulcus (STS) showed higher BOLD signal for the synchronous condition. But this could not be related to performance, and it remained unclear if the erratic, unpredictable nature of the stimulus trains was important. Here we compared synchronous audiovisual trains to asynchronous trains, while using a behavioral task requiring detection of higher-intensity target events in either modality. We further varied whether the stimulus trains had a predictable temporal pattern or not. Synchrony (versus lag) between auditory and visual trains enhanced behavioral sensitivity (d') to intensity targets in either modality, regardless of predictable versus unpredictable patterning. The analogous contrast in fMRI revealed BOLD increases in several brain areas, including the left STS region reported by Noesselt et al. [2007: J Neurosci 27:11431–11441]. The synchrony effect on BOLD here correlated with the subject-by-subject impact on performance. Predictability of the temporal pattern did not affect target detection performance or STS activity, but did lead to an interaction with audiovisual synchrony for BOLD in inferior parietal cortex. PMID:21953980
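
    Behavioral sensitivity d' in target-detection designs like this one is the z-transformed hit rate minus the z-transformed false-alarm rate. A quick sketch with hypothetical trial counts:

      # d' from hit and false-alarm counts (illustrative numbers only).
      from scipy.stats import norm

      hits, misses = 78, 22   # target-present trials
      fas, crs = 12, 88       # target-absent trials

      def dprime(hits, misses, fas, crs):
          # Log-linear correction guards against proportions of 0 or 1.
          ph = (hits + 0.5) / (hits + misses + 1)
          pf = (fas + 0.5) / (fas + crs + 1)
          return norm.ppf(ph) - norm.ppf(pf)

      print(f"d' = {dprime(hits, misses, fas, crs):.2f}")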

  18. Visual-somatosensory integration and balance: evidence for psychophysical integrative differences in aging.

    PubMed

    Mahoney, Jeannette R; Holtzer, Roee; Verghese, Joe

    2014-01-01

    Research detailing multisensory integration (MSI) processes in aging and their association with clinically relevant outcomes is virtually non-existent. To our knowledge, the relationship between MSI and balance has not been well-established in aging. Given known alterations in unisensory processing with increasing age, the aims of the current study were to determine differential behavioral patterns of MSI in aging and investigate whether MSI was significantly associated with balance and fall-risk. Seventy healthy older adults (M = 75 years; 58% female) participated in the current study. Participants were instructed to make speeded responses to visual, somatosensory, and visual-somatosensory (VS) stimuli. Based on reaction times (RTs) to all stimuli, participants were classified into one of two groups (MSI or NO MSI), depending on their MSI RT benefit. Static balance was assessed using mean unipedal stance time. Overall, results revealed that RTs to VS stimuli were significantly shorter than those elicited to constituent unisensory conditions. Further, the current experimental design afforded differential patterns of multisensory processing, with 75% of the elderly sample demonstrating multisensory enhancements. Interestingly, 25% of older adults did not demonstrate multisensory RT facilitation; a finding that was attributed to extremely fast RTs overall and specifically in response to somatosensory inputs. Individuals in the NO MSI group maintained significantly better unipedal stance times and reported fewer falls, compared to elders in the MSI group. This study reveals the existence of differential patterns of multisensory processing in aging, while describing the clinical translational value of MSI enhancements in predicting balance and falls risk.

  19. Visual-Somatosensory Integration and Balance: Evidence for Psychophysical Integrative Differences in Aging

    PubMed Central

    Mahoney, Jeannette R.; Holtzer, Roee; Verghese, Joe

    2014-01-01

    Research detailing multisensory integration (MSI) processes in aging and their association with clinically relevant outcomes is virtually non-existent. To our knowledge, the relationship between MSI and balance has not been well-established in aging. Given known alterations in unisensory processing with increasing age, the aims of the current study were to determine differential behavioral patterns of MSI in aging and investigate whether MSI was significantly associated with balance and fall-risk. Seventy healthy older adults (M = 75 years; 58% female) participated in the current study. Participants were instructed to make speeded responses to visual, somatosensory, and visual-somatosensory (VS) stimuli. Based on reaction times (RTs) to all stimuli, participants were classified into one of two groups (MSI or NO MSI), depending on their MSI RT benefit. Static balance was assessed using mean unipedal stance time. Overall, results revealed that RTs to VS stimuli were significantly shorter than those elicited to constituent unisensory conditions. Further, the current experimental design afforded differential patterns of multisensory processing, with 75% of the elderly sample demonstrating multisensory enhancements. Interestingly, 25% of older adults did not demonstrate multisensory RT facilitation; a finding that was attributed to extremely fast RTs overall and specifically in response to somatosensory inputs. Individuals in the NO MSI group maintained significantly better unipedal stance times and reported fewer falls, compared to elders in the MSI group. This study reveals the existence of differential patterns of multisensory processing in aging, while describing the clinical translational value of MSI enhancements in predicting balance and falls risk. PMID:25102664

  20. Multisensory integration across the senses in young and old adults

    PubMed Central

    Mahoney, Jeannette R.; Li, Po Ching Clara; Oh-Park, Mooyeon; Verghese, Joe; Holtzer, Roee

    2011-01-01

    Stimuli are processed concurrently across multiple sensory inputs. Here we directly compared the effect of multisensory integration (MSI) on reaction time across three paired sensory inputs in eighteen young (M=19.17 yrs) and eighteen old (M=76.44 yrs) individuals. Participants were determined to be non-demented and without any medical or psychiatric conditions that would affect their performance. Participants responded to randomly presented unisensory (auditory, visual, somatosensory) stimuli and three paired sensory inputs consisting of auditory-somatosensory (AS), auditory-visual (AV), and visual-somatosensory (VS) stimuli. Results revealed that reaction times (RTs) to all multisensory pairings were significantly faster than those elicited to the constituent unisensory conditions across age groups; findings that could not be accounted for by simple probability summation. Both young and old participants responded the fastest to multisensory pairings containing somatosensory input. Compared to younger adults, older adults demonstrated a significantly greater RT benefit when processing concurrent VS information. In terms of co-activation, older adults demonstrated a significant increase in the magnitude of visual-somatosensory co-activation (i.e., multisensory integration), while younger adults demonstrated a significant increase in the magnitude of auditory-visual and auditory-somatosensory co-activation. This study provides the first evidence in support of the facilitative effect of pairing somatosensory with visual stimuli in older adults. PMID:22024545
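
    The "simple probability summation" benchmark mentioned here is commonly formalized as Raab's independent-race prediction, F_race(t) = F_A(t) + F_V(t) - F_A(t)F_V(t); observed multisensory RT distributions that beat this prediction indicate genuine co-activation. A sketch on simulated data (all values hypothetical, not from this study):

      # Raab independent-race (probability summation) benchmark for RTs.
      import numpy as np

      rng = np.random.default_rng(1)
      rt_v = rng.normal(360, 50, 400)    # visual-only RTs (ms), synthetic
      rt_s = rng.normal(340, 45, 400)    # somatosensory-only RTs
      rt_vs = rng.normal(290, 40, 400)   # bimodal visual-somatosensory RTs

      def ecdf(sample, t):
          """Empirical CDF of `sample` evaluated at times `t`."""
          return np.searchsorted(np.sort(sample), t, side="right") / sample.size

      t = np.linspace(150, 550, 200)
      f_race = ecdf(rt_v, t) + ecdf(rt_s, t) - ecdf(rt_v, t) * ecdf(rt_s, t)
      gain = ecdf(rt_vs, t) - f_race
      print(f"max facilitation beyond probability summation: {gain.max():.3f}")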

  1. Thalamic connections of the core auditory cortex and rostral supratemporal plane in the macaque monkey.

    PubMed

    Scott, Brian H; Saleem, Kadharbatcha S; Kikuchi, Yukiko; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C

    2017-11-01

    In the primate auditory cortex, information flows serially in the mediolateral dimension from core, to belt, to parabelt. In the caudorostral dimension, stepwise serial projections convey information through the primary, rostral, and rostrotemporal (AI, R, and RT) core areas on the supratemporal plane, continuing to the rostrotemporal polar area (RTp) and adjacent auditory-related areas of the rostral superior temporal gyrus (STGr) and temporal pole. In addition to this cascade of corticocortical connections, the auditory cortex receives parallel thalamocortical projections from the medial geniculate nucleus (MGN). Previous studies have examined the projections from MGN to auditory cortex, but most have focused on the caudal core areas AI and R. In this study, we investigated the full extent of connections between MGN and AI, R, RT, RTp, and STGr using retrograde and anterograde anatomical tracers. Both AI and R received nearly 90% of their thalamic inputs from the ventral subdivision of the MGN (MGv; the primary/lemniscal auditory pathway). By contrast, RT received only ∼45% from MGv, and an equal share from the dorsal subdivision (MGd). Area RTp received ∼25% of its inputs from MGv, but received additional inputs from multisensory areas outside the MGN (30% in RTp vs. 1-5% in core areas). The MGN input to RTp distinguished this rostral extension of auditory cortex from the adjacent auditory-related cortex of the STGr, which received 80% of its thalamic input from multisensory nuclei (primarily medial pulvinar). Anterograde tracers identified complementary descending connections by which highly processed auditory information may modulate thalamocortical inputs. © 2017 Wiley Periodicals, Inc.

  2. Behavioral Impact of Unisensory and Multisensory Audio-Tactile Events: Pros and Cons for Interlimb Coordination in Juggling

    PubMed Central

    Zelic, Gregory; Mottet, Denis; Lagarde, Julien

    2012-01-01

    Recent behavioral neuroscience research revealed that elementary reactive behavior can be improved in the case of cross-modal sensory interactions thanks to underlying multisensory integration mechanisms. Can this benefit be generalized to an ongoing coordination of movements under severe physical constraints? We chose a juggling task to examine this question. A central issue well-known in juggling lies in establishing and maintaining a specific temporal coordination among balls, hands, eyes and posture. Here, we tested whether providing additional timing information about the ball and hand motions by using external sound and tactile periodic stimulations, the latter presented at the wrists, improved the behavior of jugglers. One specific combination of auditory and tactile metronomes led to a decrease in the spatiotemporal variability of the juggler's performance: a simple sound associated with left and right tactile cues presented in antiphase to each other, which corresponded to the temporal pattern of hand movements in the juggling task. In contrast, no improvements were obtained in the case of other auditory and tactile combinations. We even found a degraded performance when tactile events were presented alone. The nervous system thus appears able to integrate, in an efficient way, environmental information brought by different sensory modalities, but only if the information specified matches specific features of the coordination pattern. We discuss the possible implications of these results for the understanding of the neuronal integration process implied in audio-tactile interaction in the context of complex voluntary movement, and considering the well-known gating effect of movement on vibrotactile perception. PMID:22384211

  3. A Double Dissociation between Anterior and Posterior Superior Temporal Gyrus for Processing Audiovisual Speech Demonstrated by Electrocorticography

    PubMed Central

    Ozker, Muge; Schepers, Inga M.; Magnotti, John F.; Yoshor, Daniel; Beauchamp, Michael S.

    2017-01-01

    Human speech can be comprehended using only auditory information from the talker’s voice. However, comprehension is improved if the talker’s face is visible, especially if the auditory information is degraded, as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory components within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with a clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl’s gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with a noisy auditory component. Taken together, these results suggest that posterior STG, but not anterior STG, is important for multisensory integration of noisy auditory and visual speech. PMID:28253074
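
    The Bayesian prediction tested here, reduced variability under integration, follows from reliability-weighted cue fusion: the combined estimate has variance (1/sigma_A^2 + 1/sigma_V^2)^(-1), which is never larger than either unisensory variance. A numeric sketch with arbitrary variances:

      # Reliability-weighted cue fusion: the fused estimate's variance is
      # below both unisensory variances (all values arbitrary).
      var_a, var_v = 9.0, 16.0
      w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # weight on the auditory cue
      var_av = 1.0 / (1 / var_a + 1 / var_v)        # fused variance
      print(f"w_a = {w_a:.2f}, fused variance = {var_av:.2f}, "
            f"reduced: {var_av < min(var_a, var_v)}")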

  4. From multisensory integration in peripersonal space to bodily self-consciousness: from statistical regularities to statistical inference.

    PubMed

    Noel, Jean-Paul; Blanke, Olaf; Serino, Andrea

    2018-06-06

    Integrating information across sensory systems is a critical step toward building a cohesive representation of the environment and one's body, and as illustrated by numerous illusions, scaffolds subjective experience of the world and self. In recent years, classic principles of multisensory integration elucidated in the subcortex have been translated into the language of statistical inference understood by the neocortical mantle. Most importantly, a mechanistic systems-level description of multisensory computations via probabilistic population coding and divisive normalization is actively being put forward. In parallel, by describing and understanding bodily illusions, researchers have suggested multisensory integration of bodily inputs within the peripersonal space as a key mechanism in bodily self-consciousness. Importantly, certain aspects of bodily self-consciousness, although still only a minority, have recently been cast in the light of modern computational understandings of multisensory integration. In doing so, we argue, the field of bodily self-consciousness may borrow mechanistic descriptions regarding the neural implementation of inference computations outlined by the multisensory field. This computational approach, which leverages the general understanding of multisensory processes, promises to advance scientific comprehension regarding one of the most mysterious questions puzzling humankind, that is, how our brain creates the experience of a self in interaction with the environment. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.
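
    To give the divisive-normalization account mentioned above a concrete form, the toy sketch below divides each unit's driven response by a pooled drive; one hallmark it reproduces is inverse effectiveness (proportionally larger multisensory gain for weak inputs). The model form and every parameter here are illustrative simplifications, not the specific model advanced by these authors:

      # Toy divisive-normalization model of a multisensory unit; the
      # normalization pool is collapsed to a single constant for brevity.
      def response(audio, visual, sigma=1.0):
          """Summed modality drive divided by pooled drive plus a constant."""
          drive = audio + visual
          return drive / (drive + sigma)

      for drive in (0.2, 2.0):
          uni = response(drive, 0.0)        # one modality alone
          multi = response(drive, drive)    # both modalities together
          gain = 100 * (multi - uni) / uni
          # Weak drive -> ~71% gain; strong drive -> ~20% gain, i.e.,
          # inverse effectiveness emerges from the normalization.
          print(f"drive={drive}: multisensory gain = {gain:.0f}%")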

  5. Deconstructing the McGurk-MacDonald Illusion

    ERIC Educational Resources Information Center

    Soto-Faraco, Salvador; Alsius, Agnes

    2009-01-01

    Cross-modal illusions such as the McGurk-MacDonald effect have been used to illustrate the automatic, encapsulated nature of multisensory integration. This characterization is based on the widespread assumption that the illusory percept arising from intersensory conflict reflects only the end-product of the multisensory integration process, with…

  6. Multisensory Instruction in Foreign Language Education.

    ERIC Educational Resources Information Center

    Robles, Teresita del Rosario Caballero; Uglem, Craig Thomas Chase

    This paper reviews theories that, through history, have explained the process of learning. It also taps into new findings on how the brain learns. Multisensory instruction is a pedagogic strategy that covers the greatest number of individual preferences in the classroom, language laboratories, and multimedia rooms for a constant and diverse…

  7. Multisensory Interference in Early Deaf Adults

    ERIC Educational Resources Information Center

    Heimler, Benedetta; Baruffaldi, Francesca; Bonmassar, Claudia; Venturini, Marta; Pavani, Francesco

    2017-01-01

    Multisensory interactions in deaf cognition are largely unexplored. Unisensory studies suggest that behavioral/neural changes may be more prominent for visual compared to tactile processing in early deaf adults. Here we test whether such an asymmetry results in increased saliency of vision over touch during visuo-tactile interactions. About 23…

  8. Evidence for Diminished Multisensory Integration in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.

    2014-01-01

    Individuals with autism spectrum disorders (ASD) exhibit alterations in sensory processing, including changes in the integration of information across the different sensory modalities. In the current study, we used the sound-induced flash illusion to assess multisensory integration in children with ASD and typically-developing (TD) controls.…

  9. Attention distributed across sensory modalities enhances perceptual performance

    PubMed Central

    Mishra, Jyoti; Gazzaley, Adam

    2012-01-01

    This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811

  10. The effect of early visual deprivation on the neural bases of multisensory processing.

    PubMed

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2015-06-01

    Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Parietal and temporal activity during a multimodal dance video game: an fNIRS study.

    PubMed

    Tachibana, Atsumichi; Noah, J Adam; Bronner, Shaw; Ono, Yumie; Onozuka, Minoru

    2011-10-03

    Using functional near infrared spectroscopy (fNIRS), we studied how playing a dance video game employs coordinated activation of sensory-motor integration centers of the superior parietal lobe (SPL) and superior temporal gyrus (STG). Subjects played a dance video game, in a block design with 30 s of activity alternating with 30 s of rest, while changes in oxy-hemoglobin (oxy-Hb) levels were continuously measured. The game was modified to compare difficult (4-arrow), simple (2-arrow), and stepping conditions. Oxy-Hb levels were greatest with increased task difficulty. The quick-onset, trapezoidal time-course increase in SPL oxy-Hb levels reflected the on-off neuronal response of spatial orienting and rhythmic motor timing that were required during the activity. Slow-onset, bell-shaped increases in oxy-Hb levels observed in STG suggested the gradually increasing load of directing multisensory information to downstream processing centers associated with motor behavior and control. Differences in the temporal relationships of SPL and STG oxy-Hb concentration levels may reflect the functional roles of these brain structures during the task period. fNIRS permits insights into the temporal relationships of cortical hemodynamics during real motor tasks. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
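
    Analyses of 30 s on/off block designs like this one typically regress the measured oxy-Hb time course on a task boxcar convolved with a hemodynamic response function. A compact sketch on a synthetic signal; the double-gamma HRF and all numeric values below are common conventions assumed for illustration, not details taken from this paper:

      # Block-design GLM sketch for an fNIRS-style oxy-Hb time course.
      import numpy as np
      from scipy.stats import gamma

      fs = 2.0                                     # Hz, illustrative rate
      t = np.arange(0, 300, 1 / fs)                # 5 min of data
      boxcar = ((t // 30) % 2 == 1).astype(float)  # 30 s rest / 30 s task

      ht = np.arange(0, 30, 1 / fs)                # double-gamma canonical HRF
      hrf = gamma.pdf(ht, 6) - gamma.pdf(ht, 16) / 6.0
      regressor = np.convolve(boxcar, hrf)[: t.size]

      rng = np.random.default_rng(2)
      oxy_hb = 0.8 * regressor + rng.normal(0, 0.3, t.size)  # synthetic signal

      X = np.column_stack([regressor, np.ones_like(t)])      # design matrix
      beta, *_ = np.linalg.lstsq(X, oxy_hb, rcond=None)
      print(f"estimated task effect = {beta[0]:.2f} (true value 0.8)")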

  12. Over my fake body: body ownership illusions for studying the multisensory basis of own-body perception

    PubMed Central

    Kilteni, Konstantina; Maselli, Antonella; Kording, Konrad P.; Slater, Mel

    2015-01-01

    Which is my body and how do I distinguish it from the bodies of others, or from objects in the surrounding environment? The perception of our own body, and more particularly our sense of body ownership, is taken for granted. Nevertheless, experimental findings from body ownership illusions (BOIs) show that under specific multisensory conditions, we can experience artificial body parts or fake bodies as our own body parts or body, respectively. The aim of the present paper is to discuss how and why BOIs are induced. We review several experimental findings concerning the spatial, temporal, and semantic principles of crossmodal stimuli that have been applied to induce BOIs. On the basis of these principles, we discuss theoretical approaches concerning the underlying mechanism of BOIs. We propose a conceptualization based on Bayesian causal inference for addressing how our nervous system could infer whether an object belongs to our own body, using multisensory, sensorimotor, and semantic information, and we discuss how this can account for several experimental findings. Finally, we point to neural network models as an implementational framework within which the computational problem behind BOIs could be addressed in the future. PMID:25852524
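
    The Bayesian causal inference conceptualization the authors propose can be made concrete in a few lines: the observer weighs a common-cause hypothesis for two noisy cues against independent causes, in the spirit of Körding et al. (2007). All numeric values below are arbitrary illustrations:

      # Toy Bayesian causal inference: did two noisy cues share one cause?
      import numpy as np
      from scipy.stats import norm

      x_v, x_t = 1.0, 2.5        # visual / tactile measurements (arbitrary)
      sig_v, sig_t = 1.0, 1.5    # cue noise standard deviations
      mu_p, sig_p = 0.0, 10.0    # Gaussian prior over source position
      p_common = 0.5             # prior probability of a common cause

      # Integrate the unknown source position(s) out numerically.
      s = np.linspace(-60, 60, 24001)
      ds = s[1] - s[0]
      prior = norm.pdf(s, mu_p, sig_p)
      lv = norm.pdf(x_v, s, sig_v)
      lt = norm.pdf(x_t, s, sig_t)

      like_c1 = np.sum(lv * lt * prior) * ds                       # one cause
      like_c2 = np.sum(lv * prior) * ds * np.sum(lt * prior) * ds  # two causes

      post_c1 = (like_c1 * p_common
                 / (like_c1 * p_common + like_c2 * (1 - p_common)))
      print(f"P(common cause | cues) = {post_c1:.2f}")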

  13. Neural Correlates of Temporal Complexity and Synchrony during Audiovisual Correspondence Detection.

    PubMed

    Baumann, Oliver; Vromen, Joyce M G; Cheung, Allen; McFadyen, Jessica; Ren, Yudan; Guo, Christine C

    2018-01-01

    We often perceive real-life objects as multisensory cues through space and time. A key challenge for audiovisual integration is to match neural signals that not only originate from different sensory modalities but also typically reach the observer at slightly different times. In humans, complex, unpredictable audiovisual streams lead to higher levels of perceptual coherence than predictable, rhythmic streams. In addition, perceptual coherence for complex signals seems less affected by increased asynchrony between visual and auditory modalities than for simple signals. Here, we used functional magnetic resonance imaging to determine the human neural correlates of audiovisual signals with different levels of temporal complexity and synchrony. Our study demonstrated that greater perceptual asynchrony and lower signal complexity impaired performance in an audiovisual coherence-matching task. Differences in asynchrony and complexity were also underpinned by a partially different set of brain regions. In particular, our results suggest that, while regions in the dorsolateral prefrontal cortex (DLPFC) were modulated by differences in memory load due to stimulus asynchrony, areas traditionally thought to be involved in speech production and recognition, such as the inferior frontal and superior temporal cortex, were modulated by the temporal complexity of the audiovisual signals. Our results, therefore, indicate specific processing roles for different subregions of the fronto-temporal cortex during audiovisual coherence detection.

  14. Neural Correlates of Temporal Complexity and Synchrony during Audiovisual Correspondence Detection

    PubMed Central

    Ren, Yudan

    2018-01-01

    We often perceive real-life objects as multisensory cues through space and time. A key challenge for audiovisual integration is to match neural signals that not only originate from different sensory modalities but also typically reach the observer at slightly different times. In humans, complex, unpredictable audiovisual streams lead to higher levels of perceptual coherence than predictable, rhythmic streams. In addition, perceptual coherence for complex signals seems less affected by increased asynchrony between visual and auditory modalities than for simple signals. Here, we used functional magnetic resonance imaging to determine the human neural correlates of audiovisual signals with different levels of temporal complexity and synchrony. Our study demonstrated that greater perceptual asynchrony and lower signal complexity impaired performance in an audiovisual coherence-matching task. Differences in asynchrony and complexity were also underpinned by a partially different set of brain regions. In particular, our results suggest that, while regions in the dorsolateral prefrontal cortex (DLPFC) were modulated by differences in memory load due to stimulus asynchrony, areas traditionally thought to be involved in speech production and recognition, such as the inferior frontal and superior temporal cortex, were modulated by the temporal complexity of the audiovisual signals. Our results, therefore, indicate specific processing roles for different subregions of the fronto-temporal cortex during audiovisual coherence detection. PMID:29354682

  15. Visually induced gains in pitch discrimination: Linking audio-visual processing with auditory abilities.

    PubMed

    Møller, Cecilie; Højlund, Andreas; Bærentsen, Klaus B; Hansen, Niels Chr; Skewes, Joshua C; Vuust, Peter

    2018-05-01

    Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states that multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak. It is one of the basic principles underlying the multisensory processing of spatiotemporally corresponding crossmodal stimuli, and it is well established at both the behavioral and neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues to pitch discrimination follows the PoIE at the interindividual level (i.e., varies with varying levels of auditory-only pitch discrimination abilities). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than incongruent condition. The magnitude of gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.

  16. Modality-specific spectral dynamics in response to visual and tactile sequential shape information processing tasks: An MEG study using multivariate pattern classification analysis.

    PubMed

    Gohel, Bakul; Lee, Peter; Jeong, Yong

    2016-08-01

    Brain regions that respond to more than one sensory modality are characterized as multisensory regions. Studies on the processing of shape or object information have revealed recruitment of the lateral occipital cortex, posterior parietal cortex, and other regions regardless of input sensory modalities. However, it remains unknown whether such regions show similar (modality-invariant) or different (modality-specific) neural oscillatory dynamics, as recorded using magnetoencephalography (MEG), in response to identical shape information processing tasks delivered to different sensory modalities. Modality-invariant or modality-specific neural oscillatory dynamics indirectly suggest modality-independent or modality-dependent participation of particular brain regions, respectively. Therefore, this study investigated the modality-specificity of neural oscillatory dynamics in the form of spectral power modulation patterns in response to visual and tactile sequential shape-processing tasks that are well-matched in terms of speed and content between the sensory modalities. Task-related changes in spectral power modulation and differences in spectral power modulation between sensory modalities were investigated at the source-space (voxel) level, using a multivariate pattern classification (MVPC) approach. Additionally, all analyses were extended from the voxel level to the independent-component level to take account of signal leakage effects caused by the inverse solution. Multisensory and higher-order brain regions, such as the lateral occipital cortex, posterior parietal cortex, and inferior temporal cortex, showed task-related modulation in response to both sensory modalities, but with modality-specific spectral dynamics. This suggests that the participation of such brain regions in sequential shape-information processing depends on the input sensory modality. Copyright © 2016 Elsevier B.V. All rights reserved.
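
    The MVPC approach used here amounts to training a cross-validated classifier to tell the input modalities apart from patterns of spectral power. A scikit-learn sketch on fabricated data (feature counts, labels, and effect size are all invented for illustration):

      # Minimal multivariate pattern classification (MVPC) sketch:
      # decode input modality (visual vs. tactile) from spectral power.
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(3)
      n_trials, n_features = 120, 50        # trials x (source, band) power
      X = rng.normal(0, 1, (n_trials, n_features))
      y = np.repeat([0, 1], n_trials // 2)  # 0 = visual task, 1 = tactile task
      X[y == 1, :5] += 0.8                  # modality-specific power shift

      clf = make_pipeline(StandardScaler(), LinearSVC())
      scores = cross_val_score(clf, X, y, cv=5)
      print(f"decoding accuracy = {scores.mean():.2f} (chance 0.50)")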

  17. The integration processing of the visual and auditory information in videos of real-world events: an ERP study.

    PubMed

    Liu, Baolin; Wang, Zhongning; Jin, Zhixing

    2009-09-11

    In real life, the human brain usually receives information through visual and auditory channels and processes the multisensory information, but studies on the integration processing of dynamic visual and auditory information are relatively few. In this paper, we designed an experiment in which real-world videos of common scenarios, with matched and mismatched actions (images) and sounds, served as stimuli; our aim was to study the integration processing of synchronized visual and auditory information from videos of real-world events in the human brain, using event-related potential (ERP) methods. Experimental results showed that videos with mismatched actions (images) and sounds elicited a larger P400 than videos with matched actions (images) and sounds. We believe that the P400 waveform might be related to the cognitive integration processing of mismatched multisensory information in the human brain. The results also indicated that the synchronized multisensory streams interfered with each other, influencing the outcome of the cognitive integration processing.
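
    The P400 comparison described here is, analytically, a difference wave: epochs are baseline-corrected against the pre-stimulus interval, averaged within condition, and the mismatched-minus-matched difference is inspected around 400 ms. A numpy sketch on synthetic epochs (all amplitudes and latencies invented):

      # ERP averaging and difference-wave sketch (synthetic single channel).
      import numpy as np

      fs = 500                                  # sampling rate (Hz)
      t = np.arange(-0.2, 0.8, 1 / fs)          # epoch: -200 to 800 ms
      rng = np.random.default_rng(4)

      def make_epochs(n, p400_amp):
          """Simulate n epochs with a Gaussian 'P400' bump plus noise."""
          bump = p400_amp * np.exp(-0.5 * ((t - 0.4) / 0.05) ** 2)
          return bump + rng.normal(0, 2.0, (n, t.size))

      matched = make_epochs(80, 1.0)
      mismatched = make_epochs(80, 3.0)         # larger P400 for mismatch

      def erp(epochs):
          """Baseline-correct each epoch (pre-stimulus mean) and average."""
          base = epochs[:, t < 0].mean(axis=1, keepdims=True)
          return (epochs - base).mean(axis=0)

      diff = erp(mismatched) - erp(matched)     # difference wave
      win = (t > 0.35) & (t < 0.45)
      print(f"mean difference in 350-450 ms window: {diff[win].mean():.2f} uV")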

  18. Neuroimaging investigations of dorsal stream processing and effects of stimulus synchrony in schizophrenia.

    PubMed

    Sanfratello, Lori; Aine, Cheryl; Stephen, Julia

    2018-05-25

    Impairments in auditory and visual processing are common in schizophrenia (SP). In the unisensory realm, visual deficits are primarily noted for the dorsal visual stream. In addition, insensitivity to timing offsets between stimuli is widely reported for SP. The aim of the present study was to test, at the physiological level, differences in dorsal/ventral stream visual processing and timing sensitivity between SP and healthy controls (HC) using MEG and a simple auditory/visual task utilizing a variety of multisensory conditions. The paradigm included all combinations of synchronous/asynchronous and central/peripheral stimuli, yielding 4 task conditions. Both HC and SP groups showed activation in parietal areas (dorsal visual stream) during all multisensory conditions, with parietal areas showing decreased activation for SP relative to HC, and a significantly delayed peak of activation for SP in intraparietal sulcus (IPS). We also observed a differential effect of stimulus synchrony on HC and SP parietal response. Furthermore, a negative correlation was found between SP positive symptoms and activity in IPS. Taken together, our results provide evidence of impairment of the dorsal visual stream in SP during a multisensory task, along with an altered response to timing offsets between presented multisensory stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. A Multisensory Aquatic Environment for Individuals with Intellectual/Developmental Disabilities

    ERIC Educational Resources Information Center

    Potter, Cindy; Erzen, Carol

    2008-01-01

    This article presents the eighth of a 12-part series exploring the benefits of aquatic therapy and recreation for people with special needs. Here, the authors describe the process of development and installation of an aquatic multisensory environment (MSE) and the many factors that one should consider for a successful result. There are many…

  20. Multisensory integration, sensory substitution and visual rehabilitation.

    PubMed

    Proulx, Michael J; Ptito, Maurice; Amedi, Amir

    2014-04-01

    Sensory substitution has advanced remarkably over the past 35 years since first introduced to the scientific literature by Paul Bach-y-Rita. In this issue dedicated to his memory, we describe a collection of reviews that assess the current state of neuroscience research on sensory substitution, visual rehabilitation, and multisensory processes. Copyright © 2014. Published by Elsevier Ltd.

  1. A psychophysical investigation of differences between synchrony and temporal order judgments.

    PubMed

    Love, Scott A; Petrini, Karin; Cheng, Adam; Pollick, Frank E

    2013-01-01

    Synchrony judgments involve deciding whether cues to an event are in synch or out of synch, while temporal order judgments involve deciding which of the cues came first. When the cues come from different sensory modalities, these judgments can be used to investigate multisensory integration in the temporal domain. However, evidence indicates that these two tasks should not be used interchangeably, as it is unlikely that they measure the same perceptual mechanism. The current experiment further explores this issue across a variety of different audiovisual stimulus types. Participants were presented with 5 audiovisual stimulus types, each at 11 parametrically manipulated levels of cue asynchrony. During separate blocks, participants had to make synchrony judgments or temporal order judgments. For some stimulus types many participants were unable to successfully make temporal order judgments, but they were able to make synchrony judgments. The mean points of subjective simultaneity for synchrony judgments were all video-leading, while those for temporal order judgments were all audio-leading. In the within-participants analyses no correlation was found across the two tasks for either the point of subjective simultaneity or the temporal integration window. Stimulus type influenced how the two tasks differed; nevertheless, consistent differences were found between the two tasks regardless of stimulus type. Therefore, in line with previous work, we conclude that synchrony and temporal order judgments are supported by different perceptual mechanisms and should not be interpreted as being representative of the same perceptual process.

  2. A Psychophysical Investigation of Differences between Synchrony and Temporal Order Judgments

    PubMed Central

    Love, Scott A.; Petrini, Karin; Cheng, Adam; Pollick, Frank E.

    2013-01-01

    Background Synchrony judgments involve deciding whether cues to an event are in synch or out of synch, while temporal order judgments involve deciding which of the cues came first. When the cues come from different sensory modalities, these judgments can be used to investigate multisensory integration in the temporal domain. However, evidence indicates that these two tasks should not be used interchangeably, as it is unlikely that they measure the same perceptual mechanism. The current experiment further explores this issue across a variety of different audiovisual stimulus types. Methodology/Principal Findings Participants were presented with 5 audiovisual stimulus types, each at 11 parametrically manipulated levels of cue asynchrony. During separate blocks, participants had to make synchrony judgments or temporal order judgments. For some stimulus types many participants were unable to successfully make temporal order judgments, but they were able to make synchrony judgments. The mean points of subjective simultaneity for synchrony judgments were all video-leading, while those for temporal order judgments were all audio-leading. In the within-participants analyses no correlation was found across the two tasks for either the point of subjective simultaneity or the temporal integration window. Conclusions Stimulus type influenced how the two tasks differed; nevertheless, consistent differences were found between the two tasks regardless of stimulus type. Therefore, in line with previous work, we conclude that synchrony and temporal order judgments are supported by different perceptual mechanisms and should not be interpreted as being representative of the same perceptual process. PMID:23349971
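
    For temporal order judgments, the conventional summary is a cumulative Gaussian fit to the proportion of "visual first" responses: the 50% point gives the PSS and the 25-75% half-spread gives the JND. A sketch mirroring that convention (all data invented, not taken from the study above):

      # TOJ psychometric fit: PSS and JND from a cumulative Gaussian.
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import norm

      # Negative SOA = auditory first, positive = visual first (ms).
      soas = np.array([-240, -120, -60, -30, 0, 30, 60, 120, 240], float)
      p_vfirst = np.array([0.04, 0.10, 0.22, 0.40, 0.55, 0.70, 0.82, 0.93, 0.98])

      def cum_gauss(soa, pss, sigma):
          """Probability of responding 'visual first' at a given SOA."""
          return norm.cdf(soa, loc=pss, scale=sigma)

      (pss, sigma), _ = curve_fit(cum_gauss, soas, p_vfirst, p0=[0.0, 100.0])
      jnd = sigma * norm.ppf(0.75)   # half of the 25%-75% spread
      print(f"PSS = {pss:.1f} ms (audio-leading if negative), JND = {jnd:.1f} ms")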

  3. Functional neuroimaging studies in addiction: multisensory drug stimuli and neural cue reactivity.

    PubMed

    Yalachkov, Yavor; Kaiser, Jochen; Naumer, Marcus J

    2012-02-01

    Neuroimaging studies on cue reactivity have substantially contributed to the understanding of addiction. In the majority of studies drug cues were presented in the visual modality. However, exposure to conditioned cues in real life occurs often simultaneously in more than one sensory modality. Therefore, multisensory cues should elicit cue reactivity more consistently than unisensory stimuli and increase the ecological validity and the reliability of brain activation measurements. This review includes the data from 44 whole-brain functional neuroimaging studies with a total of 1168 subjects (812 patients and 356 controls). Correlations between neural cue reactivity and clinical covariates such as craving have been reported significantly more often for multisensory than unisensory cues in the motor cortex, insula and posterior cingulate cortex. Thus, multisensory drug cues are particularly effective in revealing brain-behavior relationships in neurocircuits of addiction responsible for motivation, craving awareness and self-related processing. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Multisensory effects on somatosensation: a trimodal visuo-vestibular-tactile interaction

    PubMed Central

    Kaliuzhna, Mariia; Ferrè, Elisa Raffaella; Herbelin, Bruno; Blanke, Olaf; Haggard, Patrick

    2016-01-01

    Vestibular information about self-motion is combined with other sensory signals. Previous research described both visuo-vestibular and vestibular-tactile bilateral interactions, but the simultaneous interaction between all three sensory modalities has not been explored. Here we exploit a previously reported visuo-vestibular integration to investigate multisensory effects on tactile sensitivity in humans. Tactile sensitivity was measured during passive whole body rotations alone or in conjunction with optic flow, creating either purely vestibular or visuo-vestibular sensations of self-motion. Our results demonstrate that tactile sensitivity is modulated by perceived self-motion, as provided by a combined visuo-vestibular percept, and not by the visual and vestibular cues independently. We propose a hierarchical multisensory interaction that underpins somatosensory modulation: visual and vestibular cues are first combined to produce a multisensory self-motion percept. Somatosensory processing is then enhanced according to the degree of perceived self-motion. PMID:27198907

  5. Nonvisual spatial navigation fMRI lateralizes mesial temporal lobe epilepsy in a patient with congenital blindness.

    PubMed

    Toller, Gianina; Adhimoolam, Babu; Grunwald, Thomas; Huppertz, Hans-Jürgen; König, Kristina; Jokeit, Hennric

    2015-01-01

    Nonvisual spatial navigation functional magnetic resonance imaging (fMRI) may help clinicians determine memory lateralization in blind individuals with refractory mesial temporal lobe epilepsy (MTLE). We report on an exceptional case of a congenitally blind woman with late-onset left MTLE undergoing presurgical memory fMRI. To activate mesial temporal structures despite the lack of visual memory, the patient was requested to recall familiar routes using nonvisual multisensory and verbal cues. Our findings demonstrate the diagnostic value of a nonvisual fMRI task to lateralize MTLE despite congenital blindness and may therefore contribute to the risk assessment for postsurgical amnesia in rare cases with refractory MTLE and accompanying congenital blindness.

  6. Behavioral, Perceptual, and Neural Alterations in Sensory and Multisensory Function in Autism Spectrum Disorder

    PubMed Central

    Baum, Sarah H.; Stevenson, Ryan A.; Wallace, Mark T.

    2015-01-01

    Although sensory processing challenges have been noted since the first clinical descriptions of autism, it has taken until the release of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) in 2013 for sensory problems to be included as part of the core symptoms of autism spectrum disorder (ASD) in the diagnostic profile. Because sensory information forms the building blocks for higher-order social and cognitive functions, we argue that sensory processing is not only an additional piece of the puzzle, but rather a critical cornerstone for characterizing and understanding ASD. In this review we discuss what is currently known about sensory processing in ASD, how sensory function fits within contemporary models of ASD, and what is understood about the differences in the underlying neural processing of sensory and social communication observed between individuals with and without ASD. In addition to highlighting the sensory features associated with ASD, we also emphasize the importance of multisensory processing in building perceptual and cognitive representations, and how deficits in multisensory integration may also be a core characteristic of ASD. PMID:26455789

  7. The Dynamic Multisensory Engram: Neural Circuitry Underlying Crossmodal Object Recognition in Rats Changes with the Nature of Object Experience.

    PubMed

    Jacklin, Derek L; Cloke, Jacob M; Potvin, Alphonse; Garrett, Inara; Winters, Boyer D

    2016-01-27

    Rats, humans, and monkeys demonstrate robust crossmodal object recognition (CMOR), identifying objects across sensory modalities. We have shown that rats' performance of a spontaneous tactile-to-visual CMOR task requires functional integration of perirhinal (PRh) and posterior parietal (PPC) cortices, which seemingly provide visual and tactile object feature processing, respectively. However, research with primates has suggested that PRh is sufficient for multisensory object representation. We tested this hypothesis in rats using a modification of the CMOR task in which multimodal preexposure to the to-be-remembered objects significantly facilitates performance. In the original CMOR task, with no preexposure, reversible lesions of PRh or PPC produced patterns of impairment consistent with modality-specific contributions. Conversely, in the CMOR task with preexposure, PPC lesions had no effect, whereas PRh involvement was robust, proving necessary even for phases of the task that had not required PRh activity when rats lacked preexposure; this pattern was supported by results from c-fos imaging. We suggest that multimodal preexposure alters the circuitry responsible for object recognition, in this case obviating the need for PPC contributions and expanding PRh involvement, consistent with the polymodal nature of PRh connections and results from primates indicating a key role for PRh in multisensory object representation. These findings have significant implications for our understanding of multisensory information processing, suggesting that the nature of an individual's past experience with an object strongly determines the brain circuitry involved in representing that object's multisensory features in memory. The ability to integrate information from multiple sensory modalities is crucial to the survival of organisms living in complex environments. Appropriate responses to behaviorally relevant objects are informed by integration of multisensory object features. We used crossmodal object recognition tasks in rats to study the neurobiological basis of multisensory object representation. When rats had no prior exposure to the to-be-remembered objects, the spontaneous ability to recognize objects across sensory modalities relied on functional interaction between multiple cortical regions. However, prior multisensory exploration of the task-relevant objects remapped cortical contributions, negating the involvement of one region and significantly expanding the role of another. This finding emphasizes the dynamic nature of cortical representation of objects in relation to past experience. Copyright © 2016 the authors.

  8. Enhanced multisensory integration and motor reactivation after active motor learning of audiovisual associations.

    PubMed

    Butler, Andrew J; James, Thomas W; James, Karin Harman

    2011-11-01

    Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent perception and recognition of associations among multiple senses has not been investigated. Twenty participants were included in an fMRI study that explored the impact of active motor learning on subsequent processing of unisensory and multisensory stimuli. Participants were exposed to visuo-motor associations between novel objects and novel sounds either through self-generated actions on the objects or by observing an experimenter produce the actions. Immediately after exposure, accuracy, RT, and BOLD fMRI measures were collected with unisensory and multisensory stimuli in associative perception and recognition tasks. Response times during audiovisual associative and unisensory recognition were enhanced by active learning, as was accuracy during audiovisual associative recognition. The difference in motor cortex activation between old and new associations was greater for the active than the passive group. Furthermore, functional connectivity between visual and motor cortices was stronger after active learning than passive learning. Active learning also led to greater activation of the fusiform gyrus during subsequent unisensory visual perception. Finally, brain regions implicated in audiovisual integration (e.g., STS) showed greater multisensory gain after active learning than after passive learning. Overall, the results show that active motor learning modulates the processing of multisensory associations.

  9. Attention and multisensory modulation argue against total encapsulation.

    PubMed

    de Haas, Benjamin; Schwarzkopf, Dietrich Samuel; Rees, Geraint

    2016-01-01

    Firestone & Scholl (F&S) postulate that vision proceeds without any direct interference from cognition. We argue that this view is extreme and not in line with the available evidence. Specifically, we discuss two well-established counterexamples: Attention directly affects core aspects of visual processing, and multisensory modulations of vision originate on multiple levels, some of which are unlikely to fall "within perception."

  10. Multisensory integration and attention in autism spectrum disorder: evidence from event-related potentials.

    PubMed

    Magnée, Maurice J C M; de Gelder, Beatrice; van Engeland, Herman; Kemner, Chantal

    2011-01-01

    Successful integration of multiple simultaneously perceived sensory signals is crucial for social behavior. Recent findings indicate that this multisensory integration (MSI) can be modulated by attention. Theories of Autism Spectrum Disorders (ASDs) suggest that MSI is affected in this population, while it remains unclear to what extent this is related to impairments in attentional capacity. In the present study, event-related potentials (ERPs) following emotionally congruent and incongruent face-voice pairs were measured in 23 high-functioning, adult ASD individuals and 24 age- and IQ-matched controls. MSI was studied while the attention of the participants was manipulated. ERPs were measured at typical auditory and visual processing peaks, namely, P2 and N170. While controls showed MSI during divided attention and easy selective attention tasks, individuals with ASD showed MSI during easy selective attention tasks only. It was concluded that individuals with ASD are able to process multisensory emotional stimuli, but that this processing is differently modulated by attention mechanisms in these participants, especially those associated with divided attention. This atypical interaction between attention and MSI is also relevant to treatment strategies, with training of multisensory attentional control possibly being more beneficial than conventional sensory integration therapy.

  11. Multisensory System for Fruit Harvesting Robots. Experimental Testing in Natural Scenarios and with Different Kinds of Crops

    PubMed Central

    Fernández, Roemi; Salinas, Carlota; Montes, Héctor; Sarria, Javier

    2014-01-01

    The motivation of this research was to explore the feasibility of detecting and locating fruits from different kinds of crops in natural scenarios. To this end, a unique, modular and easily adaptable multisensory system and a set of associated pre-processing algorithms are proposed. The proposed multisensory rig combines a high resolution colour camera and a multispectral system for the detection of fruits, as well as for the discrimination of the different elements of the plants, and a Time-Of-Flight (TOF) camera that provides fast acquisition of distances, enabling the localisation of the targets in the coordinate space. A controlled lighting system completes the set-up, increasing its flexibility for use in different working conditions. The pre-processing algorithms designed for the proposed multisensory system include a pixel-based classification algorithm that labels areas of interest that belong to fruits and a registration algorithm that combines the results of the aforementioned classification algorithm with the data provided by the TOF camera for the 3D reconstruction of the desired regions. Several experimental tests have been carried out in outdoor conditions in order to validate the capabilities of the proposed system. PMID:25615730
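
    The two pre-processing stages described above lend themselves to a compact illustration. The sketch below, with invented thresholds, camera intrinsics, and band indices (none taken from the paper), shows a generic pixel-based fruit/foliage classification followed by back-projection of the classified pixels through a TOF depth map to obtain 3D target coordinates.

```python
import numpy as np

# Hypothetical sketch of the two pre-processing stages described above:
# (1) pixel-based classification of fruit regions, (2) registration of
# classified pixels with TOF depth data to localise targets in 3D.
# Thresholds, intrinsics, and band indices are illustrative, not the
# authors' calibrated values.

def classify_fruit_pixels(multispectral, red_band=0, nir_band=1, ndvi_max=0.3):
    """Label pixels whose NDVI-like index is low enough to be fruit
    rather than foliage (foliage reflects strongly in near-infrared)."""
    red = multispectral[..., red_band].astype(float)
    nir = multispectral[..., nir_band].astype(float)
    ndvi = (nir - red) / (nir + red + 1e-9)
    return ndvi < ndvi_max  # boolean fruit mask

def localise_targets(mask, depth, fx=525.0, fy=525.0, cx=160.0, cy=120.0):
    """Back-project classified pixels into camera coordinates using
    the TOF depth map and a pinhole model (assumes the colour and TOF
    images are already registered to the same pixel grid)."""
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack([x, y, z])  # one 3D point per fruit pixel

# Toy usage with random arrays standing in for real sensor frames
ms = np.random.rand(240, 320, 2)
depth = np.random.uniform(0.5, 2.0, (240, 320))
points = localise_targets(classify_fruit_pixels(ms), depth)
print(points.shape)
```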

  12. Multi-modal distraction: insights from children's limited attention.

    PubMed

    Matusz, Pawel J; Broadbent, Hannah; Ferrari, Jessica; Forrest, Benjamin; Merkley, Rebecca; Scerif, Gaia

    2015-03-01

    How does the multi-sensory nature of stimuli influence information processing? Cognitive systems with limited selective attention can elucidate these processes. Six-year-olds, 11-year-olds and 20-year-olds engaged in a visual search task that required them to detect a pre-defined coloured shape under conditions of low or high visual perceptual load. On each trial, a peripheral distractor that could be either compatible or incompatible with the current target colour was presented either visually, auditorily or audiovisually. Unlike unimodal distractors, audiovisual distractors elicited reliable compatibility effects across the two levels of load in adults and in the older children, but high visual load significantly reduced distraction for all children, especially the youngest participants. This study provides the first demonstration that multi-sensory distraction has powerful effects on selective attention: Adults and older children alike allocate attention to potentially relevant information across multiple senses. However, poorer attentional resources can, paradoxically, shield the youngest children from the deleterious effects of multi-sensory distraction. Furthermore, we highlight how developmental research can enrich the understanding of distinct mechanisms controlling adult selective attention in multi-sensory environments. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Audio-Tactile Integration in Congenitally and Late Deaf Cochlear Implant Users

    PubMed Central

    Nava, Elena; Bottari, Davide; Villwock, Agnes; Fengler, Ineke; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2014-01-01

    Several studies conducted in mammals and humans have shown that multisensory processing may be impaired following congenital sensory loss and in particular if no experience is achieved within specific early developmental time windows known as sensitive periods. In this study we investigated whether basic multisensory abilities are impaired in hearing-restored individuals with deafness acquired at different stages of development. To this aim, we tested congenitally and late deaf cochlear implant (CI) recipients, age-matched with two groups of hearing controls, on an audio-tactile redundancy paradigm, in which reaction times to unimodal and crossmodal redundant signals were measured. Our results showed that both congenitally and late deaf CI recipients were able to integrate audio-tactile stimuli, suggesting that congenital and acquired deafness does not prevent the development and recovery of basic multisensory processing. However, we found that congenitally deaf CI recipients had a lower multisensory gain compared to their matched controls, which may be explained by their faster responses to tactile stimuli. We discuss this finding in the context of reorganisation of the sensory systems following sensory loss and the possibility that these changes cannot be “rewired” through auditory reafferentation. PMID:24918766

  14. Audio-tactile integration in congenitally and late deaf cochlear implant users.

    PubMed

    Nava, Elena; Bottari, Davide; Villwock, Agnes; Fengler, Ineke; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2014-01-01

    Several studies conducted in mammals and humans have shown that multisensory processing may be impaired following congenital sensory loss and in particular if no experience is achieved within specific early developmental time windows known as sensitive periods. In this study we investigated whether basic multisensory abilities are impaired in hearing-restored individuals with deafness acquired at different stages of development. To this aim, we tested congenitally and late deaf cochlear implant (CI) recipients, age-matched with two groups of hearing controls, on an audio-tactile redundancy paradigm, in which reaction times to unimodal and crossmodal redundant signals were measured. Our results showed that both congenitally and late deaf CI recipients were able to integrate audio-tactile stimuli, suggesting that congenital and acquired deafness does not prevent the development and recovery of basic multisensory processing. However, we found that congenitally deaf CI recipients had a lower multisensory gain compared to their matched controls, which may be explained by their faster responses to tactile stimuli. We discuss this finding in the context of reorganisation of the sensory systems following sensory loss and the possibility that these changes cannot be "rewired" through auditory reafferentation.
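
    A reader unfamiliar with the audio-tactile redundancy paradigm used in the two records above may find it useful to see how multisensory gain is commonly quantified. A standard analysis in this literature, though not necessarily these authors' exact pipeline, tests Miller's race-model inequality: the bimodal RT distribution is compared against the summed unimodal distributions, and violations indicate integration beyond mere statistical facilitation. A minimal sketch with simulated RTs:

```python
import numpy as np

# Minimal sketch of a race-model (Miller) inequality test, the standard
# analysis for redundant-signals paradigms like the one above. RT values
# are simulated; the authors' exact procedure may differ.

def ecdf(rts, t):
    """Empirical cumulative RT distribution evaluated at times t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / rts.size

rt_aud = np.random.normal(350, 40, 200)   # unimodal auditory RTs (ms)
rt_tac = np.random.normal(360, 45, 200)   # unimodal tactile RTs (ms)
rt_at  = np.random.normal(310, 35, 200)   # redundant audio-tactile RTs

t = np.linspace(200, 500, 61)
bound = np.minimum(ecdf(rt_aud, t) + ecdf(rt_tac, t), 1.0)  # race-model bound
violation = ecdf(rt_at, t) - bound

# Positive values mean the bimodal CDF exceeds the summed unimodal CDFs,
# i.e., evidence for integration beyond statistical facilitation.
print("max violation:", violation.max())
```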

  15. Learning multisensory representations for auditory-visual transfer of sequence category knowledge: a probabilistic language of thought approach.

    PubMed

    Yildirim, Ilker; Jacobs, Robert A

    2015-06-01

    If a person is trained to recognize or categorize objects or events using one sensory modality, the person can often recognize or categorize those same (or similar) objects and events via a novel modality. This phenomenon is an instance of cross-modal transfer of knowledge. Here, we study the Multisensory Hypothesis which states that people extract the intrinsic, modality-independent properties of objects and events, and represent these properties in multisensory representations. These representations underlie cross-modal transfer of knowledge. We conducted an experiment evaluating whether people transfer sequence category knowledge across auditory and visual domains. Our experimental data clearly indicate that we do. We also developed a computational model accounting for our experimental results. Consistent with the probabilistic language of thought approach to cognitive modeling, our model formalizes multisensory representations as symbolic "computer programs" and uses Bayesian inference to learn these representations. Because the model demonstrates how the acquisition and use of amodal, multisensory representations can underlie cross-modal transfer of knowledge, and because the model accounts for subjects' experimental performances, our work lends credence to the Multisensory Hypothesis. Overall, our work suggests that people automatically extract and represent objects' and events' intrinsic properties, and use these properties to process and understand the same (and similar) objects and events when they are perceived through novel sensory modalities.
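
    To make the modality-independent representation idea concrete, the toy sketch below scores an observed sequence, regardless of whether it arrived as tones or flashes, against two candidate generating rules and computes a Bayesian posterior over them. The rules, noise model, and uniform prior are illustrative assumptions, far simpler than the paper's program-induction model:

```python
import numpy as np

# Toy sketch of the modality-independent idea: category knowledge is a
# distribution over abstract sequence-generating rules, so the same
# posterior applies whether a sequence arrives as tones or as flashes.
# The two "programs" and the noise level are invented for illustration.

def likelihood(seq, program, eps=0.1):
    """P(observed sequence | program), with probability eps of a
    corrupted element at each position."""
    ideal = program(len(seq))
    matches = np.sum(np.asarray(seq) == ideal)
    return (1 - eps) ** matches * eps ** (len(seq) - matches)

alternate = lambda n: np.arange(n) % 2          # 0,1,0,1,...
repeat    = lambda n: np.zeros(n, dtype=int)    # 0,0,0,0,...

observed = [0, 1, 0, 1, 1, 0]   # could come from audition or vision
like = np.array([likelihood(observed, p) for p in (alternate, repeat)])
posterior = like / like.sum()    # uniform prior over the two programs
print(dict(zip(["alternate", "repeat"], posterior.round(3))))
```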

  16. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    PubMed

    Alais, David; Cass, John

    2010-06-23

    An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ) task. Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally, the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.
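
    The temporal order discrimination thresholds reported above are conventionally estimated by fitting a psychometric function to the proportion of "A first" responses across stimulus onset asynchronies (SOAs); learning then appears as a reduction in the fitted just-noticeable difference (JND) across sessions. The sketch below, with invented data, uses a cumulative Gaussian; this is a standard approach rather than the authors' documented fitting procedure:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hedged sketch of how TOJ thresholds are commonly estimated: fit a
# cumulative Gaussian to the proportion of "stimulus A first" responses
# across SOAs and read the JND off the fitted slope. Data are invented.

soa = np.array([-120, -80, -40, 0, 40, 80, 120])      # ms, A relative to B
p_a_first = np.array([0.05, 0.12, 0.33, 0.52, 0.71, 0.90, 0.97])

def psychometric(x, pss, sigma):
    """Cumulative Gaussian: pss = point of subjective simultaneity,
    sigma sets the slope (temporal resolution)."""
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(psychometric, soa, p_a_first, p0=(0.0, 50.0))
jnd = sigma * norm.ppf(0.75)   # 75%-correct threshold under this model
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```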

  17. Alterations to multisensory and unisensory integration by stimulus competition

    PubMed Central

    Rowland, Benjamin A.; Stanford, Terrence R.; Stein, Barry E.

    2011-01-01

    In environments containing sensory events at competing locations, selecting a target for orienting requires prioritization of stimulus values. Although the superior colliculus (SC) is causally linked to the stimulus selection process, the manner in which SC multisensory integration operates in a competitive stimulus environment is unknown. Here we examined how the activity of visual-auditory SC neurons is affected by placement of a competing target in the opposite hemifield, a stimulus configuration that would, in principle, promote interhemispheric competition for access to downstream motor circuitry. Competitive interactions between the targets were evident in how they altered unisensory and multisensory responses of individual neurons. Responses elicited by a cross-modal stimulus (multisensory responses) proved to be substantially more resistant to competitor-induced depression than were unisensory responses (evoked by the component modality-specific stimuli). Similarly, when a cross-modal stimulus served as the competitor, it exerted considerably more depression than did its individual component stimuli, in some cases producing more depression than predicted by their linear sum. These findings suggest that multisensory integration can help resolve competition among multiple targets by enhancing orientation to the location of cross-modal events while simultaneously suppressing orientation to events at alternate locations. PMID:21957224

  18. Alterations to multisensory and unisensory integration by stimulus competition.

    PubMed

    Pluta, Scott R; Rowland, Benjamin A; Stanford, Terrence R; Stein, Barry E

    2011-12-01

    In environments containing sensory events at competing locations, selecting a target for orienting requires prioritization of stimulus values. Although the superior colliculus (SC) is causally linked to the stimulus selection process, the manner in which SC multisensory integration operates in a competitive stimulus environment is unknown. Here we examined how the activity of visual-auditory SC neurons is affected by placement of a competing target in the opposite hemifield, a stimulus configuration that would, in principle, promote interhemispheric competition for access to downstream motor circuitry. Competitive interactions between the targets were evident in how they altered unisensory and multisensory responses of individual neurons. Responses elicited by a cross-modal stimulus (multisensory responses) proved to be substantially more resistant to competitor-induced depression than were unisensory responses (evoked by the component modality-specific stimuli). Similarly, when a cross-modal stimulus served as the competitor, it exerted considerably more depression than did its individual component stimuli, in some cases producing more depression than predicted by their linear sum. These findings suggest that multisensory integration can help resolve competition among multiple targets by enhancing orientation to the location of cross-modal events while simultaneously suppressing orientation to events at alternate locations.

  19. Neural pathways for visual speech perception

    PubMed Central

    Bernstein, Lynne E.; Liebenthal, Einat

    2014-01-01

    This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA. PMID:25520611

  20. Cortical Hubs Form a Module for Multisensory Integration on Top of the Hierarchy of Cortical Networks

    PubMed Central

    Zamora-López, Gorka; Zhou, Changsong; Kurths, Jürgen

    2009-01-01

    Sensory stimuli entering the nervous system follow particular paths of processing, typically separated (segregated) from the paths of other modal information. However, sensory perception, awareness and cognition emerge from the combination of information (integration). The corticocortical networks of cats and macaque monkeys display three prominent characteristics: (i) modular organisation (facilitating the segregation), (ii) abundant alternative processing paths and (iii) the presence of highly connected hubs. Here, we study in detail the organisation and potential function of the cortical hubs by graph analysis and information theoretical methods. We find that the cortical hubs form a spatially delocalised, but topologically central module with the capacity to integrate multisensory information in a collaborative manner. With this, we resolve the underlying anatomical substrate that supports the simultaneous capacity of the cortex to segregate and to integrate multisensory information. PMID:20428515
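
    The flavor of the hub analysis described above can be conveyed in a few lines of graph code: select the highest-degree nodes and compare the density of the hub subgraph with that of the whole network. The sketch below uses a random placeholder graph rather than the cat or macaque connectivity matrices analyzed in the paper:

```python
import networkx as nx

# Illustrative sketch of the kind of graph analysis described above:
# find high-degree hubs in a cortical network and ask whether they are
# more densely interconnected than the network as a whole. The random
# graph stands in for a real corticocortical connectivity matrix.

G = nx.gnm_random_graph(66, 400, seed=1)   # placeholder cortical graph

degree = dict(G.degree())
hubs = sorted(degree, key=degree.get, reverse=True)[:10]  # top-10 hubs

hub_density = nx.density(G.subgraph(hubs))
overall_density = nx.density(G)

# A hub module "on top of the hierarchy" should show hub-hub density
# well above the network-wide density (a rich-club-like organisation).
print(f"hub-subgraph density {hub_density:.2f} vs overall {overall_density:.2f}")
```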

  1. Olfactory-visual integration facilitates perception of subthreshold negative emotion.

    PubMed

    Novak, Lucas R; Gitelman, Darren R; Schuyler, Brianna; Li, Wen

    2015-10-01

    A fast-growing literature on multisensory emotion integration notwithstanding, the chemical senses, intimately associated with emotion, have been largely overlooked. Moreover, an ecologically highly relevant principle of "inverse effectiveness", rendering maximal integration efficacy with impoverished sensory input, remains to be assessed in emotion integration. Presenting minute, subthreshold negative (vs. neutral) cues in faces and odors, we demonstrated olfactory-visual emotion integration in improved emotion detection (especially among individuals with weaker perception of unimodal negative cues) and response enhancement in the amygdala. Moreover, while perceptual gain for visual negative emotion involved the posterior superior temporal sulcus/pSTS, perceptual gain for olfactory negative emotion engaged both the associative olfactory (orbitofrontal) cortex and amygdala. Dynamic causal modeling (DCM) analysis of fMRI time series further revealed connectivity strengthening among these areas during crossmodal emotion integration. That multisensory (but not low-level unisensory) areas exhibited both enhanced response and region-to-region coupling favors a top-down (vs. bottom-up) account for olfactory-visual emotion integration. Current findings thus confirm the involvement of multisensory convergence areas, while highlighting unique characteristics of olfaction-related integration. Furthermore, successful crossmodal binding of subthreshold aversive cues not only supports the principle of "inverse effectiveness" in emotion integration but also accentuates the automatic, unconscious quality of crossmodal emotion synthesis. Copyright © 2015 Elsevier Ltd. All rights reserved.
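
    For readers unfamiliar with "inverse effectiveness", the standard multisensory enhancement index from the animal literature (not necessarily the metric computed in this study) makes the principle easy to see: the same absolute bimodal advantage yields a proportionally larger gain when the unisensory inputs are weak.

```python
# Sketch of the standard multisensory enhancement index; the response
# values below are made up, and this is not necessarily the metric
# used in the study above.

def enhancement(multi, uni_a, uni_b):
    """Percent gain of the multisensory response over the best
    unisensory response."""
    best_uni = max(uni_a, uni_b)
    return 100.0 * (multi - best_uni) / best_uni

# Inverse effectiveness: proportionally larger gain when the unisensory
# inputs are weak (e.g., subthreshold cues) than when they are strong.
print(enhancement(multi=6.0, uni_a=3.0, uni_b=2.5))     # weak inputs -> 100%
print(enhancement(multi=55.0, uni_a=50.0, uni_b=40.0))  # strong inputs -> 10%
```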

  2. Decentralized Multisensory Information Integration in Neural Systems.

    PubMed

    Zhang, Wen-Hao; Chen, Aihua; Rasch, Malte J; Wu, Si

    2016-01-13

    How multiple sensory cues are integrated in neural circuitry remains a challenge. The common hypothesis is that information integration might be accomplished in a dedicated multisensory integration area receiving feedforward inputs from the modalities. However, recent experimental evidence suggests that it is not a single multisensory brain area, but rather many multisensory brain areas that are simultaneously involved in the integration of information. Why many mutually connected areas should be needed for information integration is puzzling. Here, we investigated theoretically how information integration could be achieved in a distributed fashion within a network of interconnected multisensory areas. Using biologically realistic neural network models, we developed a decentralized information integration system that comprises multiple interconnected integration areas. Studying an example of combining visual and vestibular cues to infer heading direction, we show that such a decentralized system is in good agreement with anatomical evidence and experimental observations. In particular, we show that this decentralized system can integrate information optimally. The decentralized system predicts that optimally integrated information should emerge locally from the dynamics of the communication between brain areas and sheds new light on the interpretation of the connectivity between multisensory brain areas. To extract information reliably from ambiguous environments, the brain integrates multiple sensory cues, which provide different aspects of information about the same entity of interest. Here, we propose a decentralized architecture for multisensory integration. In such a system, no processor is in the center of the network topology and information integration is achieved in a distributed manner through reciprocally connected local processors. Through studying the inference of heading direction with visual and vestibular cues, we show that the decentralized system can integrate information optimally, with the reciprocal connections between processors determining the extent of cue integration. Our model reproduces known multisensory integration behaviors observed in experiments and sheds new light on our understanding of how information is integrated in the brain. Copyright © 2016 Zhang et al.

  3. Decentralized Multisensory Information Integration in Neural Systems

    PubMed Central

    Zhang, Wen-hao; Chen, Aihua

    2016-01-01

    How multiple sensory cues are integrated in neural circuitry remains a challenge. The common hypothesis is that information integration might be accomplished in a dedicated multisensory integration area receiving feedforward inputs from the modalities. However, recent experimental evidence suggests that it is not a single multisensory brain area, but rather many multisensory brain areas that are simultaneously involved in the integration of information. Why many mutually connected areas should be needed for information integration is puzzling. Here, we investigated theoretically how information integration could be achieved in a distributed fashion within a network of interconnected multisensory areas. Using biologically realistic neural network models, we developed a decentralized information integration system that comprises multiple interconnected integration areas. Studying an example of combining visual and vestibular cues to infer heading direction, we show that such a decentralized system is in good agreement with anatomical evidence and experimental observations. In particular, we show that this decentralized system can integrate information optimally. The decentralized system predicts that optimally integrated information should emerge locally from the dynamics of the communication between brain areas and sheds new light on the interpretation of the connectivity between multisensory brain areas. SIGNIFICANCE STATEMENT To extract information reliably from ambiguous environments, the brain integrates multiple sensory cues, which provide different aspects of information about the same entity of interest. Here, we propose a decentralized architecture for multisensory integration. In such a system, no processor is in the center of the network topology and information integration is achieved in a distributed manner through reciprocally connected local processors. Through studying the inference of heading direction with visual and vestibular cues, we show that the decentralized system can integrate information optimally, with the reciprocal connections between processors determining the extent of cue integration. Our model reproduces known multisensory integration behaviors observed in experiments and sheds new light on our understanding of how information is integrated in the brain. PMID:26758843
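
    The optimality benchmark invoked in the two records above is the classic maximum-likelihood cue-combination rule: under independent Gaussian noise, cues are weighted by their inverse variances, and the combined estimate is more reliable than either cue alone. A minimal sketch with illustrative numbers:

```python
import numpy as np

# Minimal sketch of the optimality benchmark referenced above: the
# reliability-weighted (maximum-likelihood) combination of two cues
# under independent Gaussian noise. The decentralized network in the
# paper is shown to approach this benchmark; the numbers here are
# illustrative, not taken from the study.

def ml_combine(est_vis, var_vis, est_vest, var_vest):
    """Inverse-variance weighting of visual and vestibular heading
    estimates; returns the combined estimate and its variance."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    combined = w_vis * est_vis + (1 - w_vis) * est_vest
    combined_var = 1 / (1 / var_vis + 1 / var_vest)
    return combined, combined_var

heading, var = ml_combine(est_vis=10.0, var_vis=4.0, est_vest=16.0, var_vest=8.0)
print(heading, var)   # 12.0 deg; variance 8/3, lower than either cue alone
```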

  4. Semantic integration of differently asynchronous audio-visual information in videos of real-world events in cognitive processing: an ERP study.

    PubMed

    Liu, Baolin; Wu, Guangning; Wang, Zhongning; Ji, Xiang

    2011-07-01

    In the real world, some of the auditory and visual information received by the human brain is temporally asynchronous. How is such information integrated in cognitive processing in the brain? In this paper, we aimed to study the semantic integration of differently asynchronous audio-visual information in cognitive processing using the event-related potential (ERP) method. Subjects were presented with videos of real-world events, in which the auditory and visual information are temporally asynchronous. When the critical action was prior to the sound, sounds incongruous with the preceding critical actions elicited an N400 effect when compared to the congruous condition. This result demonstrates that semantic contextual integration indexed by the N400 also applies to cognitive processing of multisensory information. In addition, the N400 effect occurred earlier in latency than in other visually induced N400 studies, showing that cross-modal information is facilitated in time when contrasted with visual information in isolation. When the sound was prior to the critical action, a larger late positive wave was observed under the incongruous condition compared to the congruous condition. This P600 might represent a reanalysis process, in which the mismatch between the critical action and the preceding sound was evaluated. It is shown that environmental sound may affect the cognitive processing of a visual event. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  5. Unconscious integration of multisensory bodily inputs in the peripersonal space shapes bodily self-consciousness.

    PubMed

    Salomon, Roy; Noel, Jean-Paul; Łukowska, Marta; Faivre, Nathan; Metzinger, Thomas; Serino, Andrea; Blanke, Olaf

    2017-09-01

    Recent studies have highlighted the role of multisensory integration as a key mechanism of self-consciousness. In particular, integration of bodily signals within the peripersonal space (PPS) underlies the experience of the self in a body we own (self-identification) and that is experienced as occupying a specific location in space (self-location), two main components of bodily self-consciousness (BSC). Experiments investigating the effects of multisensory integration on BSC have typically employed supra-threshold sensory stimuli, neglecting the role of unconscious sensory signals in BSC, as tested in other consciousness research. Here, we used psychophysical techniques to test whether multisensory integration of bodily stimuli underlying BSC also occurs for multisensory inputs presented below the threshold of conscious perception. Our results indicate that visual stimuli rendered invisible through continuous flash suppression boost processing of tactile stimuli on the body (Exp. 1), and enhance the perception of near-threshold tactile stimuli (Exp. 2), only once they entered PPS. We then employed unconscious multisensory stimulation to manipulate BSC. Participants were presented with tactile stimulation on their body and with visual stimuli on a virtual body, seen at a distance, which were either visible or rendered invisible. We found that participants reported higher self-identification with the virtual body in the synchronous visuo-tactile stimulation (as compared to asynchronous stimulation; Exp. 3), and shifted their self-location toward the virtual body (Exp. 4), even if stimuli were fully invisible. Our results indicate that multisensory inputs, even outside of awareness, are integrated and affect the phenomenological content of self-consciousness, grounding BSC firmly in the field of psychophysical consciousness studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Neural Dynamics of Audiovisual Synchrony and Asynchrony Perception in 6-Month-Old Infants

    PubMed Central

    Kopp, Franziska; Dietrich, Claudia

    2013-01-01

    Young infants are sensitive to multisensory temporal synchrony relations, but the neural dynamics of temporal interactions between vision and audition in infancy are not well understood. We investigated audiovisual synchrony and asynchrony perception in 6-month-old infants using event-related brain potentials (ERP). In a prior behavioral experiment (n = 45), infants were habituated to an audiovisual synchronous stimulus and tested for recovery of interest by presenting an asynchronous test stimulus in which the visual stream was delayed with respect to the auditory stream by 400 ms. Infants who behaviorally discriminated the change in temporal alignment were included in further analyses. In the EEG experiment (final sample: n = 15), synchronous and asynchronous stimuli (visual delay of 400 ms) were presented in random order. Results show latency shifts in the auditory ERP components N1 and P2 as well as the infant ERP component Nc. Latencies in the asynchronous condition were significantly longer than in the synchronous condition. After video onset but preceding the auditory onset, amplitude modulations propagating from posterior to anterior sites and related to the Pb component of infants’ ERP were observed. Results suggest temporal interactions between the two modalities. Specifically, they point to the significance of anticipatory visual motion for auditory processing, and indicate young infants’ predictive capacities for audiovisual temporal synchrony relations. PMID:23346071

  7. Visual Attentional Engagement Deficits in Children with Specific Language Impairment and Their Role in Real-Time Language Processing

    PubMed Central

    Dispaldro, Marco; Leonard, Laurence B.; Corradi, Nicola; Ruffino, Milena; Bronte, Tiziana; Facoetti, Andrea

    2015-01-01

    In order to become a proficient user of language, infants must detect temporal cues embedded within the noisy acoustic spectra of ongoing speech by efficient attentional engagement. According to the neuro-constructivist approach, a multi-sensory dysfunction of attentional engagement – hampering the temporal sampling of stimuli – might be responsible for language deficits typically shown in children with Specific Language Impairment (SLI). In the present study, the efficiency of visual attentional engagement was investigated in 22 children with SLI and 22 typically developing (TD) children by measuring attentional masking (AM). AM refers to impaired identification of the first of two sequentially presented masked objects (O1 and O2) in which the O1-O2 interval was manipulated. Lexical and grammatical comprehension abilities were also tested in both groups. Children with SLI showed a sluggish engagement of temporal attention, and individual differences in AM accounted for a significant percentage of unique variance in grammatical performance. Our results suggest that an attentional engagement deficit – probably linked to a dysfunction of the right fronto-parietal attentional network – might be a contributing factor in these children’s language impairments. PMID:23154040

  8. Visual attentional engagement deficits in children with specific language impairment and their role in real-time language processing.

    PubMed

    Dispaldro, Marco; Leonard, Laurence B; Corradi, Nicola; Ruffino, Milena; Bronte, Tiziana; Facoetti, Andrea

    2013-09-01

    In order to become a proficient user of language, infants must detect temporal cues embedded within the noisy acoustic spectra of ongoing speech by efficient attentional engagement. According to the neuro-constructivist approach, a multi-sensory dysfunction of attentional engagement - hampering the temporal sampling of stimuli - might be responsible for language deficits typically shown in children with Specific Language Impairment (SLI). In the present study, the efficiency of visual attentional engagement was investigated in 22 children with SLI and 22 typically developing (TD) children by measuring attentional masking (AM). AM refers to impaired identification of the first of two sequentially presented masked objects (O1 and O2) in which the O1-O2 interval was manipulated. Lexical and grammatical comprehension abilities were also tested in both groups. Children with SLI showed a sluggish engagement of temporal attention, and individual differences in AM accounted for a significant percentage of unique variance in grammatical performance. Our results suggest that an attentional engagement deficit - probably linked to a dysfunction of the right fronto-parietal attentional network - might be a contributing factor in these children's language impairments. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Multisensory integration in complete unawareness: evidence from audiovisual congruency priming.

    PubMed

    Faivre, Nathan; Mudrik, Liad; Schwartz, Naama; Koch, Christof

    2014-11-01

    Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter "b" or "m," respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit "6" or "8," respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration. © The Author(s) 2014.

  10. I feel your voice. Cultural differences in the multisensory perception of emotion.

    PubMed

    Tanaka, Akihiro; Koizumi, Ai; Imai, Hisato; Hiramatsu, Saori; Hiramoto, Eriko; de Gelder, Beatrice

    2010-09-01

    Cultural differences in emotion perception have been reported mainly for facial expressions and to a lesser extent for vocal expressions. However, the way in which the perceiver combines auditory and visual cues may itself be subject to cultural variability. Our study investigated cultural differences between Japanese and Dutch participants in the multisensory perception of emotion. A face and a voice, expressing either congruent or incongruent emotions, were presented on each trial. Participants were instructed to judge the emotion expressed in one of the two sources. The effect of to-be-ignored voice information on facial judgments was larger in Japanese than in Dutch participants, whereas the effect of to-be-ignored face information on vocal judgments was smaller in Japanese than in Dutch participants. This result indicates that Japanese people are more attuned than Dutch people to vocal processing in the multisensory perception of emotion. Our findings provide the first evidence that multisensory integration of affective information is modulated by perceivers' cultural background.

  11. The time course of auditory-visual processing of speech and body actions: evidence for the simultaneous activation of an extended neural network for semantic processing.

    PubMed

    Meyer, Georg F; Harrison, Neil R; Wuerger, Sophie M

    2013-08-01

    An extensive network of cortical areas is involved in multisensory object and action recognition. This network draws on inferior frontal, posterior temporal, and parietal areas; activity is modulated by familiarity and the semantic congruency of auditory and visual component signals even if semantic incongruences are created by combining visual and auditory signals representing very different signal categories, such as speech and whole body actions. Here we present results from a high-density ERP study designed to examine the time-course and source location of responses to semantically congruent and incongruent audiovisual speech and body actions to explore whether the network involved in action recognition consists of a hierarchy of sequentially activated processing modules or a network of simultaneously active processing sites. We report two main results: 1) There are no significant early differences in the processing of congruent and incongruent audiovisual action sequences. The earliest difference between congruent and incongruent audiovisual stimuli occurs between 240 and 280 ms after stimulus onset in the left temporal region. Between 340 and 420 ms, semantic congruence modulates responses in central and right frontal areas. Late differences (after 460 ms) occur bilaterally in frontal areas. 2) Source localisation (dipole modelling and LORETA) reveals that an extended network encompassing inferior frontal, temporal, parasagittal, and superior parietal sites is simultaneously active between 180 and 420 ms to process auditory–visual action sequences. Early activation (before 120 ms) can be explained by activity in mainly sensory cortices. The simultaneous activation of an extended network between 180 and 420 ms is consistent with models that posit parallel processing of complex action sequences in frontal, temporal and parietal areas rather than models that postulate hierarchical processing in a sequence of brain regions. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. The Temporal Pole Top-Down Modulates the Ventral Visual Stream During Social Cognition.

    PubMed

    Pehrs, Corinna; Zaki, Jamil; Schlochtermeier, Lorna H; Jacobs, Arthur M; Kuchinke, Lars; Koelsch, Stefan

    2017-01-01

    The temporal pole (TP) has been associated with diverse functions of social cognition and emotion processing. Although the underlying mechanism remains elusive, one possibility is that TP acts as domain-general hub integrating socioemotional information. To test this, 26 participants were presented with 60 empathy-evoking film clips during fMRI scanning. The film clips were preceded by a linguistic sad or neutral context and half of the clips were accompanied by sad music. In line with its hypothesized role, TP was involved in the processing of sad context and furthermore tracked participants' empathic concern. To examine the neuromodulatory impact of TP, we applied nonlinear dynamic causal modeling to a multisensory integration network from previous work consisting of superior temporal gyrus (STG), fusiform gyrus (FG), and amygdala, which was extended by an additional node in the TP. Bayesian model comparison revealed a gating of STG and TP on fusiform-amygdalar coupling and an increase of TP to FG connectivity during the integration of contextual information. Moreover, these backward projections were strengthened by emotional music. The findings indicate that during social cognition, TP integrates information from different modalities and top-down modulates lower-level perceptual areas in the ventral visual stream as a function of integration demands. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Parameters of semantic multisensory integration depend on timing and modality order among people on the autism spectrum: evidence from event-related potentials.

    PubMed

    Russo, N; Mottron, L; Burack, J A; Jemel, B

    2012-07-01

    Individuals with autism spectrum disorders (ASD) report difficulty integrating simultaneously presented visual and auditory stimuli (Iarocci & McDonald, 2006), albeit showing enhanced perceptual processing of unisensory stimuli, as well as an enhanced role of perception in higher-order cognitive tasks (Enhanced Perceptual Functioning (EPF) model; Mottron, Dawson, Soulières, Hubert, & Burack, 2006). Individuals with an ASD also integrate auditory-visual inputs over longer periods of time than matched typically developing (TD) peers (Kwakye, Foss-Feig, Cascio, Stone & Wallace, 2011). To tease apart this dichotomy of both extended multisensory processing and enhanced perceptual processing, we used behavioral and electrophysiological measurements of audio-visual integration among persons with ASD. Thirteen TD participants and 14 autistics matched on IQ completed a forced-choice multisensory semantic congruence task requiring speeded responses regarding the congruence or incongruence of animal sounds and pictures. Stimuli were presented simultaneously or sequentially at various stimulus onset asynchronies in both auditory-first and visual-first presentations. No group differences were noted in reaction time (RT) or accuracy. The latency at which congruent and incongruent waveforms diverged was the component of interest. In simultaneous presentations, congruent and incongruent waveforms diverged earlier (circa 150 ms) among persons with ASD than among TD individuals (around 350 ms). In sequential presentations, asymmetries in the timing of neuronal processing were noted in ASD which depended on stimulus order, but these were consistent with the nature of specific perceptual strengths in this group. These findings extend the Enhanced Perceptual Functioning Model to the multisensory domain, and provide a more nuanced context for interpreting ERP findings of impaired semantic processing in ASD. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. A Novel Multisensory Integration Task Reveals Robust Deficits in Rodent Models of Schizophrenia: Converging Evidence for Remediation via Nicotinic Receptor Stimulation of Inhibitory Transmission in the Prefrontal Cortex.

    PubMed

    Cloke, Jacob M; Nguyen, Robin; Chung, Beryl Y T; Wasserman, David I; De Lisio, Stephanie; Kim, Jun Chul; Bailey, Craig D C; Winters, Boyer D

    2016-12-14

    Atypical multisensory integration is an understudied cognitive symptom in schizophrenia. Procedures to evaluate multisensory integration in rodent models are lacking. We developed a novel multisensory object oddity (MSO) task to assess multisensory integration in ketamine-treated rats, a well established model of schizophrenia. Ketamine-treated rats displayed a selective MSO task impairment with tactile-visual and olfactory-visual sensory combinations, whereas basic unisensory perception was unaffected. Orbitofrontal cortex (OFC) administration of nicotine or ABT-418, an α4β2 nicotinic acetylcholine receptor (nAChR) agonist, normalized MSO task performance in ketamine-treated rats and this effect was blocked by GABA-A receptor antagonism. GABAergic currents were also decreased in OFC of ketamine-treated rats and were normalized by activation of α4β2 nAChRs. Furthermore, parvalbumin (PV) immunoreactivity was decreased in the OFC of ketamine-treated rats. Accordingly, silencing of PV interneurons in OFC of PV-Cre mice using DREADDs (Designer Receptors Exclusively Activated by Designer Drugs) selectively impaired MSO task performance and this was reversed by ABT-418. Likewise, clozapine-N-oxide-induced inhibition of PV interneurons in brain slices was reversed by activation of α4β2 nAChRs. These findings strongly imply a role for prefrontal GABAergic transmission in the integration of multisensory object features, a cognitive process with relevance to schizophrenia. Accordingly, nAChR agonism, which improves various facets of cognition in schizophrenia, reversed the severe MSO task impairment in this study and appears to do so via a GABAergic mechanism. Interactions between GABAergic and nAChR receptor systems warrant further investigation for potential therapeutic applications. The novel behavioral procedure introduced in the current study is acutely sensitive to schizophrenia-relevant cognitive impairment and should prove highly valuable for such research. Adaptive behaviors are driven by integration of information from different sensory modalities. Multisensory integration is disrupted in patients with schizophrenia, but little is known about the neural basis of this cognitive symptom. Development and validation of multisensory integration tasks for animal models is essential given the strong link between functional outcome and cognitive impairment in schizophrenia. We present a novel multisensory object oddity procedure that detects selective multisensory integration deficits in a rat model of schizophrenia using various combinations of sensory modalities. Moreover, converging data are consistent with a nicotinic-GABAergic mechanism of multisensory integration in the prefrontal cortex, results with strong clinical relevance to the study of cognitive impairment and treatment in schizophrenia. Copyright © 2016 the authors 0270-6474/16/3612571-16$15.00/0.

  15. The multisensory body revealed through its cast shadows.

    PubMed

    Pavani, Francesco; Galfano, Giovanni

    2015-01-01

    One key issue when conceiving the body as a multisensory object is how the cognitive system integrates visible instances of the self and other bodies with one's own somatosensory processing, to achieve self-recognition and body ownership. Recent research has strongly suggested that shadows cast by our own body have a special status for cognitive processing, directing attention to the body in a fast and highly specific manner. The aim of the present article is to review the most recent scientific contributions addressing how body shadows affect both sensory/perceptual and attentional processes. The review examines three main points: (1) body shadows as a special window to investigate the construction of multisensory body perception; (2) experimental paradigms and related findings; (3) open questions and future trajectories. The reviewed literature suggests that shadows cast by one's own body promote binding between personal and extrapersonal space and elicit automatic orienting of attention toward the body-part casting the shadow. Future research should address whether the effects exerted by body shadows are similar to those observed when observers are exposed to other visual instances of their body. The results will further clarify the processes underlying the merging of vision and somatosensation when creating body representations.

  16. The multisensory body revealed through its cast shadows

    PubMed Central

    Pavani, Francesco; Galfano, Giovanni

    2015-01-01

    One key issue when conceiving the body as a multisensory object is how the cognitive system integrates visible instances of the self and other bodies with one’s own somatosensory processing, to achieve self-recognition and body ownership. Recent research has strongly suggested that shadows cast by our own body have a special status for cognitive processing, directing attention to the body in a fast and highly specific manner. The aim of the present article is to review the most recent scientific contributions addressing how body shadows affect both sensory/perceptual and attentional processes. The review examines three main points: (1) body shadows as a special window to investigate the construction of multisensory body perception; (2) experimental paradigms and related findings; (3) open questions and future trajectories. The reviewed literature suggests that shadows cast by one’s own body promote binding between personal and extrapersonal space and elicit automatic orienting of attention toward the body-part casting the shadow. Future research should address whether the effects exerted by body shadows are similar to those observed when observers are exposed to other visual instances of their body. The results will further clarify the processes underlying the merging of vision and somatosensation when creating body representations. PMID:26042079

  17. Brain mechanisms in religion and spirituality: An integrative predictive processing framework.

    PubMed

    van Elk, Michiel; Aleman, André

    2017-02-01

    We present the theory of predictive processing as a unifying framework to account for the neurocognitive basis of religion and spirituality. Our model is substantiated by discussing four different brain mechanisms that play a key role in religion and spirituality: temporal brain areas are associated with religious visions and ecstatic experiences; multisensory brain areas and the default mode network are involved in self-transcendent experiences; the Theory of Mind network is associated with prayer experiences and over-attribution of intentionality; top-down mechanisms instantiated in the anterior cingulate cortex and the medial prefrontal cortex could be involved in acquiring and maintaining intuitive supernatural beliefs. We compare the predictive processing model with two-systems accounts of religion and spirituality, by highlighting the central role of prediction error monitoring. We conclude by presenting novel predictions for future research and by discussing the philosophical and theological implications of neuroscientific research on religion and spirituality. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Pay Attention!: Sluggish Multisensory Attentional Shifting as a Core Deficit in Developmental Dyslexia.

    PubMed

    Krause, Margaret B

    2015-11-01

    The aim of this review is to provide a background on the neurocognitive aspects of the reading process and review neuroscientific studies of individuals with developmental dyslexia, which provide evidence for amodal processing deficits. Hari, Renvall, and Tanskanen (2001) propose amodal sluggish attentional shifting (SAS) as a causal factor for temporal processing deficits in dyslexia. Undergirding this theory is the notion that when dyslexics are faced with rapid sequences of stimuli, their automatic attentional systems fail to disengage efficiently, which leads to difficulty when moving from one item to the next (Lallier et al.). This results in atypical perception of rapid stimulus sequences. Until recently, the SAS theory, particularly the examination of amodal attentional deficits, was studied solely through the use of behavioural measures (Facoetti et al.; Facoetti, Lorusso, Cattaneo, Galli, & Molteni). This paper examines evidence within the literature that provides a basis for further exploration of amodal SAS as an underlying deficit in developmental dyslexia. Copyright © 2015 John Wiley & Sons, Ltd.

  19. Women process multisensory emotion expressions more efficiently than men.

    PubMed

    Collignon, O; Girard, S; Gosselin, F; Saint-Amour, D; Lepore, F; Lassonde, M

    2010-01-01

    Despite claims in the popular press, experiments investigating whether female observers are more efficient than male observers at processing expressions of emotion have produced inconsistent findings. In the present study, participants were asked to categorize fear and disgust expressions displayed auditorily, visually, or audio-visually. Results revealed an advantage of women in all the conditions of stimulus presentation. We also observed more nonlinear probabilistic summation in the bimodal conditions in female than in male observers, indicating greater neural integration of different sources of sensory-emotional information. These findings indicate robust differences between genders in the multisensory perception of emotion expression.

  20. Content congruency and its interplay with temporal synchrony modulate integration between rhythmic audiovisual streams.

    PubMed

    Su, Yi-Huang

    2014-01-01

    Both lower-level stimulus factors (e.g., temporal proximity) and higher-level cognitive factors (e.g., content congruency) are known to influence multisensory integration. The former can direct attention in a converging manner, and the latter can indicate whether information from the two modalities belongs together. The present research investigated whether and how these two factors interacted in the perception of rhythmic, audiovisual (AV) streams derived from a human movement scenario. Congruency here was based on sensorimotor correspondence pertaining to rhythm perception. Participants attended to bimodal stimuli consisting of a humanlike figure moving regularly to a sequence of auditory beats, and detected a possible auditory temporal deviant. The figure moved either downwards (congruently) or upwards (incongruently) to the downbeat, while in both situations the movement was either synchronous with the beat, or lagging behind it. Greater cross-modal binding was expected to hinder deviant detection. Results revealed poorer detection for congruent than for incongruent streams, suggesting stronger integration in the former. False alarms increased in asynchronous stimuli only for congruent streams, indicating a greater tendency for deviant report due to visual capture of asynchronous auditory events. In addition, a greater increase in perceived synchrony was associated with a greater reduction in false alarms for congruent streams, while the pattern was reversed for incongruent ones. These results demonstrate that content congruency as a top-down factor not only promotes integration, but also modulates bottom-up effects of synchrony. Results are also discussed regarding how theories of integration and attentional entrainment may be combined in the context of rhythmic multisensory stimuli.

  1. Age-related differences in audiovisual interactions of semantically different stimuli.

    PubMed

    Viggiano, Maria Pia; Giovannelli, Fabio; Giganti, Fiorenza; Rossi, Arianna; Metitieri, Tiziana; Rebai, Mohamed; Guerrini, Renzo; Cincotta, Massimo

    2017-01-01

    Converging results have shown that adults benefit from congruent multisensory stimulation in the identification of complex stimuli, whereas the developmental trajectory of the ability to integrate multisensory inputs in children is less well understood. In this study we explored the effects of audiovisual semantic congruency on the identification of visually presented stimuli belonging to different categories, using a cross-modal approach. Four groups of children ranging in age from 6 to 13 years, and adults, were administered an object identification task of visually presented pictures depicting living and nonliving entities. Stimuli were presented in visual, congruent audiovisual, incongruent audiovisual, and noise conditions. Results showed that children under 12 years of age did not benefit from multisensory presentation in terms of faster identification. In children, the incongruent audiovisual condition had an interfering effect, especially for the identification of living things. These data suggest that the facilitating effect of audiovisual interaction on semantic processing undergoes developmental changes and that the consolidation of adult-like processing of multisensory stimuli begins in late childhood. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. Impairments of Multisensory Integration and Cross-Sensory Learning as Pathways to Dyslexia

    PubMed Central

    Hahn, Noemi; Foxe, John J.; Molholm, Sophie

    2014-01-01

    Two sensory systems are intrinsic to learning to read. Written words enter the brain through the visual system and associated sounds through the auditory system. The task before the beginning reader is quite basic. She must learn correspondences between orthographic tokens and phonemic utterances, and she must do this to the point that there is seamless automatic ‘connection’ between these sensorially distinct units of language. It is self-evident then that learning to read requires formation of cross-sensory associations to the point that deeply encoded multisensory representations are attained. While the majority of individuals manage this task to a high degree of expertise, some struggle to attain even rudimentary capabilities. Why do dyslexic individuals, who learn well in myriad other domains, fail at this particular task? Here, we examine the literature as it pertains to multisensory processing in dyslexia. We find substantial support for multisensory deficits in dyslexia, and make the case that to fully understand its neurological basis, it will be necessary to thoroughly probe the integrity of auditory-visual integration mechanisms. PMID:25265514

  3. Nonvisual Multisensory Impairment of Body Perception in Anorexia Nervosa: A Systematic Review of Neuropsychological Studies

    PubMed Central

    Gaudio, Santino; Brooks, Samantha Jane; Riva, Giuseppe

    2014-01-01

    Background: Body image distortion is a central symptom of Anorexia Nervosa (AN). Although corporeal awareness is multisensory, the majority of AN studies have investigated only visual misperception. We systematically reviewed AN studies that have investigated different nonvisual sensory inputs using an integrative multisensory approach to body perception. We also discuss the findings in the light of AN neuroimaging evidence. Methods: PubMed and PsycINFO were searched until March 2014. To be included in the review, studies were required to investigate a sample of patients with current or past AN along with a control group, and to use tasks that directly elicited one or more nonvisual sensory domains. Results: Thirteen studies were included, covering a total of 223 people with current or past AN and 273 control subjects. Overall, the results show impairment in the tactile and proprioceptive domains of body perception in AN patients. Interoception and multisensory integration have rarely been explored directly in AN patients. A limitation of this review is the relatively small amount of literature available. Conclusions: AN patients show a multisensory impairment of body perception that goes beyond visual misperception and involves tactile and proprioceptive sensory components. Furthermore, impairment of tactile and proprioceptive components may be associated with parietal cortex alterations in AN patients. Further research, using multisensory approaches as well as neuroimaging techniques, is needed to better define the complexity of body image distortion in AN. Key Findings: The review suggests an altered capacity of AN patients to process and integrate bodily signals: body parts are experienced as dissociated from their holistic and perceptive dimensions. Specifically, it is likely that not only perception but also memory, in particular sensorimotor/proprioceptive memory, shapes bodily experience in patients with AN. PMID:25303480

  4. Exogenous spatial attention decreases audiovisual integration.

    PubMed

    Van der Stoep, N; Van der Stigchel, S; Nijboer, T C W

    2015-02-01

    Multisensory integration (MSI) and spatial attention are both mechanisms through which the processing of sensory information can be facilitated. Studies on the interaction between spatial attention and MSI have mainly focused on the interaction between endogenous spatial attention and MSI. Most of these studies have shown that endogenously attending a multisensory target enhances MSI. It is currently unclear, however, whether and how exogenous spatial attention and MSI interact. In the current study, we investigated the interaction between these two important bottom-up processes in two experiments. In Experiment 1 the target location was task-relevant, and in Experiment 2 the target location was task-irrelevant. Valid or invalid exogenous auditory cues were presented before the onset of unimodal auditory, unimodal visual, and audiovisual targets. We observed reliable cueing effects and multisensory response enhancement in both experiments. To examine whether audiovisual integration was influenced by exogenous spatial attention, the amount of race model violation was compared between exogenously attended and unattended targets. In both Experiment 1 and Experiment 2, a decrease in MSI was observed when audiovisual targets were exogenously attended, compared to when they were not. The interaction between exogenous attention and MSI was less pronounced in Experiment 2. Therefore, our results indicate that exogenous attention diminishes MSI when spatial orienting is relevant. The results are discussed in terms of models of multisensory integration and attention.
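
    The race model analysis referred to here can be sketched briefly. Below is a minimal illustration, in Python with NumPy, of Miller's (1982) race model inequality: the multisensory reaction-time CDF is compared against the sum of the unisensory CDFs, and any excess over that bound counts as violation, i.e., evidence for integration. All reaction times and parameters below are simulated, not the study's data.

    ```python
    # A minimal sketch of a race model inequality test (Miller, 1982):
    # P(RT_AV <= t) is compared against P(RT_A <= t) + P(RT_V <= t).
    # Positive excess over the bound means audiovisual responses are
    # faster than any race of independent unisensory processes allows.
    # All reaction times below are simulated, not the study's data.
    import numpy as np

    def ecdf(rts, t_grid):
        """Empirical CDF of reaction times evaluated on a time grid."""
        rts = np.sort(np.asarray(rts))
        return np.searchsorted(rts, t_grid, side="right") / len(rts)

    rng = np.random.default_rng(0)
    rt_a = rng.normal(320, 40, 200)    # unimodal auditory RTs (ms, toy)
    rt_v = rng.normal(350, 45, 200)    # unimodal visual RTs (ms, toy)
    rt_av = rng.normal(280, 35, 200)   # audiovisual RTs (ms, toy)

    t_grid = np.linspace(150, 500, 100)
    race_bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    violation = ecdf(rt_av, t_grid) - race_bound

    print("max race model violation:", violation.max().round(3))
    ```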

  5. Functional near-infrared spectroscopy (fNIRS) brain imaging of multi-sensory integration during computerized dynamic posturography in middle-aged and older adults.

    PubMed

    Lin, Chia-Cheng; Barker, Jeffrey W; Sparto, Patrick J; Furman, Joseph M; Huppert, Theodore J

    2017-04-01

    Studies suggest that aging affects the sensory re-weighting process, but neuroimaging evidence is minimal. Functional near-infrared spectroscopy (fNIRS) is a novel neuroimaging tool that can detect brain activity during dynamic movement conditions. In this study, fNIRS was used to investigate hemodynamic changes in the frontal-lateral, temporal-parietal, and occipital regions of interest (ROIs) during four sensory integration conditions that manipulated visual and somatosensory feedback in 15 middle-aged and 15 older adults. The results showed that the temporal-parietal ROI was activated more when somatosensory and visual information were absent in both groups, indicating reliance on vestibular input alone for maintaining balance. While both middle-aged and older adults had greater activity in most ROIs during changes in the sensory conditions, the older adults showed greater increases in the occipital and frontal-lateral ROIs. These findings suggest a cortical component to sensory re-weighting that is more distributed and requires greater attention in older adults.

  6. Voice over: Audio-visual congruency and content recall in the gallery setting

    PubMed Central

    Fairhurst, Merle T.; Scott, Minnie; Deroy, Ophelia

    2017-01-01

    Experimental research has shown that pairs of stimuli which are congruent and assumed to ‘go together’ are recalled more effectively than an item presented in isolation. Will this multisensory memory benefit occur when stimuli are richer and longer, in an ecological setting? In the present study, we focused on an everyday situation of audio-visual learning and manipulated the relationship between audio guide tracks and viewed portraits in the galleries of the Tate Britain. By varying the gender and narrative style of the voice-over, we examined how the perceived congruency and assumed unity of the audio guide track with painted portraits affected subsequent recall. We show that tracks perceived as best matching the viewed portraits led to greater recall of both sensory and linguistic content. We provide the first evidence that manipulating crossmodal congruence and unity assumptions can effectively impact memory in a multisensory ecological setting, even in the absence of precise temporal alignment between sensory cues. PMID:28636667

  8. Multi-Sensory Features for Personnel Detection at Border Crossings

    DTIC Science & Technology

    2011-07-08

    challenging problem. Video sensors consume high amounts of power and require a large volume for storage. Hence, it is preferable to use non-imaging sensors...temporal distribution of gait beats [5]. At border crossings, animals such as mules, horses, or donkeys are often known to carry loads. Animal hoof...field, passive ultrasonic, sonar, and both infrared and visible video sensors. Each sensor suite is placed along the path with a spacing of 40 to

  9. Evolving spatio-temporal data machines based on the NeuCube neuromorphic framework: Design methodology and selected applications.

    PubMed

    Kasabov, Nikola; Scott, Nathan Matthew; Tu, Enmei; Marks, Stefan; Sengupta, Neelava; Capecci, Elisa; Othman, Muhaini; Doborjeh, Maryam Gholami; Murli, Norhanifah; Hartono, Reggio; Espinosa-Ramos, Josafath Israel; Zhou, Lei; Alvi, Fahad Bashir; Wang, Grace; Taylor, Denise; Feigin, Valery; Gulyaev, Sergei; Mahmoud, Mahmoud; Hou, Zeng-Guang; Yang, Jie

    2016-06-01

    The paper describes a new type of evolving connectionist system (ECOS): evolving spatio-temporal data machines based on neuromorphic, brain-like information processing principles (eSTDM). These are multi-modular computer systems designed to deal with large and fast spatio/spectro-temporal data using spiking neural networks (SNN) as their major processing modules. ECOS, and eSTDM in particular, can learn incrementally from data streams, can include 'on the fly' new input variables, new output class labels, or regression outputs, can continuously adapt their structure and functionality, and can be visualised and interpreted for new knowledge discovery and for a better understanding of the data and the processes that generated it. eSTDM can be used for early event prediction because an SNN can spike early, before the whole input vectors it was trained on have been presented. A framework for building eSTDM, called NeuCube, is presented along with a design methodology for building eSTDM with it. Implementations of this framework in MATLAB, Java, and PyNN (Python) are presented; the latter facilitates the use of neuromorphic hardware platforms to run the eSTDM. Selected examples are given of eSTDM for pattern recognition and early event prediction on EEG data, fMRI data, multisensory seismic data, ecological data, climate data, and audio-visual data. Future directions are discussed, including extension of the NeuCube framework for building neurogenetic eSTDM and new applications of eSTDM. Copyright © 2015 Elsevier Ltd. All rights reserved.
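
    For readers unfamiliar with SNNs, the sketch below shows a leaky integrate-and-fire neuron, the elementary unit on which NeuCube-style spiking systems build. It is an illustrative toy, not NeuCube's implementation; the time constant, threshold, and input current are all assumed values.

    ```python
    # A toy leaky integrate-and-fire (LIF) neuron, the elementary unit of
    # spiking neural networks such as those used in NeuCube-style eSTDM.
    # Illustrative only -- not NeuCube's implementation; the time constant,
    # threshold and input current are all assumed values.
    import numpy as np

    def lif_spikes(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
        """Simulate one LIF neuron; return a binary spike train."""
        v = 0.0
        spikes = np.zeros_like(input_current)
        for i, current in enumerate(input_current):
            v += (dt / tau) * (-v + current)   # leaky integration of input
            if v >= v_thresh:                  # threshold crossing -> spike
                spikes[i] = 1.0
                v = v_reset                    # membrane reset after spike
        return spikes

    # Step input: silence, then a supra-threshold current, then silence.
    drive = np.concatenate([np.zeros(50), 1.5 * np.ones(100), np.zeros(50)])
    print("spike count:", int(lif_spikes(drive).sum()))
    ```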

  10. Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review

    PubMed Central

    Hidaka, Souta; Teramoto, Wataru; Sugita, Yoichi

    2015-01-01

    Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that multisensory inputs (vision, audition, tactile sensation, and so on) can interact with each other perceptually in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view of crossmodal interactions holds that vision is superior to audition in spatial processing, whereas audition is dominant over vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound can have a driving effect on visual motion perception. Moreover, studies of perceptual associative learning have reported that, after an association is established between a sound sequence without spatial information and visual motion information, the sound sequence alone can trigger visual motion perception. Other sensory information, such as motor action or smell, has exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns can be observed in several brain areas, including the motion processing areas, for spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information interacts mutually in spatiotemporal processing of the external world and that common perceptual and neural mechanisms underlie spatiotemporal processing. PMID:26733827

  11. Brain dynamics in ASD during movie-watching show idiosyncratic functional integration and segregation.

    PubMed

    Bolton, Thomas A W; Jochaut, Delphine; Giraud, Anne-Lise; Van De Ville, Dimitri

    2018-06-01

    To refine our understanding of autism spectrum disorders (ASD), studies of the brain in dynamic, multimodal and ecological experimental settings are required. One way to achieve this is to compare the neural responses of ASD and typically developing (TD) individuals when viewing a naturalistic movie, but the temporal complexity of the stimulus hampers this task, and the presence of intrinsic functional connectivity (FC) may overshadow movie-driven fluctuations. Here, we detected inter-subject functional correlation (ISFC) transients to disentangle movie-induced functional changes from underlying resting-state activity while probing FC dynamically. When considering the number of significant ISFC excursions triggered by the movie across the brain, connections between remote functional modules were more heterogeneously engaged in the ASD population. Dynamically tracking the temporal profiles of those ISFC changes and tying them to specific movie subparts, this idiosyncrasy in ASD responses was then shown to involve functional integration and segregation mechanisms such as response inhibition, background suppression, or multisensory integration, while low-level visual processing was spared. Through the application of a new framework for the study of dynamic experimental paradigms, our results reveal a temporally localized idiosyncrasy in ASD responses, specific to short-lived episodes of long-range functional interplays. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
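
    The ISFC measure used here follows the general logic of correlating one subject's regional time series with the leave-one-out average of the other subjects, so intrinsic fluctuations wash out and only stimulus-locked covariance survives. A minimal sketch of that computation on simulated data with hypothetical shapes:

    ```python
    # A minimal sketch of inter-subject functional correlation (ISFC):
    # each subject's regional time series is correlated with the average
    # of all *other* subjects, so intrinsic fluctuations wash out and
    # only stimulus-locked (movie-driven) covariance survives. Shapes
    # and data are hypothetical.
    import numpy as np

    def isfc(data):
        """data: (n_subjects, n_regions, n_timepoints) -> mean ISFC matrix."""
        n_sub, n_reg, _ = data.shape
        mats = []
        for s in range(n_sub):
            others = data[np.arange(n_sub) != s].mean(axis=0)  # leave-one-out
            full = np.corrcoef(np.vstack([data[s], others]))
            mats.append(full[:n_reg, n_reg:])  # subject-vs-group block
        return np.mean(mats, axis=0)

    rng = np.random.default_rng(1)
    shared = rng.standard_normal((5, 300))                    # movie-locked part
    data = shared[None] + 0.5 * rng.standard_normal((10, 5, 300))
    print(np.diag(isfc(data)).round(2))   # high values on the diagonal
    ```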

  12. Multisensory Integration of Sounds and Vibrotactile Stimuli in Processing Streams for “What” and “Where”

    PubMed Central

    Renier, Laurent A.; Anurova, Irina; De Volder, Anne G.; Carlson, Synnöve; VanMeter, John; Rauschecker, Josef P.

    2012-01-01

    The segregation between cortical pathways for the identification and localization of objects is thought of as a general organizational principle in the brain. Yet, little is known about the unimodal versus multimodal nature of these processing streams. The main purpose of the present study was to test whether the auditory and tactile dual pathways converged into specialized multisensory brain areas. We used functional magnetic resonance imaging (fMRI) to compare directly in the same subjects the brain activation related to localization and identification of comparable auditory and vibrotactile stimuli. Results indicate that the right inferior frontal gyrus (IFG) and both left and right insula were more activated during identification conditions than during localization in both touch and audition. The reverse dissociation was found for the left and right inferior parietal lobules (IPL), the left superior parietal lobule (SPL) and the right precuneus-SPL, which were all more activated during localization conditions in the two modalities. We propose that specialized areas in the right IFG and the left and right insula are multisensory operators for the processing of stimulus identity whereas parts of the left and right IPL and SPL are specialized for the processing of spatial attributes independently of sensory modality. PMID:19726653

  13. Effects of congruent and incongruent visual cues on speech perception and brain activity in cochlear implant users.

    PubMed

    Song, Jae-Jin; Lee, Hyo-Jeong; Kang, Hyejin; Lee, Dong Soo; Chang, Sun O; Oh, Seung Ha

    2015-03-01

    While deafness-induced plasticity has been investigated in the visual and auditory domains, not much is known about language processing in audiovisual multimodal environments for patients with restored hearing via cochlear implant (CI) devices. Here, we examined the effect of agreeing or conflicting visual inputs on auditory processing in deaf patients equipped with degraded artificial hearing. Ten post-lingually deafened CI users with good performance, along with matched control subjects, underwent H2(15)O positron emission tomography scans while carrying out a behavioral task requiring the extraction of speech information from unimodal auditory stimuli, bimodal audiovisual congruent stimuli, and incongruent stimuli. Regardless of congruency, the control subjects demonstrated activation of the auditory and visual sensory cortices, as well as the superior temporal sulcus, the classical multisensory integration area, indicating a bottom-up multisensory processing strategy. Compared to CI users, the control subjects exhibited activation of the right ventral premotor-supramarginal pathway. In contrast, CI users primarily activated the visual cortices, more in the congruent audiovisual condition than in the null condition. In addition, compared to controls, CI users displayed an activation focus in the right amygdala for congruent audiovisual stimuli. The most notable difference between the two groups was an activation focus in the left inferior frontal gyrus in CI users confronted with incongruent audiovisual stimuli, suggesting top-down cognitive modulation for audiovisual conflict. Correlation analysis revealed that good speech performance was positively correlated with right amygdala activity for the congruent condition, but negatively correlated with bilateral visual cortex activity regardless of congruency. Taken together, these results suggest that for multimodal inputs, cochlear implant users are more vision-reliant when processing congruent stimuli and are more disturbed by visual distractors when confronted with incongruent audiovisual stimuli. To cope with this multimodal conflict, CI users activate the left inferior frontal gyrus to adopt a top-down cognitive modulation pathway, whereas normal-hearing individuals primarily adopt a bottom-up strategy.

  14. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    PubMed

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

    The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to the unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition) or simultaneously with a temporally congruent, but otherwise uninformative, visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. These findings constitute a first step towards exploiting multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective within an appropriate range of unimodal performance. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Increase of frontal neuronal activity in chronic neglect after training in virtual reality.

    PubMed

    Ekman, U; Fordell, H; Eriksson, J; Lenfeldt, N; Wåhlin, A; Eklund, A; Malm, J

    2018-05-16

    A third of patients with stroke acquire spatial neglect, which is associated with poor rehabilitation outcome. New, effective rehabilitation interventions are needed; scanning training combined with multisensory stimulation has been suggested to enhance the rehabilitation effect. Accordingly, we have designed a virtual-reality-based scanning training that combines visual, audio and sensori-motor stimulation, called RehAtt®. Effects were previously shown in behavioural tests and activities of daily living. Here, we use fMRI to evaluate the change in brain activity during Posner's cueing task (an attention task) after RehAtt® intervention in patients with chronic neglect. Twelve patients (mean age = 72.7 years, SD = 6.1) with chronic neglect (persistent symptoms >6 months) performed the intervention three times per week for 5 weeks, 15 hours in total. Training effects on brain activity were evaluated using fMRI task-evoked responses during Posner's cueing task before and after the intervention. Patients improved their performance in the Posner fMRI task. In addition, patients increased their task-evoked brain activity after the VR intervention in an extended network including prefrontal and temporal cortex during attentional cueing, but showed no training effects during target presentation. This pilot study demonstrates that a novel multisensory VR intervention has the potential to benefit patients with chronic neglect in terms of both behaviour and brain changes. Specifically, the fMRI results show that strategic processes (top-down control during attentional cueing) were enhanced by the intervention. The findings increase knowledge of the plasticity processes underlying the positive rehabilitation effects of RehAtt® in chronic neglect. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  16. Electrophysiological evidence for a self-processing advantage during audiovisual speech integration.

    PubMed

    Treille, Avril; Vilain, Coriandre; Kandel, Sonia; Sato, Marc

    2017-09-01

    Previous electrophysiological studies have provided strong evidence for early multisensory integrative mechanisms during audiovisual speech perception. From these studies, one unanswered issue is whether hearing our own voice and seeing our own articulatory gestures facilitate speech perception, possibly through a better processing and integration of sensory inputs with our own sensory-motor knowledge. The present EEG study examined the impact of self-knowledge during the perception of auditory (A), visual (V) and audiovisual (AV) speech stimuli that were previously recorded from the participant or from a speaker he/she had never met. Audiovisual interactions were estimated by comparing N1 and P2 auditory evoked potentials during the bimodal condition (AV) with the sum of those observed in the unimodal conditions (A + V). In line with previous EEG studies, our results revealed an amplitude decrease of P2 auditory evoked potentials in AV compared to A + V conditions. Crucially, a temporal facilitation of N1 responses was observed during the visual perception of self speech movements compared to those of another speaker. This facilitation was negatively correlated with the saliency of visual stimuli. These results provide evidence for a temporal facilitation of the integration of auditory and visual speech signals when the visual situation involves our own speech gestures.
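
    The additive-model comparison described here (AV versus A + V) can be illustrated with a short simulation: build toy unimodal and bimodal ERPs, subtract the summed unimodal responses from the bimodal one, and inspect the difference in an assumed N1 window. Everything below is simulated, not the study's data or pipeline.

    ```python
    # A toy version of the additive-model test: compare the bimodal ERP
    # (AV) against the sum of the unimodal ERPs (A + V); a nonzero
    # difference in the N1/P2 window is read as audiovisual interaction.
    # Component shapes, latencies and the N1 window are all assumptions.
    import numpy as np

    t = np.arange(-100, 400)                    # epoch in ms around onset
    rng = np.random.default_rng(2)

    def erp(latency_ms, amp_uv, n_trials=100):
        """Toy ERP: one Gaussian component plus trial noise, then average."""
        component = amp_uv * np.exp(-0.5 * ((t - latency_ms) / 30.0) ** 2)
        trials = component + rng.standard_normal((n_trials, t.size))
        return trials.mean(axis=0)

    erp_a = erp(100, -4.0)                      # auditory N1-like deflection
    erp_v = erp(150, -2.0)                      # visual component
    erp_av = erp(95, -5.0)                      # bimodal response
    interaction = erp_av - (erp_a + erp_v)      # AV - (A + V)

    n1 = (t >= 70) & (t <= 130)                 # assumed N1 window
    print("mean AV-(A+V) in N1 window (uV):", interaction[n1].mean().round(2))
    ```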

  17. Optimality in mono- and multisensory map formation.

    PubMed

    Bürck, Moritz; Friedel, Paul; Sichert, Andreas B; Vossen, Christine; van Hemmen, J Leo

    2010-07-01

    In the struggle for survival in a complex and dynamic environment, nature has developed a multitude of sophisticated sensory systems. To exploit the information these sensory systems provide, higher vertebrates reconstruct the spatio-temporal environment from each of the sensory systems at their disposal. That is, for each modality the animal computes a neuronal representation of the outside world, a monosensory neuronal map. Here we present a universal framework that makes it possible to calculate the specific layout of the neuronal network involved by means of a general mathematical principle, viz., stochastic optimality. To illustrate the use of this theoretical framework, we provide a step-by-step tutorial on how to apply our model. In so doing, we present a spatial and a temporal example of optimal stimulus reconstruction that underline the advantages of our approach. That is, given known physical signal transmission and rudimentary knowledge of the detection process, our approach allows one to estimate the possible performance and to predict the neuronal properties of biological sensory systems. Finally, information from different sensory modalities has to be integrated so as to gain a unified perception of reality for further processing, e.g., for distinct motor commands. We briefly discuss concepts of multimodal interaction and how a multimodal space can evolve through the alignment of monosensory maps.

  18. Integrating Information from Different Senses in the Auditory Cortex

    PubMed Central

    King, Andrew J.; Walker, Kerry M.M.

    2015-01-01

    Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies. PMID:22798035

  19. Creating Multisensory Environments: Practical Ideas for Teaching and Learning. David Fulton/Nasen

    ERIC Educational Resources Information Center

    Davies, Christopher

    2011-01-01

    Multi-sensory environments in the classroom provide a wealth of stimulating learning experiences for all young children whose senses are still under development. "Creating Multisensory Environments: Practical Ideas for Teaching and Learning" is a highly practical guide to low-cost, easy-to-assemble multi-sensory environments. With a…

  20. Behavioural benefits of multisensory processing in ferrets.

    PubMed

    Hammond-Kenny, Amy; Bajo, Victoria M; King, Andrew J; Nodal, Fernando R

    2017-01-01

    Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  1. Responses of prefrontal multisensory neurons to mismatching faces and vocalizations.

    PubMed

    Diehl, Maria M; Romanski, Lizabeth M

    2014-08-20

    Social communication relies on the integration of auditory and visual information, which are present in faces and vocalizations. Evidence suggests that the integration of information from multiple sources enhances perception compared with the processing of a unimodal stimulus. Our previous studies demonstrated that single neurons in the ventrolateral prefrontal cortex (VLPFC) of the rhesus monkey (Macaca mulatta) respond to and integrate conspecific vocalizations and their accompanying facial gestures. We were therefore interested in how VLPFC neurons respond differentially to matching (congruent) and mismatching (incongruent) faces and vocalizations. We recorded VLPFC neurons during the presentation of movies with congruent or incongruent species-specific facial gestures and vocalizations as well as their unimodal components. Recordings showed that while many VLPFC units are multisensory and respond to faces, vocalizations, or their combination, a subset of neurons showed a significant change in neuronal activity in response to incongruent versus congruent vocalization movies. Among these neurons, we typically observed incongruent suppression during the early stimulus period and incongruent enhancement during the late stimulus period. Incongruent-responsive VLPFC neurons were both bimodal and nonlinear multisensory, fostering their ability to respond to changes in either modality of a face-vocalization stimulus. These results demonstrate that ventral prefrontal neurons respond to changes in either modality of an audiovisual stimulus, which is important in identity processing and for the integration of multisensory communication information. Copyright © 2014 the authors 0270-6474/14/3411233-11$15.00/0.

  2. Learning Multisensory Integration and Coordinate Transformation via Density Estimation

    PubMed Central

    Sabes, Philip N.

    2013-01-01

    Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations. PMID:23637588
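
    For readers unfamiliar with the learning rule named here, the sketch below implements one contrastive-divergence (CD-1) update for a Bernoulli restricted Boltzmann machine on toy data. It is a bare-bones illustration (biases omitted), not the paper's model of multisensory populations; the sizes, learning rate, and data are assumptions.

    ```python
    # One contrastive-divergence (CD-1) update for a Bernoulli restricted
    # Boltzmann machine, the learning rule named in the abstract. Biases
    # are omitted for brevity; sizes, learning rate and the toy binary
    # "unisensory activity" data are assumptions, not the paper's network.
    import numpy as np

    rng = np.random.default_rng(3)
    n_vis, n_hid, lr = 20, 10, 0.1
    W = 0.01 * rng.standard_normal((n_vis, n_hid))   # visible-to-hidden weights

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(v0, W):
        """Return updated weights after one CD-1 step on a batch v0."""
        ph0 = sigmoid(v0 @ W)                         # P(h=1 | data)
        h0 = (rng.random(ph0.shape) < ph0) * 1.0      # sampled hidden states
        v1 = sigmoid(h0 @ W.T)                        # reconstructed visibles
        ph1 = sigmoid(v1 @ W)                         # P(h=1 | reconstruction)
        return W + lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)

    batch = (rng.random((50, n_vis)) < 0.3) * 1.0     # toy binary activity
    for _ in range(100):                              # a few training sweeps
        W = cd1_step(batch, W)
    print("weight norm after training:", np.linalg.norm(W).round(2))
    ```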

  3. Sensory dominance and multisensory integration as screening tools in aging.

    PubMed

    Murray, Micah M; Eardley, Alison F; Edginton, Trudi; Oyekan, Rebecca; Smyth, Emily; Matusz, Pawel J

    2018-06-11

    Multisensory information typically confers neural and behavioural advantages over unisensory information. We used a simple audio-visual detection task to compare healthy young (HY), healthy older (HO) and mild cognitive impairment (MCI) individuals. Neuropsychological tests assessed individuals' learning and memory impairments. First, we provide much-needed clarification regarding the presence of enhanced multisensory benefits in both healthily and abnormally aging individuals. The pattern of sensory dominance shifted with healthy and abnormal aging towards a propensity for auditory-dominant behaviour (i.e., detecting sounds faster than flashes). Notably, multisensory benefits were larger only in healthy older than in younger individuals who were also visually dominant. Second, we demonstrate that the multisensory detection task offers benefits as a time- and resource-economic MCI screening tool. Receiver operating characteristic (ROC) analysis demonstrated that MCI diagnosis could be reliably achieved by combining indices of multisensory integration with indices of sensory dominance. Our findings showcase the importance of sensory profiles in determining multisensory benefits in healthy and abnormal aging. Crucially, they open an exciting possibility for multisensory detection tasks to be used as a cost-effective screening tool. These findings clarify the relationships between multisensory and memory functions in aging, while offering new avenues for improved dementia diagnostics.
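
    The screening logic described, combining an integration index with a dominance index and evaluating discrimination via ROC analysis, can be sketched as follows. The indices, group effects, and sample below are simulated stand-ins, not the study's data, and the AUC is computed in-sample purely for illustration.

    ```python
    # A sketch of ROC-based screening: combine an MSI index with a
    # sensory-dominance index and ask how well the pair separates MCI
    # from healthy older adults. Groups, indices and effect sizes are
    # simulated stand-ins; the AUC is in-sample, for illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(4)
    n = 60
    labels = np.repeat([0, 1], n // 2)           # 0 = healthy older, 1 = MCI
    msi = np.where(labels == 1, 0.4, 0.8) + 0.2 * rng.standard_normal(n)
    dominance = np.where(labels == 1, 0.6, 0.2) + 0.2 * rng.standard_normal(n)

    X = np.column_stack([msi, dominance])        # two predictors per person
    clf = LogisticRegression().fit(X, labels)
    scores = clf.predict_proba(X)[:, 1]          # predicted P(MCI)
    print("in-sample AUC:", roc_auc_score(labels, scores).round(2))
    ```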

  4. The Thalamocortical Projection Systems in Primate: An Anatomical Support for Multisensory and Sensorimotor Interplay

    PubMed Central

    Cappe, Céline; Morel, Anne; Barone, Pascal

    2009-01-01

    Multisensory and sensorimotor integrations are usually considered to occur in superior colliculus and cerebral cortex, but few studies proposed the thalamus as being involved in these integrative processes. We investigated whether the organization of the thalamocortical (TC) systems for different modalities partly overlap, representing an anatomical support for multisensory and sensorimotor interplay in thalamus. In 2 macaque monkeys, 6 neuroanatomical tracers were injected in the rostral and caudal auditory cortex, posterior parietal cortex (PE/PEa in area 5), and dorsal and ventral premotor cortical areas (PMd, PMv), demonstrating the existence of overlapping territories of thalamic projections to areas of different modalities (sensory and motor). TC projections, distinct from the ones arising from specific unimodal sensory nuclei, were observed from motor thalamus to PE/PEa or auditory cortex and from sensory thalamus to PMd/PMv. The central lateral nucleus and the mediodorsal nucleus project to all injected areas, but the most significant overlap across modalities was found in the medial pulvinar nucleus. The present results demonstrate the presence of thalamic territories integrating different sensory modalities with motor attributes. Based on the divergent/convergent pattern of TC and corticothalamic projections, 4 distinct mechanisms of multisensory and sensorimotor interplay are proposed. PMID:19150924

  5. Data and techniques for studying the urban heat island effect in Johannesburg

    NASA Astrophysics Data System (ADS)

    Hardy, C. H.; Nel, A. L.

    2015-04-01

    The city of Johannesburg contains over 10 million trees and is often referred to as an urban forest. The intra-urban spatial variability of vegetation levels across Johannesburg's residential regions influences the urban heat island effect within the city. Residential areas with high levels of vegetation benefit from cooling due to evapo-transpirative processes and thus exhibit weaker heat island effects, while their impoverished counterparts are not so fortunate. The urban heat island effect describes a phenomenon whereby some urban areas exhibit temperatures warmer than those of the surrounding areas. The factors influencing the urban heat island effect include the high density of people and buildings and the low levels of vegetative cover within populated urban areas. This paper describes the remote sensing data sets and the processing techniques employed to study the heat island effect within Johannesburg. In particular, we consider the use of multi-sensor, multi-temporal remote sensing data to build a predictive model based on an analysis of the influencing factors.
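
    One standard processing step in studies of this kind is deriving a vegetation index from multispectral imagery. The sketch below computes NDVI per pixel from red and near-infrared reflectance, the sort of covariate a heat-island model would relate to land-surface temperature; the arrays are toy data, not Johannesburg imagery.

    ```python
    # Computing NDVI per pixel from red and near-infrared reflectance.
    # High NDVI marks dense vegetation, the kind of covariate a
    # heat-island model relates to land-surface temperature. The arrays
    # are toy data, not Johannesburg imagery.
    import numpy as np

    rng = np.random.default_rng(6)
    red = rng.uniform(0.05, 0.30, (4, 4))   # red-band reflectance (toy)
    nir = rng.uniform(0.20, 0.60, (4, 4))   # near-infrared reflectance (toy)

    ndvi = (nir - red) / (nir + red)        # NDVI, bounded in [-1, 1]
    print("mean NDVI:", ndvi.mean().round(2))
    ```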

  6. Visual Enhancement of Illusory Phenomenal Accents in Non-Isochronous Auditory Rhythms

    PubMed Central

    2016-01-01

    Musical rhythms encompass temporal patterns that often yield regular metrical accents (e.g., a beat). There have been mixed results regarding perception as a function of metrical saliency, namely, whether sensitivity to a deviant was greater in metrically stronger or weaker positions. Moreover, effects of metrical position have not been examined in non-isochronous rhythms, nor with respect to multisensory influences. This study was concerned with two main issues: (1) In non-isochronous auditory rhythms with clear metrical accents, how would sensitivity to a deviant be modulated by metrical position? (2) Would the effects be enhanced by multisensory information? Participants listened to strongly metrical rhythms with or without watching a point-light figure dance to the rhythm in the same meter, and detected a slight loudness increment. Both conditions were presented with or without an auditory interference that served to impair auditory metrical perception. Sensitivity to a deviant was found to be greater in weak-beat than in strong-beat positions, consistent with the predictive coding hypothesis and the idea of metrically induced illusory phenomenal accents. The visual rhythm of dance hindered auditory detection, but more so when the latter was itself less impaired. This pattern suggests that the visual and auditory rhythms were perceptually integrated to reinforce metrical accentuation, yielding more illusory phenomenal accents and thus lower sensitivity to deviants, in a manner consistent with the principle of inverse effectiveness. Results are discussed within the predictive framework for multisensory rhythms involving observed movement and possible mediation by the motor system. PMID:27880850

  7. Ownership of an artificial limb induced by electrical brain stimulation

    PubMed Central

    Collins, Kelly L.; Cronin, Jeneva; Olson, Jared D.; Ehrsson, H. Henrik; Ojemann, Jeffrey G.

    2017-01-01

    Replacing the function of a missing or paralyzed limb with a prosthetic device that acts and feels like one’s own limb is a major goal in applied neuroscience. Recent studies in nonhuman primates have shown that motor control and sensory feedback can be achieved by connecting sensors in a robotic arm to electrodes implanted in the brain. However, it remains unknown whether electrical brain stimulation can be used to create a sense of ownership of an artificial limb. In this study on two human subjects, we show that ownership of an artificial hand can be induced via the electrical stimulation of the hand section of the somatosensory (SI) cortex in synchrony with touches applied to a rubber hand. Importantly, the illusion was not elicited when the electrical stimulation was delivered asynchronously or to a portion of the SI cortex representing a body part other than the hand, suggesting that multisensory integration according to basic spatial and temporal congruence rules is the underlying mechanism of the illusion. These findings show that the brain is capable of integrating “natural” visual input and direct cortical-somatosensory stimulation to create the multisensory perception that an artificial limb belongs to one’s own body. Thus, they serve as a proof of concept that electrical brain stimulation can be used to “bypass” the peripheral nervous system to induce multisensory illusions and ownership of artificial body parts, which has important implications for patients who lack peripheral sensory input due to spinal cord or nerve lesions. PMID:27994147

  8. Deficits in voice and multisensory processing in patients with Prader-Willi syndrome.

    PubMed

    Salles, Juliette; Strelnikov, Kuzma; Carine, Mantoulan; Denise, Thuilleaux; Laurier, Virginie; Molinas, Catherine; Tauber, Maïthé; Barone, Pascal

    2016-05-01

    Prader-Willi syndrome (PWS) is a rare neurodevelopmental genetic disorder characterized by variable expression of endocrine, cognitive and behavioral problems, among which are a true obsession with food and a deficit of satiety that leads to hyperphagia and severe obesity. Neuropsychological studies have reported that patients with PWS display altered social interactions, with a specific weakness in interpreting social information and in responding to it, a symptom close to that observed in autism spectrum disorders (ASD). Based on the hypothesis that atypical multisensory integration, such as face and voice interactions, contributes to social impairment in PWS, we investigated the abilities of patients with PWS to process communication signals, including the human voice. Patients with PWS recruited from the national reference center for PWS performed a simple detection task with stimuli presented in a unimodal or bimodal condition, as well as a voice discrimination task. Compared to typically developing (TD) control individuals, patients with PWS presented a specific deficit in discriminating human voices from environmental sounds. Furthermore, patients with PWS showed much lower multisensory benefits, with an absence of violation of the race model, indicating that multisensory information does not converge and interact prior to the initiation of the behavioral response. All the deficits observed were stronger in the subgroup of patients with uniparental disomy, a population known to be more prone to ASD. Altogether, our study suggests that the deficits in social behavior observed in PWS derive at least partly from an impairment in deciphering the social information carried by voice signals, face signals, and the combination of both. In addition, our work is in agreement with brain imaging studies revealing an alteration in PWS of the "social brain network", including the STS region involved in processing human voices. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Rapid temporal recalibration is unique to audiovisual stimuli.

    PubMed

    Van der Burg, Erik; Orchard-Mills, Emily; Alais, David

    2015-01-01

    Following prolonged exposure to asynchronous multisensory signals, the brain adapts to reduce the perceived asynchrony. Here, in three separate experiments, participants performed a synchrony judgment task on audiovisual, audiotactile or visuotactile stimuli and we used inter-trial analyses to examine whether temporal recalibration occurs rapidly on the basis of a single asynchronous trial. Even though all combinations used the same subjects, task and design, temporal recalibration occurred for audiovisual stimuli (i.e., the point of subjective simultaneity depended on the preceding trial's modality order), but none occurred when the same auditory or visual event was combined with a tactile event. Contrary to findings from prolonged adaptation studies showing recalibration for all three combinations, we show that rapid, inter-trial recalibration is unique to audiovisual stimuli. We conclude that recalibration occurs at two different timescales for audiovisual stimuli (fast and slow), but only on a slow timescale for audiotactile and visuotactile stimuli.

  10. Audiovisual Modulation in Mouse Primary Visual Cortex Depends on Cross-Modal Stimulus Configuration and Congruency.

    PubMed

    Meijer, Guido T; Montijn, Jorrit S; Pennartz, Cyriel M A; Lansink, Carien S

    2017-09-06

    The sensory neocortex is a highly connected associative network that integrates information from multiple senses, even at the level of the primary sensory areas. Although a growing body of empirical evidence supports this view, the neural mechanisms of cross-modal integration in primary sensory areas, such as the primary visual cortex (V1), are still largely unknown. Using two-photon calcium imaging in awake mice, we show that the encoding of audiovisual stimuli in V1 neuronal populations is highly dependent on the features of the stimulus constituents. When the visual and auditory stimulus features were modulated at the same rate (i.e., temporally congruent), neurons responded with either an enhancement or suppression compared with unisensory visual stimuli, and their prevalence was balanced. Temporally incongruent tones or white-noise bursts included in audiovisual stimulus pairs resulted in predominant response suppression across the neuronal population. Visual contrast did not influence multisensory processing when the audiovisual stimulus pairs were congruent; however, when white-noise bursts were used, neurons generally showed response suppression when the visual stimulus contrast was high whereas this effect was absent when the visual contrast was low. Furthermore, a small fraction of V1 neurons, predominantly those located near the lateral border of V1, responded to sound alone. These results show that V1 is involved in the encoding of cross-modal interactions in a more versatile way than previously thought. SIGNIFICANCE STATEMENT The neural substrate of cross-modal integration is not limited to specialized cortical association areas but extends to primary sensory areas. Using two-photon imaging of large groups of neurons, we show that multisensory modulation of V1 populations is strongly determined by the individual and shared features of cross-modal stimulus constituents, such as contrast, frequency, congruency, and temporal structure. Congruent audiovisual stimulation resulted in a balanced pattern of response enhancement and suppression compared with unisensory visual stimuli, whereas incongruent or dissimilar stimuli at full contrast gave rise to a population dominated by response-suppressing neurons. Our results indicate that V1 dynamically integrates nonvisual sources of information while still attributing most of its resources to coding visual information. Copyright © 2017 the authors 0270-6474/17/378783-14$15.00/0.

  11. Is Body Dysmorphic Disorder Associated with Abnormal Bodily Self-Awareness? A Study Using the Rubber Hand Illusion

    PubMed Central

    Kaplan, Ryan A.; Enticott, Peter G.; Hohwy, Jakob; Castle, David J.; Rossell, Susan L.

    2014-01-01

    Evidence from past research suggests that behaviours and characteristics related to body dissatisfaction may be associated with greater instability of perceptual body image, possibly due to problems in the integration of body-related multisensory information. We investigated whether people with body dysmorphic disorder (BDD), a condition characterised by body image disturbances, demonstrated enhanced susceptibility to the rubber hand illusion (RHI), which arises as a result of multisensory integration processes when a rubber hand and the participant's hidden real hand are stimulated in synchrony. Overall, differences in RHI experience between the BDD group and healthy and schizophrenia control groups (n = 17 in each) were not significant. RHI strength, however, was positively associated with body dissatisfaction and related tendencies. For the healthy control group, proprioceptive drift towards the rubber hand was observed following synchronous but not asynchronous stimulation, a typical pattern when inducing the RHI. Similar drifts in proprioceptive awareness occurred for the BDD group irrespective of whether stimulation was synchronous or not. These results are discussed in terms of possible abnormalities in visual processing and multisensory integration among people with BDD. PMID:24925079

  12. Wireless Wearable Multisensory Suite and Real-Time Prediction of Obstructive Sleep Apnea Episodes.

    PubMed

    Le, Trung Q; Cheng, Changqing; Sangasoongsong, Akkarapol; Wongdhamma, Woranat; Bukkapatnam, Satish T S

    2013-01-01

    Obstructive sleep apnea (OSA) is a common sleep disorder found in 24% of adult men and 9% of adult women. Although continuous positive airway pressure (CPAP) has emerged as a standard therapy for OSA, a majority of patients do not tolerate this treatment, largely because of the uncomfortable nasal air delivery during sleep. Recent advances in wireless communication and advanced ("big data") predictive analytics technologies offer radically new point-of-care treatment approaches for OSA episodes with unprecedented comfort and affordability. We introduce a Dirichlet process-based mixture Gaussian process (DPMG) model to predict the onset of sleep apnea episodes based on analyzing complex cardiorespiratory signals gathered from a custom-designed wireless wearable multisensory suite. Extensive testing with signals from the multisensory suite as well as PhysioNet's OSA database suggests that the accuracy of offline OSA classification is 88%, while the accuracy of predicting an OSA episode 1 min ahead is 83% and 3 min ahead is 77%. Such accurate prediction of an impending OSA episode can be used to adaptively adjust CPAP airflow (toward improving the patient's adherence) or torso posture (e.g., minor chin adjustments to maintain steady airflow).
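
    The paper's DPMG model itself is too involved for a short sketch, so the example below substitutes a single Gaussian process (via scikit-learn) forecasting a simulated cardiorespiratory feature a short step ahead, to illustrate the prediction setup only; the kernel, data, and horizon are assumptions, not the authors' model.

    ```python
    # A simplified stand-in for the DPMG predictor: one Gaussian process
    # fit to a simulated cardiorespiratory feature, forecasting the next
    # two samples (0.5 and 1.0 min ahead). Kernel, data and horizon are
    # assumptions; this is not the paper's model.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(5)
    t = np.arange(0.0, 30.0, 0.5)                    # time in minutes
    feature = np.sin(0.6 * t) + 0.1 * rng.standard_normal(t.size)

    gp = GaussianProcessRegressor(kernel=RBF(5.0) + WhiteKernel(0.01))
    gp.fit(t[:-2, None], feature[:-2])               # train on the history
    pred, sd = gp.predict(t[-2:, None], return_std=True)
    print("forecast (0.5 and 1.0 min ahead):", pred.round(2), "+/-", sd.round(2))
    ```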

  13. The transformation of multi-sensory experiences into memories during sleep.

    PubMed

    Rothschild, Gideon

    2018-03-26

    Our everyday lives present us with a continuous stream of multi-modal sensory inputs. While most of this information is soon forgotten, sensory information associated with salient experiences can leave long-lasting memories in our minds. Extensive human and animal research has established that the hippocampus is critically involved in this process of memory formation and consolidation. However, the underlying mechanistic details are still only partially understood. Specifically, the hippocampus has often been suggested to encode information during experience, temporarily store it, and gradually transfer this information to the cortex during sleep. In rodents, ample evidence has supported this notion in the context of spatial memory, yet whether this process adequately describes the consolidation of multi-sensory experiences into memories is unclear. Here, focusing on rodent studies, I examine how multi-sensory experiences are consolidated into long term memories by hippocampal and cortical circuits during sleep. I propose that in contrast to the classical model of memory consolidation, the cortex is a "fast learner" that has a rapid and instructive role in shaping hippocampal-dependent memory consolidation. The proposed model may offer mechanistic insight into memory biasing using sensory cues during sleep. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. Listening to Another Sense: Somatosensory Integration in the Auditory System

    PubMed Central

    Wu, Calvin; Stefanescu, Roxana A.; Martel, David T.

    2014-01-01

    Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems, and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body, and the auditory cortex. In this review, we explore the process of multisensory integration from 1) anatomical (inputs and connections), 2) physiological (cellular responses), 3) functional, and 4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing, and offers a multisensory perspective regarding the understanding of sensory disorders. PMID:25526698

  15. A standing posture is associated with increased susceptibility to the sound-induced flash illusion in fall-prone older adults.

    PubMed

    Stapleton, John; Setti, Annalisa; Doheny, Emer P; Kenny, Rose Anne; Newell, Fiona N

    2014-02-01

    Recent research has provided evidence suggesting a link between inefficient processing of multisensory information and incidence of falling in older adults. Specifically, Setti et al. (Exp Brain Res 209:375-384, 2011) reported that older adults with a history of falling were more susceptible than their healthy, age-matched counterparts to the sound-induced flash illusion. Here, we investigated whether balance control in fall-prone older adults was directly associated with multisensory integration by testing susceptibility to the illusion under two postural conditions: sitting and standing. Whilst standing, fall-prone older adults showed greater body sway than age-matched healthy older adults, and their sway increased during the audio-visual illusory but not the audio-visual congruent conditions. We also found an increase in susceptibility to the sound-induced flash illusion during standing relative to sitting for fall-prone older adults only. Importantly, no performance differences were found across groups in either the unisensory or non-illusory multisensory conditions across the two postures. These results suggest an important link between multisensory integration and balance control in older adults and have important implications for understanding why some older adults are prone to falling.

  16. Proprio-tactile integration for kinesthetic perception: an fMRI study.

    PubMed

    Kavounoudias, A; Roll, J P; Anton, J L; Nazarian, B; Roth, M; Roll, R

    2008-01-31

    This study aims to identify the cerebral networks involved in the integrative processing of somesthetic inputs for kinesthetic purposes. In particular, we investigated how muscle proprioceptive and tactile messages can result in a unified percept of one's own body movements. We stimulated these two sensory channels either separately or conjointly in order to evoke kinesthetic illusions of a clockwise rotation of 10 subjects' right hand in an fMRI environment. Results first show that, whether induced by a tactile or a proprioceptive stimulation, the kinesthetic illusion was accompanied by the activation of a very similar cerebral network including cortical and subcortical sensorimotor areas, which are also classically found in passive or imagined movement tasks. In addition, the strongest kinesthetic illusions occurred under the congruent proprio-tactile co-stimulation condition. They were specifically associated with brain area activations distinct from those evidenced under the unimodal stimulations: the inferior parietal lobule, the superior temporal sulcus, the insula-claustrum region, and the cerebellum. These findings support the hypothesis that heteromodal areas may subserve multisensory integrative mechanisms at cortical and subcortical levels. They also suggest that this integrative processing might consist of detecting the spatial coherence between the two kinesthetic messages, involving the inferior parietal lobule, and detecting their temporal coincidence via a subcortical relay, the insula, a structure usually linked to the relative synchrony of different stimuli. Finally, the involvement of the superior temporal sulcus in the feeling of biological movement and that of the cerebellum in movement timing control are also discussed.

  17. The functional and structural asymmetries of the superior temporal sulcus.

    PubMed

    Specht, Karsten; Wigglesworth, Philip

    2018-02-01

    The superior temporal sulcus (STS) is an anatomical structure of increasing interest to researchers. This structure appears to receive multisensory input and is involved in several perceptual and cognitive core functions, such as speech perception, audiovisual integration, (biological) motion processing and theory of mind capacities. In addition, the superior temporal sulcus is not only one of the longest sulci of the brain, but it also shows marked functional and structural asymmetries, some of which have only been found in humans. To explore the functional-structural relationships of these asymmetries in more detail, this study combines functional and structural magnetic resonance imaging. Using a speech perception task, an audiovisual integration task, and a theory of mind task, this study again demonstrated an involvement of the STS in these processes, with an expected strong leftward asymmetry for the speech perception task. Furthermore, this study confirmed the earlier described, human-specific asymmetries, namely that the left STS is longer than the right STS and that the right STS is deeper than the left STS. However, this study did not find any relationship between these structural asymmetries and the detected brain activations or their functional asymmetries. This, on the other hand, lends further support to the notion that the structural asymmetry of the STS is not directly related to the functional asymmetry of speech perception and the language system as a whole, but that it may have other causes and functions. © 2018 The Authors. Scandinavian Journal of Psychology published by Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  18. A comparison of multisensory and traditional interventions on inpatient psychiatry and geriatric neuropsychiatry units.

    PubMed

    Knight, Margaret; Adkison, Lesley; Kovach, Joan Stack

    2010-01-01

    Sensory rooms and the use of multisensory interventions are becoming popular in inpatient psychiatry. The empirical data supporting their use are limited, and there is only anecdotal evidence indicating effectiveness in psychiatric populations. The specific aims of this observational pilot study were to determine whether multisensory-based therapies were effective in managing psychiatric symptoms and to evaluate how these interventions compared to traditional ones used in the milieu. The study found that multisensory interventions were as effective as traditional ones in managing symptoms, and participants' Brief Psychiatric Rating Scale scores significantly improved following both kinds of intervention. Medication administration did not affect symptom reduction. This article explores how multisensory interventions offer choice in symptom management. Education regarding multisensory strategies should become integral to inpatient and outpatient group programs, since additional symptom management strategies can only be an asset.

  19. Binocular disparity tuning and visual-vestibular congruency of multisensory neurons in macaque parietal cortex

    PubMed Central

    Yang, Yun; Liu, Sheng; Chowdhury, Syed A.; DeAngelis, Gregory C.; Angelaki, Dora E.

    2012-01-01

    Many neurons in the dorsal medial superior temporal (MSTd) and ventral intraparietal (VIP) areas of the macaque brain are multisensory, responding to both optic flow and vestibular cues to self-motion. The heading tuning of visual and vestibular responses can be either congruent or opposite, but only congruent cells have been implicated in cue integration for heading perception. Because of the geometric properties of motion parallax, however, both congruent and opposite cells could be involved in coding self-motion when observers fixate a world-fixed target during translation, if congruent cells prefer near disparities and opposite cells prefer far disparities. We characterized the binocular disparity selectivity and heading tuning of MSTd and VIP cells using random-dot stimuli. Most (70%) MSTd neurons were disparity-selective with monotonic tuning, and there was no consistent relationship between depth preference and congruency of visual and vestibular heading tuning. One-third of disparity-selective MSTd cells reversed their depth preference for opposite directions of motion (direction-dependent disparity tuning, DDD), but most of these cells were unisensory with no tuning for vestibular stimuli. Inconsistent with previous reports, the direction preferences of most DDD neurons do not reverse with disparity. By comparison to MSTd, VIP contains fewer disparity-selective neurons (41%) and very few DDD cells. On average, VIP neurons also preferred higher speeds and nearer disparities than MSTd cells. Our findings are inconsistent with the hypothesis that visual/vestibular congruency is linked to depth preference, and also suggest that DDD cells are not involved in multisensory integration for heading perception. PMID:22159105

  20. Evaluating the operations underlying multisensory integration in the cat superior colliculus.

    PubMed

    Stanford, Terrence R; Quessy, Stephan; Stein, Barry E

    2005-07-13

    It is well established that superior colliculus (SC) multisensory neurons integrate cues from different senses; however, the mechanisms responsible for producing multisensory responses are poorly understood. Previous studies have shown that spatially congruent cues from different modalities (e.g., auditory and visual) yield enhanced responses and that the greatest relative enhancements occur for combinations of the least effective modality-specific stimuli. Although these phenomena are well documented, little is known about the mechanisms that underlie them, because no study has systematically examined the operation that multisensory neurons perform on their modality-specific inputs. The goal of this study was to evaluate the computations that multisensory neurons perform in combining the influences of stimuli from two modalities. The extracellular activities of single neurons in the SC of the cat were recorded in response to visual, auditory, and bimodal visual-auditory stimulation. Each neuron was tested across a range of stimulus intensities, and multisensory responses were evaluated against the null hypothesis of simple summation of unisensory influences. We found that the multisensory response could be superadditive, additive, or subadditive, but that the computation was strongly dictated by the efficacies of the modality-specific stimulus components. Superadditivity was most common within a restricted range of near-threshold stimulus efficacies, whereas for the majority of stimuli, response magnitudes were consistent with the linear summation of modality-specific influences. In addition to providing a constraint for developing models of multisensory integration, the relationship between response mode and stimulus efficacy emphasizes the importance of considering stimulus parameters when inducing or interpreting multisensory phenomena.
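
    The additive null model described here can be made concrete in a few lines of analysis code. The sketch below (illustrative only, with simulated spike counts) compares bimodal responses against the sum of the unisensory responses, correcting for baseline activity so spontaneous firing is not counted twice.

        # A minimal sketch with simulated Poisson spike counts; rates and the
        # significance threshold are illustrative assumptions, not the
        # study's data.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        spont = 2.0                       # assumed spontaneous count per trial
        visual = rng.poisson(5, 50)       # visual-alone trials
        auditory = rng.poisson(4, 50)     # auditory-alone trials
        bimodal = rng.poisson(12, 50)     # combined visual-auditory trials

        # Additive prediction: sum of unisensory means minus one baseline.
        additive_pred = visual.mean() + auditory.mean() - spont

        t_stat, p = stats.ttest_1samp(bimodal, additive_pred)
        if p >= 0.05:
            mode = "additive"
        elif bimodal.mean() > additive_pred:
            mode = "superadditive"
        else:
            mode = "subadditive"
        print(f"bimodal {bimodal.mean():.1f} vs predicted "
              f"{additive_pred:.1f}: {mode}")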

  1. The multisensory approach to birth and aromatherapy.

    PubMed

    Gutteridge, Kathryn

    2014-05-01

    The birth environment continues to be a subject of midwifery discourse within theory and practice. This article discusses the birth environment from the perspective of understanding aromas and aromatherapy for the benefit of women and midwives. The dynamic between the olfactory system and stimulation of normal birth processes proves to be fascinating. By examining other health models of care we can incorporate simple but powerful methods that can shape clinical outcomes. There is still more that midwives can do by using aromatherapy in the context of a multisensory approach to make birth environments synchronise with women's potential to birth in a positive way.

  2. Neural networks supporting audiovisual integration for speech: A large-scale lesion study.

    PubMed

    Hickok, Gregory; Rogalsky, Corianne; Matchin, William; Basilakos, Alexandra; Cai, Julia; Pillay, Sara; Ferrill, Michelle; Mickelsen, Soren; Anderson, Steven W; Love, Tracy; Binder, Jeffrey; Fridriksson, Julius

    2018-06-01

    Auditory and visual speech information are often strongly integrated, resulting in perceptual enhancements for audiovisual (AV) speech over audio alone and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched (the McGurk-MacDonald effect). Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N = 100) lesion-based study of AV speech integration. Two primary findings emerged. First, behavioral performance and lesion maps for AV enhancement and illusory fusion measures indicate that classic metrics of AV speech integration are not necessarily measuring the same process. Second, lesions involving superior temporal auditory, lateral occipital visual, and multisensory zones in the STS are the most disruptive to AV speech integration. Further, when AV speech integration fails, the nature of the failure (auditory vs. visual capture) can be predicted from the location of the lesions. These findings show that AV speech processing is supported by unimodal auditory and visual cortices as well as multimodal regions such as the STS at their boundary. Motor-related frontal regions do not appear to play a role in AV speech integration. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Visual form predictions facilitate auditory processing at the N1.

    PubMed

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2017-02-20

    Auditory-visual (AV) events often involve a leading visual cue (e.g., auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation reflects prediction specifically, rather than multisensory integration, and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what the auditory stimulus will be and when it will occur. Copyright © 2016. Published by Elsevier Ltd.

  4. Common variation in the autism risk gene CNTNAP2, brain structural connectivity and multisensory speech integration.

    PubMed

    Ross, Lars A; Del Bene, Victor A; Molholm, Sophie; Jae Woo, Young; Andrade, Gizely N; Abrahams, Brett S; Foxe, John J

    2017-11-01

    Three lines of evidence motivated this study. 1) CNTNAP2 variation is associated with autism risk and speech-language development. 2) CNTNAP2 variations are associated with differences in white matter (WM) tracts comprising the speech-language circuitry. 3) Children with autism show impairment in multisensory speech perception. Here, we asked whether an autism risk-associated CNTNAP2 single nucleotide polymorphism in neurotypical adults was associated with multisensory speech perception performance, and whether such a genotype-phenotype association was mediated through white matter tract integrity in speech-language circuitry. Risk genotype at rs7794745 was associated with decreased benefit from visual speech and lower fractional anisotropy (FA) in several WM tracts (right precentral gyrus, left anterior corona radiata, right retrolenticular internal capsule). These structural connectivity differences were found to mediate the effect of genotype on audiovisual speech perception, shedding light on possible pathogenic pathways in autism and biological sources of inter-individual variation in audiovisual speech processing in neurotypicals. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Neural correlates of perceptual narrowing in cross-species face-voice matching.

    PubMed

    Grossmann, Tobias; Missana, Manuela; Friederici, Angela D; Ghazanfar, Asif A

    2012-11-01

    Integrating the multisensory features of talking faces is critical to learning and extracting coherent meaning from social signals. While we know much about the development of these capacities at the behavioral level, we know very little about the underlying neural processes. One prominent behavioral milestone of these capacities is the perceptual narrowing of face-voice matching, whereby young infants match faces and voices across species, but older infants do not. In the present study, we provide neurophysiological evidence for developmental decline in cross-species face-voice matching. We measured event-related brain potentials (ERPs) while 4- and 8-month-old infants watched and listened to congruent and incongruent audio-visual presentations of monkey vocalizations and humans mimicking monkey vocalizations. The ERP results indicated that younger infants distinguished between the congruent and the incongruent faces and voices regardless of species, whereas in older infants, the sensitivity to multisensory congruency was limited to the human face and voice. Furthermore, with development, visual and frontal brain processes and their functional connectivity became more sensitive to the congruence of human faces and voices relative to monkey faces and voices. Our data show the neural correlates of perceptual narrowing in face-voice matching and support the notion that postnatal experience with species identity is associated with neural changes in multisensory processing (Lewkowicz & Ghazanfar, 2009). © 2012 Blackwell Publishing Ltd.

  6. Multisensory architectures for action-oriented perception

    NASA Astrophysics Data System (ADS)

    Alba, L.; Arena, P.; De Fiore, S.; Listán, J.; Patané, L.; Salem, A.; Scordino, G.; Webb, B.

    2007-05-01

    In order to solve the navigation problem of a mobile robot in an unstructured environment, a versatile sensory system and efficient locomotion control algorithms are necessary. In this paper an innovative sensory system for action-oriented perception applied to a legged robot is presented. An important problem we address is how to utilize a large variety and number of sensors while maintaining real-time operation. Our solution is to use sensory systems that incorporate analog and parallel processing, inspired by biological systems, to reduce the required data exchange with the motor control layer. In particular, for the visual system we use the Eye-RIS v1.1 board made by Anafocus, which is based on a fully parallel mixed-signal array sensor-processor chip. The hearing sensor is inspired by the cricket hearing system and allows efficient localization of a specific sound source with a very simple analog circuit. Our robot utilizes additional sensors for touch, posture, load, distance, and heading, and thus requires customized and parallel processing for concurrent acquisition. Therefore, Field Programmable Gate Array (FPGA)-based hardware was used to manage the multi-sensory acquisition and processing. This choice was made because FPGAs permit the implementation of customized digital logic blocks that can operate in parallel, allowing the sensors to be driven simultaneously. With this approach, the proposed multi-sensory architecture can achieve real-time capabilities.

  7. Development of Multisensory Reweighting Is Impaired for Quiet Stance Control in Children with Developmental Coordination Disorder (DCD)

    PubMed Central

    Bair, Woei-Nan; Kiemel, Tim; Jeka, John J.; Clark, Jane E.

    2012-01-01

    Background: Developmental Coordination Disorder (DCD) is a leading movement disorder in children that commonly involves poor postural control. A multisensory integration deficit, especially the inability to adaptively reweight to changing sensory conditions, has been proposed as a possible mechanism but with insufficient characterization. Empirical quantification of reweighting significantly advances our understanding of its developmental onset and improves the characterization of how it differs in children with DCD compared to their typically developing (TD) peers. Methodology/Principal Findings: Twenty children with DCD (6.6 to 11.8 years) were tested with a protocol in which the visual scene and a touch bar simultaneously oscillated medio-laterally at different frequencies and various amplitudes. Their data were compared to data on TD children (4.2 to 10.8 years) from a previous study. Gains and phases were calculated for medio-lateral responses of the head and center of mass to both sensory stimuli. Gains and phases were simultaneously fitted by linear functions of age for each amplitude condition, segment, modality and group. Fitted gains and phases at two comparison ages (6.6 and 10.8 years) were tested for reweighting within each group and for group differences. Children with DCD reweight touch and vision at a later age (10.8 years) than their TD peers (4.2 years). Children with DCD demonstrate weak visual reweighting, no advanced multisensory fusion and phase lags larger than those of TD children in response to both touch and vision. Conclusions/Significance: Two developmental perspectives, postural body scheme and dorsal stream development, are provided to explain the weak vision reweighting. The lack of multisensory fusion supports the notion that optimal multisensory integration is a slow developmental process and is vulnerable in children with DCD. PMID:22815872
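
    The gain and phase measures used in such protocols reduce to reading the Fourier coefficient of the response at each drive frequency. A compact sketch of that computation follows; the trial length, sampling rate, and stimulus parameters are made-up values for illustration.

        # A minimal sketch, assuming a 60-s trial, 50-Hz sampling, and a
        # sinusoidal visual drive at 0.2 Hz; all values are illustrative.
        import numpy as np

        fs, dur, f_drive = 50.0, 60.0, 0.2
        t = np.arange(0.0, dur, 1.0 / fs)
        stimulus = 0.01 * np.sin(2 * np.pi * f_drive * t)   # scene motion (m)
        rng = np.random.default_rng(2)
        sway = (0.008 * np.sin(2 * np.pi * f_drive * t - 0.6)
                + 0.002 * rng.normal(size=t.size))          # head sway (m)

        k = int(round(f_drive * dur))        # FFT bin of the drive frequency
        S = np.fft.rfft(stimulus)[k]
        R = np.fft.rfft(sway)[k]

        gain = np.abs(R) / np.abs(S)         # response amp. / stimulus amp.
        phase = np.angle(R / S)              # negative = response lags drive
        print(f"gain {gain:.2f}, phase {np.degrees(phase):.1f} deg")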

  8. Multisensory Integration in Non-Human Primates during a Sensory-Motor Task

    PubMed Central

    Lanz, Florian; Moret, Véronique; Rouiller, Eric Michel; Loquet, Gérard

    2013-01-01

    Daily, our central nervous system receives inputs via several sensory modalities, processes them, and integrates information in order to produce a suitable behavior. Remarkably, such multisensory integration binds all information into a unified percept. One approach to investigating this property is to show that perception is better and faster when multimodal stimuli are used as compared to unimodal stimuli. This forms the first part of the present study, conducted in a non-human primate model (n = 2) engaged in a detection sensory-motor task where visual and auditory stimuli were displayed individually or simultaneously. The measured parameters were the reaction time (RT) between stimulus and onset of arm movement, success and error percentages, and the evolution of these parameters with training. As expected, RTs were shorter when the subjects were exposed to combined stimuli. The gains for both subjects were around 20 and 40 ms compared with the auditory and visual stimulus alone, respectively. Moreover, the number of correct responses increased in response to bimodal stimuli. We interpreted this multisensory advantage in terms of a redundant signal effect, which decreases perceptual ambiguity, increases speed of stimulus detection, and improves performance accuracy. The second part of the study presents single-unit recordings from the premotor cortex (PM) of the same subjects during the sensory-motor task. Response patterns to sensory/multisensory stimulation are documented and the proportions of specific types are reported. Characterization of bimodal neurons indicates a mechanism of audio-visual integration, possibly through a decrease of inhibition. Nevertheless, the neural processing leading to the faster motor response from PM, a polysensory association cortical area, remains unclear. PMID:24319421

  9. Neuronal plasticity and multisensory integration in filial imprinting.

    PubMed

    Town, Stephen Michael; McCabe, Brian John

    2011-03-10

    Many organisms sample their environment through multiple sensory systems and the integration of multisensory information enhances learning. However, the mechanisms underlying multisensory memory formation and their similarity to unisensory mechanisms remain unclear. Filial imprinting is one example in which experience is multisensory, and the mechanisms of unisensory neuronal plasticity are well established. We investigated the storage of audiovisual information through experience by comparing the activity of neurons in the intermediate and medial mesopallium (IMM) of imprinted and naïve domestic chicks (Gallus gallus domesticus) in response to an audiovisual imprinting stimulus, a novel object, and their auditory and visual components. We find that imprinting enhanced the mean response magnitude of neurons to unisensory but not multisensory stimuli. Furthermore, imprinting enhanced responses to incongruent audiovisual stimuli comprised of mismatched auditory and visual components. Our results suggest that the effects of imprinting on the unisensory and multisensory responsiveness of IMM neurons differ and that IMM neurons may function to detect unexpected deviations from the audiovisual imprinting stimulus.

  11. Development of Multisensory Integration Approach Model

    ERIC Educational Resources Information Center

    Kumar, S. Prasanna; Nathan, B. Sami

    2016-01-01

    Every teacher expects an optimal level of processing in the minds of their students. The level of processing mainly depends on memory processes, and many students have difficulty retrieving past learning. Memory difficulties are directly related to sensory integration. In these circumstances the investigator made an attempt to construct a Multisensory…

  12. Multi-Sensory Intervention Observational Research

    ERIC Educational Resources Information Center

    Thompson, Carla J.

    2011-01-01

    An observational research study based on sensory integration theory was conducted to examine the observed impact of student-selected multi-sensory experiences within a multi-sensory intervention center relative to the sustained focus levels of students with special needs. A stratified random sample of 50 students with severe developmental…

  13. Visual-somatosensory integration in aging: Does stimulus location really matter?

    PubMed Central

    Mahoney, Jeannette R.; Wang, Cuiling; Dumas, Kristina; Holtzer, Roee

    2014-01-01

    Individuals are constantly bombarded by sensory stimuli across multiple modalities that must be integrated efficiently. Multisensory integration (MSI) is said to be governed by stimulus properties including space, time, and magnitude. While there is a paucity of research detailing MSI in aging, we have demonstrated that older adults reveal the greatest reaction time (RT) benefit when presented with simultaneous visual-somatosensory (VS) stimuli. To our knowledge, the differential RT benefit of visual and somatosensory stimuli presented within and across spatial hemifields has not been investigated in aging. Eighteen older adults (Mean = 74 years; 11 female), who were determined to be non-demented and without medical or psychiatric conditions that might affect their performance, participated in this study. Participants received eight randomly presented stimulus conditions (four unisensory and four multisensory) and were instructed to make speeded foot-pedal responses as soon as they detected any stimulation, regardless of stimulus type and location of unisensory inputs. Results from a linear mixed-effects model, adjusted for speed of processing and other covariates, revealed that RTs to all multisensory pairings were significantly faster than those elicited to averaged constituent unisensory conditions (p < 0.01). Similarly, race model violation did not differ based on unisensory spatial location (p = 0.41). In summary, older adults demonstrate significant VS multisensory RT effects for stimuli both within and across spatial hemifields. PMID:24698637
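
    The race model test referenced in this abstract (Miller's inequality) compares the cumulative RT distribution for multisensory trials against the sum of the unisensory distributions; multisensory RTs faster than that bound cannot be explained by two independent racing channels. A small sketch with simulated RTs (all distributions are invented):

        # A minimal sketch with simulated reaction times; the RT distributions
        # and evaluation grid are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(3)
        rt_v = rng.normal(420, 60, 200)    # visual-alone RTs (ms)
        rt_s = rng.normal(400, 60, 200)    # somatosensory-alone RTs (ms)
        rt_vs = rng.normal(330, 50, 200)   # visual-somatosensory RTs (ms)

        def ecdf(samples, grid):
            """Empirical cumulative distribution on a common RT grid."""
            ranks = np.searchsorted(np.sort(samples), grid, side="right")
            return ranks / len(samples)

        grid = np.linspace(200, 600, 81)
        bound = np.minimum(ecdf(rt_v, grid) + ecdf(rt_s, grid), 1.0)
        violation = ecdf(rt_vs, grid) - bound

        # Positive values at fast quantiles indicate race model violation.
        print(f"max violation {violation.max():+.3f} "
              f"at {grid[violation.argmax()]:.0f} ms")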

  14. Hybrid motion sensing and experimental modal analysis using collocated smartphone camera and accelerometers

    NASA Astrophysics Data System (ADS)

    Ozer, Ekin; Feng, Dongming; Feng, Maria Q.

    2017-10-01

    State-of-the-art multisensory technologies and heterogeneous sensor networks offer a wide range of response measurement opportunities for structural health monitoring (SHM). Measuring and fusing different physical quantities in terms of structural vibrations can provide alternative acquisition methods and improve the quality of modal testing results. This study focuses on a recently introduced SHM concept, SHM with smartphones, utilizing multisensory smartphone features for a hybridized structural vibration response measurement framework. Based on vibration testing of a small-scale multistory laboratory model, displacement and acceleration responses are monitored using two different smartphone sensors, an embedded camera and accelerometer, respectively. Double-integration or differentiation among different measurement types is performed to combine multisensory measurements on a comparative basis. In addition, distributed sensor signals from collocated devices are processed for modal identification, and the performance of smartphone-based sensing platforms is tested under different configuration scenarios and heterogeneity levels. The results of these tests show a novel and successful implementation of a hybrid motion sensing platform through multiple sensor type and device integration. Despite the heterogeneity of motion data obtained from different smartphone devices and technologies, it is shown that multisensory response measurements can be blended for experimental modal analysis. Benefiting from the accessibility of smartphone technology, similar smartphone-based dynamic testing methodologies can provide innovative SHM solutions with mobile, programmable, and cost-free interfaces.
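
    The double-integration and differentiation step that links the two sensor types can be sketched briefly. The example below (a synthetic single-mode response; the sampling rate and filter cutoff are assumed values) differentiates camera displacement twice and double-integrates accelerometer output, with high-pass filtering to suppress the drift that integration accumulates from sensor bias.

        # A minimal sketch with a synthetic 1.5-Hz modal response; sampling
        # rate and filter cutoff are illustrative choices.
        import numpy as np
        from scipy import signal

        fs = 100.0
        t = np.arange(0.0, 20.0, 1.0 / fs)
        f_n = 1.5
        disp = 0.005 * np.sin(2 * np.pi * f_n * t)   # camera displacement (m)
        acc = -(2 * np.pi * f_n) ** 2 * disp         # accelerometer (m/s^2)

        # Displacement -> acceleration by double differentiation.
        acc_from_disp = np.gradient(np.gradient(disp, 1.0 / fs), 1.0 / fs)

        # Acceleration -> displacement by double integration; a high-pass
        # filter removes low-frequency integration drift.
        hp = signal.butter(4, 0.3, btype="highpass", fs=fs, output="sos")
        vel = signal.sosfilt(hp, np.cumsum(acc) / fs)
        disp_from_acc = signal.sosfilt(hp, np.cumsum(vel) / fs)

        err_acc = np.sqrt(np.mean((acc_from_disp - acc) ** 2))
        err_disp = np.sqrt(np.mean((disp_from_acc[500:] - disp[500:]) ** 2))
        print(f"acc from camera: RMS {err_acc:.3f} m/s^2; "
              f"disp from accelerometer: RMS {err_disp * 1e3:.2f} mm")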

  15. Using Multisensory Phonics to Foster Reading Skills of Adolescent Delinquents

    ERIC Educational Resources Information Center

    Warnick, Kristan; Caldarella, Paul

    2016-01-01

    This study examined the effectiveness of a multisensory phonics-based reading remediation program for adolescent delinquents classified as poor readers living at a residential treatment center. We used a pretest--posttest control group design with random assignment. The treatment group participated in a 30-hr multisensory phonics reading…

  16. Incidental Learning in a Multisensory Environment across Childhood

    ERIC Educational Resources Information Center

    Broadbent, Hannah J.; White, Hayley; Mareschal, Denis; Kirkham, Natasha Z.

    2018-01-01

    Multisensory information has been shown to modulate attention in infants and facilitate learning in adults, by enhancing the amodal properties of a stimulus. However, it remains unclear whether this translates to learning in a multisensory environment across middle childhood, and particularly in the case of incidental learning. One hundred and…

  17. Influence of Motor Therapy on Children with Multisensory Disabilities: A Preliminary Study.

    ERIC Educational Resources Information Center

    Rider, Robert A.; Candeletti, Glenn

    1982-01-01

    Effects of a program of motor therapy on the motor ability levels of eight multisensory handicapped children were examined. Participation improved performance for all subjects. The gain scores from pretest to posttest indicated that children with multisensory disabilities may benefit from such a program. (Author)

  18. A Rational Analysis of the Acquisition of Multisensory Representations

    ERIC Educational Resources Information Center

    Yildirim, Ilker; Jacobs, Robert A.

    2012-01-01

    How do people learn multisensory, or amodal, representations, and what consequences do these representations have for perceptual performance? We address this question by performing a rational analysis of the problem of learning multisensory representations. This analysis makes use of a Bayesian nonparametric model that acquires latent multisensory…

  19. Multisensory Modalities for Blending and Segmenting among Early Readers

    ERIC Educational Resources Information Center

    Lee, Lay Wah

    2016-01-01

    With the advent of touch-screen interfaces on the tablet computer, multisensory elements in reading instruction have taken on a new dimension. This computer assisted language learning research aimed to determine whether specific technology features of a tablet computer can add to the functionality of multisensory instruction in early reading…

  20. The LD Teacher's Language Arts Companion[TM]: A Multisensory Approach.

    ERIC Educational Resources Information Center

    Wadlington, Elizabeth M.; Currie, Paula S.

    This book presents a multisensory approach for teaching language arts skills to students in grades 3-10 with learning disabilities. It is intended for teachers, parents, speech-language pathologists, and other professionals who work with students with learning disabilities. An introduction discusses multisensory instruction and the benefits of…

  1. Contributions of local speech encoding and functional connectivity to audio-visual speech perception

    PubMed Central

    Giordano, Bruno L; Ince, Robin A A; Gross, Joachim; Schyns, Philippe G; Panzeri, Stefano; Kayser, Christoph

    2017-01-01

    Seeing a speaker’s face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR, strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker’s face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments. DOI: http://dx.doi.org/10.7554/eLife.24763.001 PMID:28590903

  2. The effects of aging on the working memory processes of multimodal information.

    PubMed

    Solesio-Jofre, Elena; López-Frutos, José María; Cashdollar, Nathan; Aurtenetxe, Sara; de Ramón, Ignacio; Maestú, Fernando

    2017-05-01

    Normal aging is associated with deficits in working memory processes. However, the majority of research has focused on storage or inhibitory processes using unimodal paradigms, without addressing their relationships using different sensory modalities. Hence, we pursued two objectives: first, to examine the effects of aging on storage and inhibitory processes; second, to evaluate aging effects on multisensory integration of visual and auditory stimuli. To this end, young and older participants performed a multimodal task for visual and auditory pairs of stimuli with increasing memory load at encoding and interference during retention. Our results showed an age-related increase in vulnerability to interrupting and distracting interference, reflecting inhibitory deficits related to the off-line reactivation and on-line suppression of relevant and irrelevant information, respectively. Storage capacity was impaired with increasing task demands in both age groups. Additionally, older adults showed a deficit in multisensory integration, with poorer performance for new visual compared to new auditory information.

  3. Cortical oscillations modulated by congruent and incongruent audiovisual stimuli.

    PubMed

    Herdman, A T; Fujioka, T; Chau, W; Ross, B; Pantev, C; Picton, T W

    2004-11-30

    Congruent or incongruent grapheme-phoneme stimuli are easily perceived as one or two linguistic objects. The main objective of this study was to investigate the changes in cortical oscillations that reflect the processing of congruent and incongruent audiovisual stimuli. Graphemes were Japanese Hiragana characters for four different vowels (/a/, /o/, /u/, and /i/). They were presented simultaneously with their corresponding phonemes (congruent) or non-corresponding phonemes (incongruent) to native-speaking Japanese participants. Participants' reaction times to the congruent audiovisual stimuli were significantly faster, by 57 ms, than reaction times to incongruent stimuli. We recorded the brain responses for each condition using a whole-head magnetoencephalograph (MEG). A novel approach to analysing MEG data, called synthetic aperture magnetometry (SAM), was used to identify event-related changes in cortical oscillations involved in audiovisual processing. The SAM contrast between congruent and incongruent responses revealed greater event-related desynchronization (8-16 Hz) bilaterally in the occipital lobes and greater event-related synchronization (4-8 Hz) in the left transverse temporal gyrus. Results from this study further support the concept of interactions between the auditory and visual sensory cortices in multi-sensory processing of audiovisual objects.

  4. Bayesian-based integration of multisensory naturalistic perithreshold stimuli.

    PubMed

    Regenbogen, Christina; Johansson, Emilia; Andersson, Patrik; Olsson, Mats J; Lundström, Johan N

    2016-07-29

    Most studies exploring multisensory integration have used clearly perceivable stimuli. According to the principle of inverse effectiveness, the added neural and behavioral benefit of integrating clear stimuli is reduced in comparison to stimuli with degraded and less salient unisensory information. Traditionally, speed and accuracy measures have been analyzed separately, with few studies merging these to gain an understanding of speed-accuracy trade-offs in multisensory integration. In two separate experiments, we assessed multisensory integration of naturalistic audio-visual objects consisting of individually tailored perithreshold dynamic visual and auditory stimuli, presented within a multiple-choice task, using a Bayesian Hierarchical Drift Diffusion Model that combines response time and accuracy. For both experiments, unisensory stimuli were degraded to reach a 75% identification accuracy level for all individuals and stimuli to promote multisensory binding. In Experiment 1, we subsequently presented uni- and their respective bimodal stimuli followed by a 5-alternative-forced-choice task. In Experiment 2, we controlled for low-level integration and attentional differences. Both experiments demonstrated significant superadditive multisensory integration of bimodal perithreshold dynamic information. We present evidence that the use of degraded sensory stimuli may provide a link between previous findings of inverse effectiveness at the single-neuron level and overt behavior. We further suggest that a combined measure of accuracy and reaction time may be a more valid and holistic approach to studying multisensory integration and propose the application of drift diffusion models for studying behavioral correlates as well as brain-behavior relationships of multisensory integration. Copyright © 2015 Elsevier Ltd. All rights reserved.
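
    The drift diffusion framework applied here treats each decision as noisy evidence accumulation toward a bound, so a single model jointly generates both accuracy and response time. A generic two-choice simulation (not the hierarchical Bayesian fit used in the study, which also involved a five-alternative task) makes the idea concrete; all parameter values are arbitrary.

        # A minimal sketch of a two-choice drift diffusion process; parameter
        # values are illustrative assumptions.
        import numpy as np

        def simulate_ddm(drift, bound=1.0, noise=1.0, dt=0.001, t0=0.3,
                         rng=None):
            """Accumulate noisy evidence until one of two bounds is hit."""
            rng = rng or np.random.default_rng()
            x, t = 0.0, 0.0
            while abs(x) < bound:
                x += drift * dt + noise * np.sqrt(dt) * rng.normal()
                t += dt
            return t + t0, x > 0          # (response time, correct choice?)

        rng = np.random.default_rng(4)
        # A higher drift rate for bimodal stimuli is one way superadditive
        # integration can show up in this framework.
        for label, v in [("unisensory", 0.8), ("bimodal", 1.6)]:
            sims = [simulate_ddm(v, rng=rng) for _ in range(500)]
            rts = np.array([s[0] for s in sims])
            acc = np.mean([s[1] for s in sims])
            print(f"{label}: mean RT {rts.mean():.2f} s, accuracy {acc:.2f}")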

  5. Multi-sensory Environments: An Exploration of Their Potential for Young People with Profound and Multiple Learning Difficulties.

    ERIC Educational Resources Information Center

    Mount, Helen; Cavet, Judith

    1995-01-01

    This article addresses the controversy concerning multisensory environments for children and adults with profound and multiple learning difficulties, from a British perspective. The need for critical evaluation of such multisensory interventions as the "snoezelen" approach and the paucity of relevant, rigorous research on educational…

  6. Multisensory Teaching of Basic Language Skills Activity Book. Revised Edition

    ERIC Educational Resources Information Center

    Carreker, Suzanne; Birsh, Judith R.

    2011-01-01

    With the new edition of this activity book--the companion to Judith Birsh's bestselling text, "Multisensory Teaching of Basic Language Skills"--students and practitioners will get the practice they need to use multisensory teaching effectively with students who have dyslexia and other learning disabilities. Ideal for both pre-service teacher…

  7. Realigning thunder and lightning: temporal adaptation to spatiotemporally distant events.

    PubMed

    Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel

    2013-01-01

    The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment (SJ) task was administered immediately afterwards. Temporal realignment between vision and audition was observed, in both Experiments 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).
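
    Temporal realignment of this kind is typically quantified by fitting the simultaneity-judgment curve before and after exposure and comparing the point of subjective simultaneity (PSS). A sketch with simulated judgment proportions (the SOAs, curve widths, and simulated shift are all invented):

        # A minimal sketch fitting a Gaussian SJ curve before and after
        # adaptation; all parameters are illustrative assumptions.
        import numpy as np
        from scipy.optimize import curve_fit

        def sj_curve(soa, pss, width, peak):
            """P('simultaneous') as a function of stimulus onset asynchrony."""
            return peak * np.exp(-0.5 * ((soa - pss) / width) ** 2)

        soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], float)
        rng = np.random.default_rng(6)

        # Positive SOA = vision leading; the post-exposure curve is shifted
        # toward vision-leading asynchronies.
        p_base = sj_curve(soas, 10, 120, 0.9) + 0.03 * rng.normal(size=soas.size)
        p_post = sj_curve(soas, 60, 120, 0.9) + 0.03 * rng.normal(size=soas.size)

        for label, p in [("baseline", p_base), ("post-exposure", p_post)]:
            (pss, width, peak), _ = curve_fit(sj_curve, soas, p,
                                              p0=(0.0, 100.0, 1.0))
            print(f"{label}: PSS = {pss:+.0f} ms, width = {width:.0f} ms")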

  8. Reduced orienting to audiovisual synchrony in infancy predicts autism diagnosis at 3 years of age.

    PubMed

    Falck-Ytter, Terje; Nyström, Pär; Gredebäck, Gustaf; Gliga, Teodora; Bölte, Sven

    2018-01-23

    Effective multisensory processing develops in infancy and is thought to be important for the perception of unified and multimodal objects and events. Previous research suggests impaired multisensory processing in autism, but its role in the early development of the disorder is yet uncertain. Here, using a prospective longitudinal design, we tested whether reduced visual attention to audiovisual synchrony is an infant marker of later-emerging autism diagnosis. We studied 10-month-old siblings of children with autism using an eye tracking task previously used in studies of preschoolers. The task assessed the effect of manipulations of audiovisual synchrony on viewing patterns while the infants were observing point light displays of biological motion. We analyzed the gaze data recorded in infancy according to diagnostic status at 3 years of age (DSM-5). Ten-month-old infants who later received an autism diagnosis did not orient to audiovisual synchrony expressed within biological motion. In contrast, both low-risk infants and high-risk siblings without autism at follow-up showed a strong preference for this type of information. No group differences were observed in terms of orienting to upright biological motion. This study suggests that reduced orienting to audiovisual synchrony within biological motion is an early sign of autism. The findings support the view that poor multisensory processing could be an important antecedent marker of this neurodevelopmental condition. © 2018 Association for Child and Adolescent Mental Health.

  9. Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration.

    PubMed

    Wahn, Basil; König, Peter

    2015-01-01

    Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs at a pre-attentive processing stage.

  10. Computing an optimal time window of audiovisual integration in focused attention tasks: illustrated by studies on effect of age and prior knowledge.

    PubMed

    Colonius, Hans; Diederich, Adele

    2011-07-01

    The concept of a "time window of integration" holds that information from different sensory modalities must not be perceived too far apart in time in order to be integrated into a multisensory perceptual event. Empirical estimates of window width differ widely, however, ranging from 40 to 600 ms depending on context and experimental paradigm. Searching for a theoretical derivation of window width, Colonius and Diederich (Front Integr Neurosci 2010) developed a decision-theoretic framework using a decision rule that is based on the prior probability of a common source, the likelihood of temporal disparities between the unimodal signals, and the payoff for making right or wrong decisions. Here, this framework is extended to the focused attention task, where subjects are asked to respond to signals from a target modality only. Invoking the framework of the time-window-of-integration (TWIN) model, an explicit expression for optimal window width is obtained. The approach is probed on two published focused attention studies. The first is a saccadic reaction time study assessing the efficiency with which multisensory integration varies as a function of aging. Although the window widths for young and older adults differ by nearly 200 ms, presumably due to their different peripheral processing speeds, neither of them deviates significantly from the optimal values. In the second study, head saccadic reaction times to a perfectly aligned audiovisual stimulus pair had been shown to depend on the prior probability of spatial alignment. Intriguingly, they reflected the magnitude of the time-window widths predicted by our decision-theoretic framework, i.e., a larger time window is associated with a higher prior probability.
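
    The decision rule at the heart of this framework can be written down compactly: integrate whenever the posterior probability of a common source, given the measured onset disparity, exceeds a payoff-determined criterion; the "window" is simply the disparity range where that holds. A sketch in that spirit, with invented densities and parameters:

        # A minimal sketch of a common-source decision rule in the spirit of
        # the TWIN framework; priors, densities, and the criterion are all
        # illustrative assumptions.
        import numpy as np
        from scipy import stats

        prior_common = 0.7        # prior probability of a common source
        sigma_common = 80.0       # disparity sd (ms) under a common source
        uniform_range = 1000.0    # disparity spread (ms) if independent

        def p_common(disparity_ms):
            """Posterior probability that both signals share one source."""
            like_c = stats.norm.pdf(disparity_ms, 0.0, sigma_common)
            like_i = 1.0 / uniform_range
            post = prior_common * like_c
            return post / (post + (1.0 - prior_common) * like_i)

        criterion = 0.5   # implied by payoffs for right and wrong decisions
        for d in (0, 100, 200, 300):
            print(f"disparity {d:3d} ms: P(common) = {p_common(d):.2f}, "
                  f"integrate = {p_common(d) > criterion}")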

  11. Multisensory stimuli elicit altered oscillatory brain responses at gamma frequencies in patients with schizophrenia

    PubMed Central

    Stone, David B.; Coffman, Brian A.; Bustillo, Juan R.; Aine, Cheryl J.; Stephen, Julia M.

    2014-01-01

    Deficits in auditory and visual unisensory responses are well documented in patients with schizophrenia; however, potential abnormalities elicited from multisensory audio-visual stimuli are less understood. Further, schizophrenia patients have shown abnormal patterns in task-related and task-independent oscillatory brain activity, particularly in the gamma frequency band. We examined oscillatory responses to basic unisensory and multisensory stimuli in schizophrenia patients (N = 46) and healthy controls (N = 57) using magnetoencephalography (MEG). Time-frequency decomposition was performed to determine regions of significant changes in gamma band power by group in response to unisensory and multisensory stimuli relative to baseline levels. Results showed significant behavioral differences between groups in response to unisensory and multisensory stimuli. In addition, time-frequency analysis revealed significant decreases and increases in gamma-band power in schizophrenia patients relative to healthy controls, which emerged both early and late over both sensory and frontal regions in response to unisensory and multisensory stimuli. Unisensory gamma-band power predicted multisensory gamma-band power differently by group. Furthermore, gamma-band power in these regions predicted performance in select measures of the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) test battery differently by group. These results reveal a unique pattern of task-related gamma-band power in schizophrenia patients relative to controls that may indicate reduced inhibition in combination with impaired oscillatory mechanisms in patients with schizophrenia. PMID:25414652
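
    Gamma-band analyses of this kind rest on a time-frequency decomposition of each sensor's signal, with power expressed relative to a prestimulus baseline. A compact sketch on a synthetic single channel (the sampling rate and window lengths are arbitrary choices, not the study's MEG pipeline):

        # A minimal sketch of baseline-normalized gamma power on synthetic
        # data via a short-time Fourier decomposition.
        import numpy as np
        from scipy import signal

        fs = 600.0
        t = np.arange(-0.5, 1.0, 1.0 / fs)       # stimulus onset at t = 0
        rng = np.random.default_rng(5)
        data = rng.normal(size=t.size)
        burst = (t > 0.1) & (t < 0.4)             # injected 40-Hz response
        data[burst] += 2.0 * np.sin(2 * np.pi * 40 * t[burst])

        f_sg, t_sg, Sxx = signal.spectrogram(data, fs=fs, nperseg=256,
                                             noverlap=224)
        t_sg += t[0]                              # re-reference times to onset

        gamma = (f_sg >= 30) & (f_sg <= 80)
        gamma_power = Sxx[gamma].mean(axis=0)     # mean gamma power over time

        baseline = gamma_power[t_sg < 0].mean()   # prestimulus epoch
        rel_db = 10 * np.log10(gamma_power / baseline)
        print(f"peak gamma change {rel_db.max():+.1f} dB "
              f"at t = {t_sg[rel_db.argmax()]:.2f} s")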

  12. Proprioceptive feedback determines visuomotor gain in Drosophila

    PubMed Central

    Bartussek, Jan; Lehmann, Fritz-Olaf

    2016-01-01

    Multisensory integration is a prerequisite for effective locomotor control in most animals. In particular, the impressive aerial performance of insects relies on rapid and precise integration of multiple sensory modalities that provide feedback on different time scales. In flies, continuous visual signalling from the compound eyes is fused with phasic proprioceptive feedback to ensure precise neural activation of wing steering muscles (WSM) within narrow temporal phase bands of the stroke cycle. This phase-locked activation relies on mechanoreceptors distributed over wings and gyroscopic halteres. Here we investigate visual steering performance of tethered flying fruit flies with reduced haltere and wing feedback signalling. Using a flight simulator, we evaluated visual object fixation behaviour, optomotor altitude control and saccadic escape reflexes. The behavioural assays show an antagonistic effect of wing and haltere signalling on visuomotor gain during flight. Compared with controls, suppression of haltere feedback attenuates while suppression of wing feedback enhances the animal’s wing steering range. Our results suggest that the generation of motor commands owing to visual perception is dynamically controlled by proprioception. We outline a potential physiological mechanism based on the biomechanical properties of WSM and sensory integration processes at the level of motoneurons. Collectively, the findings contribute to our general understanding of how moving animals integrate sensory information with dynamically changing temporal structure. PMID:26909184

  13. Comparative Effects of Multisensory and Metacognitive Instructional Approaches on English Vocabulary Achievement of Underachieving Nigerian Secondary School Students

    ERIC Educational Resources Information Center

    Adeniyi, Folakemi O.; Lawal, R. Adebayo

    2012-01-01

    The purpose of this study was to determine the relative effects of three instructional approaches (Multisensory, Metacognitive, and a combination of Multisensory and Metacognitive) on the vocabulary achievement of underachieving secondary school students. The study adopted the quasi-experimental design in which a…

  14. A Review of Multi-Sensory Technologies in a Science, Technology, Engineering, Arts and Mathematics (STEAM) Classroom

    ERIC Educational Resources Information Center

    Taljaard, Johann

    2016-01-01

    This article reviews the literature on multi-sensory technology and, in particular, looks at answering the question: "What multi-sensory technologies are available to use in a science, technology, engineering, arts and mathematics (STEAM) classroom, and do they affect student engagement and learning outcomes?" Here engagement is defined…

  15. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    PubMed

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. It feels like it’s me: interpersonal multisensory stimulation enhances visual remapping of touch from other to self

    PubMed Central

    Cardini, Flavia; Tajadura-Jiménez, Ana; Serino, Andrea; Tsakiris, Manos

    2013-01-01

    Understanding other people’s feelings in social interactions depends on the ability to map onto our body the sensory experiences we observed on other people’s bodies. It has been shown that the perception of tactile stimuli on the face is improved when concurrently viewing a face being touched. This Visual Remapping of Touch (VRT) is enhanced the more similar others are perceived to be to the self, and is strongest when viewing one’s face. Here, we ask whether altering self-other boundaries can in turn change the VRT effect. We used the enfacement illusion, which relies on synchronous interpersonal multisensory stimulation (IMS), to manipulate self-other boundaries. Following synchronous, but not asynchronous, IMS, the self-related enhancement of the VRT extended to the other individual. These findings suggest that shared multisensory experiences represent one key way to overcome the boundaries between self and others, as evidenced by changes in somatosensory processing of tactile stimuli on one’s own face when concurrently viewing another person’s face being touched. PMID:23276110

  17. Applying the Neurodynamics of Emotional Circular Causalities in Psychosocial and Cognitive Therapy using Multi-Sensory Environments: An ORBDE Case Study Analysis.

    PubMed

    Ryan, Janice

    2017-10-01

    This exploratory, evidence-based practice research study focuses on presenting a plausible mesoscopic brain dynamics hypothesis for the benefits of treating clients with psychosocial and cognitive challenges using a mindful therapeutic approach and multi-sensory environments. After an extensive neuroscientific review of the therapeutic benefits of mindfulness, a multi-sensory environment is presented as a window of therapeutic opportunity to more quickly and efficiently facilitate the neurobiological experience of becoming more mindful or conscious of self and environment. The complementary relationship between the default mode network and the executive attention network is offered as a neurobiological hypothesis that could explain positive occupational engagement pattern shifts in a case study video of a hospice client with advanced dementia during multi-sensory environment treatment. Orbital Decomposition is used for a video analysis that shows a significant behavioral pattern shift consistent with dampening of the perceptual system attractors that contribute to negative emotional circular causalities in a variety of client populations. This treatment approach may also prove to be valuable for any person who has developed circular causalities due to feelings of isolation, victimization, or abuse. A case is made for broader applications of this intervention that may positively influence perception during the information transfer and processing of hippocampal learning. Future research is called for to determine if positive affective, interpersonal, and occupational engagement pattern shifts during treatment are related to the improved default mode network-executive attention network synchrony characteristic of increased mindfulness.

  18. The multisensory brain and its ability to learn music.

    PubMed

    Zimmerman, Emily; Lahav, Amir

    2012-04-01

    Playing a musical instrument requires a complex skill set that depends on the brain's ability to quickly integrate information from multiple senses. It has been well documented that intensive musical training alters brain structure and function within and across multisensory brain regions, supporting the experience-dependent plasticity model. Here, we argue that this experience-dependent plasticity occurs because of the multisensory nature of the brain and may be an important contributing factor to musical learning. This review highlights key multisensory regions within the brain and discusses their role in the context of music learning and rehabilitation. © 2012 New York Academy of Sciences.

  19. Auditory-Motor Rhythms and Speech Processing in French and German Listeners

    PubMed Central

    Falk, Simone; Volpi-Moncorger, Chloé; Dalla Bella, Simone

    2017-01-01

    Moving to a speech rhythm can enhance verbal processing in the listener by increasing temporal expectancies (Falk and Dalla Bella, 2016). Here we tested whether this hypothesis holds for prosodically diverse languages such as German (a lexical stress-language) and French (a non-stress language). Moreover, we examined the relation between motor performance and the benefits for verbal processing as a function of language. Sixty-four participants, 32 German and 32 French native speakers, detected subtle word changes in accented positions in metrically structured sentences to which they had previously tapped with their index finger. Before each sentence, they were cued by a metronome to tap either congruently (i.e., to accented syllables) or incongruently (i.e., to non-accented parts) with the following speech stimulus. Both French and German speakers detected words better when cued to tap congruently than when cued to tap incongruently. Detection performance was predicted by participants' motor performance in the non-verbal cueing phase. Moreover, tapping rate while participants tapped to speech predicted detection differently for the two language groups, particularly in the incongruent tapping condition. We discuss our findings in light of the rhythmic differences between the two languages and with respect to recent theories of expectancy-driven and multisensory speech processing. PMID:28443036

  20. Neurophysiological Indices of Atypical Auditory Processing and Multisensory Integration Are Associated with Symptom Severity in Autism

    ERIC Educational Resources Information Center

    Brandwein, Alice B.; Foxe, John J.; Butler, John S.; Frey, Hans-Peter; Bates, Juliana C.; Shulman, Lisa H.; Molholm, Sophie

    2015-01-01

    Atypical processing and integration of sensory inputs are hypothesized to play a role in unusual sensory reactions and social-cognitive deficits in autism spectrum disorder (ASD). Reports on the relationship between objective metrics of sensory processing and clinical symptoms, however, are surprisingly sparse. Here we examined the relationship…

  1. Looking for myself: current multisensory input alters self-face recognition.

    PubMed

    Tsakiris, Manos

    2008-01-01

    How do I know the person I see in the mirror is really me? Is it because I know the person simply looks like me, or is it because the mirror reflection moves when I move, and I see it being touched when I feel the touch myself? Studies of face recognition suggest that visual recognition of stored visual features informs self-face recognition. In contrast, body-recognition studies conclude that multisensory integration is the main cue to selfhood. The present study investigates for the first time the specific contribution of current multisensory input to self-face recognition. Participants were stroked on their face while they were looking at a morphed face being touched in synchrony or asynchrony. Before and after the visuo-tactile stimulation, participants performed a self-recognition task. The results show that multisensory signals have a significant effect on self-face recognition. Synchronous tactile stimulation while watching another person's face being similarly touched produced a bias in recognizing one's own face, in the direction of the other person being included in the representation of one's own face. Multisensory integration can update cognitive representations of one's body, such as the sense of ownership. The present study extends this converging evidence by showing that the correlation of synchronous multisensory signals also updates the representation of one's face. The face is a key feature of our identity, but at the same time it is a source of rich multisensory experiences used to maintain or update self-representations.

  2. The Theory of Localist Representation and of a Purely Abstract Cognitive System: The Evidence from Cortical Columns, Category Cells, and Multisensory Neurons.

    PubMed

    Roy, Asim

    2017-01-01

    The debate about representation in the brain and the nature of the cognitive system has been going on for decades. This paper examines the neurophysiological evidence, primarily from single-cell recordings, to get a better perspective on both issues. After an initial review of some basic concepts, the paper reviews the data from single-cell recordings - in cortical columns and of category-selective and multisensory neurons. In neuroscience, columns in the neocortex (cortical columns) are understood to be a basic functional/computational unit. The paper reviews the fundamental discoveries about the columnar organization and finds that it reveals a massively parallel search mechanism. This columnar organization could be the most extensive neurophysiological evidence for the widespread use of localist representation in the brain. The paper also reviews studies of category-selective cells. The evidence for category-selective cells reveals that localist representation is also used to encode complex abstract concepts at the highest levels of processing in the brain. A third major issue is the nature of the cognitive system in the brain and whether there is a form that is purely abstract and encoded by single cells. To provide evidence for a single-cell-based, purely abstract cognitive system, the paper reviews some of the findings related to multisensory cells. It appears that there is widespread use of multisensory cells in the brain in the same areas where sensory processing takes place. In addition, there is evidence for abstract, modality-invariant cells at higher levels of cortical processing. Overall, this reveals the existence of a purely abstract cognitive system in the brain. The paper also argues that since there is no evidence for dense distributed representation, and since sparse representation is actually used to encode memories, there is no evidence for distributed representation in the brain. Overall, it appears that, at an abstract level, the brain is a massively parallel, distributed computing system that is symbolic. The paper also explains how grounded cognition and other theories of the brain are fully compatible with localist representation and a purely abstract cognitive system.

  3. Change my body, change my mind: the effects of illusory ownership of an outgroup hand on implicit attitudes toward that outgroup.

    PubMed

    Farmer, Harry; Maister, Lara; Tsakiris, Manos

    2014-01-13

    The effect of multisensory-induced changes on body ownership and self-awareness using bodily illusions has been well established. More recently, experimental manipulations of bodily illusions have been combined with social cognition tasks to investigate whether changes in body ownership can in turn change the way we perceive others. For example, experiencing ownership over a dark-skin rubber hand reduces implicit bias against dark-skin groups. Several studies have also shown that the processing of skin color and facial features plays an important role in judgements of racial typicality and racial categorization, independently and in an additive manner. The present study examined whether using multisensory stimulation to induce feelings of body ownership over a dark-skin rubber hand would lead to an increase in positive attitudes toward black faces. Here we show that the induced ownership of a body part of a different skin color affected the participants' implicit attitudes when processing facial features, in addition to the processing of skin color shown previously. Furthermore, when the levels of pre-existing attitudes toward black people were taken into account, the effect of the rubber hand illusion on post-stimulation implicit attitudes was significant only for those participants who had a negative initial attitude toward black people, with no significant effects found for those who had positive initial attitudes. Taken together, our findings corroborate the hypothesis that the representation of the self and its relation to others, as given to us by body-related multisensory processing, is critical in maintaining but also in changing social attitudes.

  4. A biologically inspired neural model for visual and proprioceptive integration including sensory training.

    PubMed

    Saidi, Maryam; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Lari, Abdolaziz Azizi

    2013-12-01

    Humans perceive the surrounding world by integrating information from different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian inference for a single cause (source) and on causal Bayesian inference for two causes (e.g., two senses such as vision and proprioception). In this paper, a new recurrent neural model is presented for the integration of visual and proprioceptive information. The model is based on population coding and is able to mimic the multisensory integration performed by neural centers in the human brain. The simulation results agree with those achieved by causal Bayesian inference. The model can also simulate the sensory training process for visual and proprioceptive information in humans; this training process has received little attention in the multisensory integration literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effect of training in both modalities, i.e., visual and proprioceptive, and compares them through a set of new experiments in which the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioceptive training and eight subjects with visual training. The results show three important points: (1) the visual learning rate is significantly higher than the proprioceptive one; (2) mean visual and proprioceptive errors both decreased with training, but statistical analysis shows that this decrease is significant for proprioceptive error and non-significant for visual error; and (3) visual errors in the training phase, even at its beginning, were much smaller than errors in the main test stage, because in the main test the subject had to attend to two senses. The experimental results are in agreement with the results of the neural model simulation.
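
    The causal Bayesian benchmark that the model is compared against is not spelled out in the abstract, but for a single common cause with Gaussian noise the optimal estimate reduces to the familiar reliability-weighted average of the two cues. The following is a minimal sketch of that benchmark only, not of the paper's recurrent population-coding model; all numbers are illustrative.

    ```python
    import numpy as np

    def fuse_cues(x_vis, var_vis, x_prop, var_prop):
        """Reliability-weighted fusion of a visual and a proprioceptive
        estimate of hand position (single-cause Bayesian/MLE model with
        Gaussian noise): each cue is weighted by its inverse variance."""
        w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
        x_fused = w_vis * x_vis + (1.0 - w_vis) * x_prop
        var_fused = 1.0 / (1.0 / var_vis + 1.0 / var_prop)
        return x_fused, var_fused

    # Vision is more reliable here, so it dominates; the fused variance
    # is smaller than either unisensory variance.
    x, v = fuse_cues(x_vis=10.0, var_vis=1.0, x_prop=14.0, var_prop=4.0)
    print(x, v)  # about 10.8 and 0.8
    ```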

  5. Multisensory and modality specific processing of visual speech in different regions of the premotor cortex

    PubMed Central

    Callan, Daniel E.; Jones, Jeffery A.; Callan, Akiko

    2014-01-01

    Behavioral and neuroimaging studies have demonstrated that brain regions involved with speech production also support speech perception, especially under degraded conditions. The premotor cortex (PMC) has been shown to be active during both observation and execution of action ("Mirror System" properties), and may facilitate speech perception by mapping unimodal and multimodal sensory features onto articulatory speech gestures. For this functional magnetic resonance imaging (fMRI) study, participants identified vowels produced by a speaker in audio-visual (saw the speaker's articulating face and heard her voice), visual only (only saw the speaker's articulating face), and audio only (only heard the speaker's voice) conditions with varying audio signal-to-noise ratios in order to determine the regions of the PMC involved with multisensory and modality-specific processing of visual speech gestures. The task was designed so that identification could be made with a high level of accuracy from visual only stimuli, to control for task difficulty and differences in intelligibility. The results of the fMRI analysis for the visual only and audio-visual conditions showed overlapping activity in the inferior frontal gyrus and PMC. The left ventral inferior premotor cortex (PMvi) showed properties of multimodal (audio-visual) enhancement with a degraded auditory signal. The left inferior parietal lobule and right cerebellum also showed these properties. The left ventral superior and dorsal premotor cortex (PMvs/PMd) did not show this multisensory enhancement effect, but there was greater activity for the visual only over audio-visual conditions in these areas. The results suggest that the inferior regions of the ventral premotor cortex are involved with integrating multisensory information, whereas more superior and dorsal regions of the PMC are involved with mapping unimodal (in this case visual) sensory features of the speech signal onto articulatory speech gestures. PMID:24860526

  6. Multisensory integration: flexible use of general operations

    PubMed Central

    van Atteveldt, Nienke; Murray, Micah M.; Thut, Gregor; Schroeder, Charles

    2014-01-01

    Research into the anatomical substrates and “principles” for integrating inputs from separate sensory surfaces has yielded divergent findings. This suggests that multisensory integration is flexible and context-dependent, and underlines the need for dynamically adaptive neuronal integration mechanisms. We propose that flexible multisensory integration can be explained by a combination of canonical, population-level integrative operations, such as oscillatory phase-resetting and divisive normalization. These canonical operations subsume multisensory integration into a fundamental set of principles as to how the brain integrates all sorts of information, and they are being used proactively and adaptively. We illustrate this proposition by unifying recent findings from different research themes such as timing, behavioral goal and experience-related differences in integration. PMID:24656248

  7. Interoceptive signals impact visual processing: Cardiac modulation of visual body perception.

    PubMed

    Ronchi, Roberta; Bernasconi, Fosco; Pfeiffer, Christian; Bello-Ruiz, Javier; Kaliuzhna, Mariia; Blanke, Olaf

    2017-09-01

    Multisensory perception research has largely focused on exteroceptive signals, but recent evidence has revealed the integration of interoceptive signals with exteroceptive information. Such research revealed that heartbeat signals affect sensory (e.g., visual) processing; however, it is unknown how they impact the perception of body images. Here we linked our participants' heartbeat to visual stimuli and investigated the spatio-temporal brain dynamics of cardio-visual stimulation on the processing of human body images. We recorded visual evoked potentials with 64-channel electroencephalography while showing a body or a scrambled body (control) that appeared either at the frequency of the participants' on-line recorded heartbeat or not (non-synchronous control). Extending earlier studies, we found a body-independent effect, with cardiac signals enhancing visual processing during two time periods (77-130 ms and 145-246 ms). Within the second (later) time window we detected a second effect, characterised by enhanced activity in parietal, temporo-occipital, inferior frontal, and right basal ganglia-insula regions, but only when non-scrambled body images were flashed synchronously with the heartbeat (208-224 ms). In conclusion, our results highlight the role of interoceptive information in the visual processing of human body pictures within a network that integrates cardio-visual signals of relevance for perceptual and cognitive aspects of visual body processing. Copyright © 2017 Elsevier Inc. All rights reserved.
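
    Presenting stimuli at the frequency of the on-line recorded heartbeat presupposes detecting R-peaks in the ECG. The abstract does not describe the authors' detector, so the sketch below is only a rough offline illustration; the thresholds, the fixed delay, and the jitter range are assumptions.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def r_peak_times(ecg, fs):
        """Crude offline R-peak detection: prominent samples spaced at
        least 0.4 s apart (a real cardio-visual experiment would use a
        more robust online detector)."""
        peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                              height=np.percentile(ecg, 95))
        return peaks / fs  # peak times in seconds

    def flash_onsets(peak_times, delay=0.1, synchronous=True, seed=0):
        """Synchronous stimuli follow each R-peak at a fixed delay; the
        non-synchronous control keeps the rate but jitters the onsets."""
        if synchronous:
            return peak_times + delay
        rng = np.random.default_rng(seed)
        return peak_times + delay + rng.uniform(-0.3, 0.3, peak_times.size)
    ```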

  8. Deficient multisensory integration in schizophrenia: an event-related potential study.

    PubMed

    Stekelenburg, Jeroen J; Maes, Jan Pieter; Van Gool, Arthur R; Sitskoorn, Margriet; Vroomen, Jean

    2013-07-01

    In many natural audiovisual events (e.g., the sight of a face articulating the syllable /ba/), the visual signal precedes the sound and thus allows observers to predict the onset and the content of the sound. In healthy adults, the N1 component of the event-related brain potential (ERP), reflecting neural activity associated with basic sound processing, is suppressed if a sound is accompanied by a video that reliably predicts sound onset. If the sound does not match the content of the video (e.g., hearing /ba/ while lipreading /fu/), the later occurring P2 component is affected. Here, we examined whether these visual information sources affect auditory processing in patients with schizophrenia. The electroencephalography (EEG) was recorded in 18 patients with schizophrenia and compared with that of 18 healthy volunteers. As stimuli we used video recordings of natural actions in which visual information preceded and predicted the onset of the sound that was either congruent or incongruent with the video. For the healthy control group, visual information reduced the auditory-evoked N1 if compared to a sound-only condition, and stimulus-congruency affected the P2. This reduction in N1 was absent in patients with schizophrenia, and the congruency effect on the P2 was diminished. Distributed source estimations revealed deficits in the network subserving audiovisual integration in patients with schizophrenia. The results show a deficit in multisensory processing in patients with schizophrenia and suggest that multisensory integration dysfunction may be an important and, to date, under-researched aspect of schizophrenia. Copyright © 2013. Published by Elsevier B.V.

  9. Realigning Thunder and Lightning: Temporal Adaptation to Spatiotemporally Distant Events

    PubMed Central

    Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel

    2013-01-01

    The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment (SJ) task was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiments 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events). PMID:24391928

  10. The priming function of in-car audio instruction.

    PubMed

    Keyes, Helen; Whitmore, Antony; Naneva, Stanislava; McDermott, Daragh

    2018-05-01

    Studies to date have focused on the priming power of visual road signs, but not the priming potential of audio road-scene instruction. Here, the relative priming power of visual, audio, and multisensory road-scene instructions was assessed. In a lab-based study, participants responded to target road-scene turns following visual, audio, or multisensory road-turn primes that were either congruent or incongruent with the target direction, or following control primes. All types of instruction (visual, audio, and multisensory) successfully primed responses to a road scene. Responses to multisensory-primed targets (both audio and visual) were faster than responses to either audio or visual primes alone. Incongruent audio primes did not affect performance negatively in the manner of incongruent visual or multisensory primes. The results suggest that audio instructions have the potential to prime drivers to respond quickly and safely to their road environment. Peak performance will be observed if audio and visual road instruction primes can be timed to co-occur.

  11. Learning with the City via Enchantment: Photo-Walks as Creative Encounters

    ERIC Educational Resources Information Center

    Pyyry, Noora

    2016-01-01

    In this paper, I approach learning as a process of rethinking the world that happens via the surprising experience of "enchantment." This process becomes possible by dwelling, that is, by forming meaningful multisensory engagements with one's surroundings. I present my arguments by discussing photo-walks that students conducted in…

  12. Predictable Locations Aid Early Object Name Learning

    ERIC Educational Resources Information Center

    Benitez, Viridiana L.; Smith, Linda B.

    2012-01-01

    Expectancy-based localized attention has been shown to promote the formation and retrieval of multisensory memories in adults. Three experiments show that these processes also characterize attention and learning in 16- to 18-month old infants and, moreover, that these processes may play a critical role in supporting early object name learning. The…

  13. Sensory Mode and "Information Load": Examining the Effects of Timing on Multisensory Processing.

    ERIC Educational Resources Information Center

    Tiene, Drew

    2000-01-01

    Discussion of the development of instructional multimedia materials focuses on a study of undergraduates that examined how the use of visual icons affected learning, differences in the instructional effectiveness of visual versus auditory processing of the same information, and timing (whether simultaneous or sequential presentation is more…

  14. Sleeping on the rubber-hand illusion: Memory reactivation during sleep facilitates multisensory recalibration.

    PubMed

    Honma, Motoyasu; Plass, John; Brang, David; Florczak, Susan M; Grabowecky, Marcia; Paller, Ken A

    2016-01-01

    Plasticity is essential in body perception so that physical changes in the body can be accommodated and assimilated. Multisensory integration of visual, auditory, tactile, and proprioceptive signals contributes both to conscious perception of the body's current state and to associated learning. However, much is unknown about how novel information is assimilated into body perception networks in the brain. Sleep-based consolidation can facilitate various types of learning via the reactivation of networks involved in prior encoding or through synaptic down-scaling. Sleep may likewise contribute to perceptual learning of bodily information by providing an optimal time for multisensory recalibration. Here we used methods for targeted memory reactivation (TMR) during slow-wave sleep to examine the influence of sleep-based reactivation of experimentally induced alterations in body perception. The rubber-hand illusion was induced with concomitant auditory stimulation in 24 healthy participants on 3 consecutive days. While each participant was sleeping in his or her own bed during intervening nights, electrophysiological detection of slow-wave sleep prompted covert stimulation with either the sound heard during illusion induction, a counterbalanced novel sound, or neither. TMR systematically enhanced feelings of bodily ownership after subsequent inductions of the rubber-hand illusion. TMR also enhanced spatial recalibration of perceived hand location in the direction of the rubber hand. This evidence for a sleep-based facilitation of a body-perception illusion demonstrates that the spatial recalibration of multisensory signals can be altered overnight to stabilize new learning of bodily representations. Sleep-based memory processing may thus constitute a fundamental component of body-image plasticity.

  15. On the role of crossmodal prediction in audiovisual emotion perception.

    PubMed

    Jessen, Sarah; Kotz, Sonja A

    2013-01-01

    Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency with which others' emotions are recognized. But how and when exactly do the different modalities interact? One aspect of multisensory perception that has received increasing interest in recent years is the concept of crossmodal prediction. In emotion perception, as in most other settings, visual information precedes the auditory information. Because it leads, visual information can facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, so far it has not been addressed in audiovisual emotion perception. Based on the current state of the art in (a) crossmodal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow more reliable prediction of auditory information than non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 EEG response and the duration of visual emotional, but not non-emotional, information. If the assumption that emotional content allows more reliable prediction can be corroborated in future studies, crossmodal prediction is a crucial factor in our understanding of multisensory emotion perception.

  16. Body schema and corporeal self-recognition in the alien hand syndrome.

    PubMed

    Olgiati, Elena; Maravita, Angelo; Spandri, Viviana; Casati, Roberta; Ferraro, Francesco; Tedesco, Lucia; Agostoni, Elio Clemente; Bolognini, Nadia

    2017-07-01

    The alien hand syndrome (AHS) is a rare neuropsychological disorder characterized by involuntary, yet purposeful, hand movements. Patients with AHS typically complain about a loss of agency associated with a feeling of estrangement for actions performed by the affected limb. The present study explores the integrity of body representation in AHS, focusing on two main processes: multisensory integration and visual self-recognition of body parts. Three patients affected by AHS following a right-hemisphere stroke, with clinical symptoms akin to the posterior variant of AHS, were tested, and their performance was compared with that of 18 age-matched healthy controls. The AHS patients and controls underwent two experimental tasks: a same-different visual matching task for body postures, which assessed the ability to use one's own body schema to encode others' postural changes (Experiment 1), and an explicit self-hand recognition task, which assessed the ability to visually recognize one's own hands (Experiment 2). As compared to controls, all AHS patients were unable to access a reliable multisensory representation of their alien hand and use it for decoding others' postural changes; however, they could rely on an efficient multisensory representation of their intact (ipsilesional) hand. Two AHS patients also presented with a specific impairment in the visual self-recognition of their alien hand, but normal recognition of their intact hand. This evidence suggests that AHS following a right-hemisphere stroke may involve a disruption of the multisensory representation of the alien limb; self-hand recognition mechanisms, by contrast, may be spared. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Dance experience sculpts aesthetic perception and related brain circuits

    PubMed Central

    Kirsch, Louise P; Dawson, Kelvin; Cross, Emily S

    2015-01-01

    Previous research on aesthetic preferences demonstrates that people are more likely to judge a stimulus as pleasing if it is familiar. Although general familiarity and liking are related, it is less clear how motor familiarity, or embodiment, relates to a viewer's aesthetic appraisal. This study directly compared how learning to embody an action impacts the neural response when watching and aesthetically evaluating the same action. Twenty-two participants trained for 4 days on dance sequences. Each day they physically rehearsed one set of sequences, passively watched a second set, listened to the music of a third set, and a fourth set remained untrained. Functional MRI was obtained prior to and immediately following the training period, as were affective and physical ability ratings for each dance sequence. This approach enabled precise comparison of self-report methods of embodiment with nonbiased, empirical measures of action performance. Results suggest that after experience, participants most enjoy watching those dance sequences they danced or observed. Moreover, brain regions involved in mediating the aesthetic response shift from subcortical regions associated with dopaminergic reward processing to posterior temporal regions involved in processing multisensory integration, emotion, and biological motion. PMID:25773627

  18. Recent advances in magnesium assessment: From single selective sensors to multisensory approach.

    PubMed

    Lvova, Larisa; Gonçalves, Carla Guanais; Di Natale, Corrado; Legin, Andrey; Kirsanov, Dmitry; Paolesse, Roberto

    2018-03-01

    The development of efficient analytical procedures for the selective detection of magnesium is an important analytical task, since this element is one of the most abundant metals in cells and plays an essential role in a wide range of cellular processes. Magnesium imbalance has been related to several pathologies and diseases in plants and animals, as well as in humans, yet methods suitable for magnesium detection, especially in living samples and biological environments, are scarce. Chemical sensors, owing to their high reliability, simple handling and instrumentation, and fast, real-time, in situ and on-site analysis, are promising candidates for magnesium analysis and represent an attractive alternative to standard instrumental methods. Here, the recent achievements in the development of chemical sensors for magnesium ion detection over the last decade are reviewed. The working principles and the main types of sensors applied are described. Focus is placed on optical sensors and on applications of multisensory systems for magnesium assessment in different media. Further, a critical outlook is presented on the employment of the multisensory approach in biological samples, in comparison to the application of single selective sensors. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. The multisensory basis of the self: From body to identity to others.

    PubMed

    Tsakiris, Manos

    2017-04-01

    By grounding the self in the body, experimental psychology has taken the body as the starting point for a science of the self. One fundamental dimension of the bodily self is the sense of body ownership that refers to the special perceptual status of one's own body, the feeling that "my body" belongs to me. The primary aim of this review article is to highlight recent advances in the study of body ownership and our understanding of the underlying neurocognitive processes in three ways. I first consider how the sense of body ownership has been investigated and elucidated in the context of multisensory integration. Beyond exteroception, recent studies have considered how this exteroceptively driven sense of body ownership can be linked to the other side of embodiment, that of the unobservable, yet felt, interoceptive body, suggesting that these two sides of embodiment interact to provide a unifying bodily self. Lastly, the multisensorial understanding of the self has been shown to have implications for our understanding of social relationships, especially in the context of self-other boundaries. Taken together, these three research strands motivate a unified model of the self inspired by current predictive coding models.

  2. Multisensory Integration in the Virtual Hand Illusion with Active Movement

    PubMed Central

    Satoh, Satoru; Hachimura, Kozaburo

    2016-01-01

    Improving the sense of immersion is one of the core issues in virtual reality. Perceptual illusions of ownership can be perceived over a virtual body in a multisensory virtual reality environment. Rubber Hand and Virtual Hand Illusions showed that body ownership can be manipulated by applying suitable visual and tactile stimulation. In this study, we investigate the effects of multisensory integration in the Virtual Hand Illusion with active movement. A virtual xylophone playing system which can interactively provide synchronous visual, tactile, and auditory stimulation was constructed. We conducted two experiments regarding different movement conditions and different sensory stimulations. Our results demonstrate that multisensory integration with free active movement can improve the sense of immersion in virtual reality. PMID:27847822

  3. Severe Multisensory Speech Integration Deficits in High-Functioning School-Aged Children with Autism Spectrum Disorder (ASD) and Their Resolution During Early Adolescence

    PubMed Central

    Foxe, John J.; Molholm, Sophie; Del Bene, Victor A.; Frey, Hans-Peter; Russo, Natalie N.; Blanco, Daniella; Saint-Amour, Dave; Ross, Lars A.

    2015-01-01

    Under noisy listening conditions, visualizing a speaker's articulations substantially improves speech intelligibility. This multisensory speech integration ability is crucial to effective communication, and the appropriate development of this capacity greatly impacts a child's ability to successfully navigate educational and social settings. Research shows that multisensory integration abilities continue developing late into childhood. The primary aim here was to track the development of these abilities in children with autism, since multisensory deficits are increasingly recognized as a component of the autism spectrum disorder (ASD) phenotype. The abilities of high-functioning ASD children (n = 84) to integrate seen and heard speech were assessed cross-sectionally, while environmental noise levels were systematically manipulated, comparing them with age-matched neurotypical children (n = 142). Severe integration deficits were uncovered in ASD, which were increasingly pronounced as background noise increased. These deficits were evident in school-aged ASD children (5–12 year olds), but were fully ameliorated in ASD children entering adolescence (13–15 year olds). The severity of multisensory deficits uncovered has important implications for educators and clinicians working in ASD. We consider the observation that the multisensory speech system recovers substantially in adolescence as an indication that it is likely amenable to intervention during earlier childhood, with potentially profound implications for the development of social communication abilities in ASD children. PMID:23985136

  4. Multisensory environments for leisure: promoting well-being in nursing home residents with dementia.

    PubMed

    Cox, Helen; Burns, Ian; Savage, Sally

    2004-02-01

    Multisensory environments such as Snoezelen rooms are becoming increasingly popular in health care facilities for older individuals. There is limited reliable evidence of the benefits of such innovations, and the effect they have on residents, caregivers, and visitors in these facilities. This two-stage project examined how effective two types of multisensory environments were in improving the well-being of older individuals with dementia. The two multisensory environments were a Snoezelen room and a landscaped garden. These environments were compared to the experience of the normal living environment. The observed response of 24 residents with dementia in a nursing home was measured during time spent in the Snoezelen room, in the garden, and in the living room. In the second part of the project, face-to-face interviews were conducted with six caregivers and six visitors to obtain their responses to the multisensory environments. These interviews identified the components of the environments most used and enjoyed by residents and the ways in which they could be improved to maximize well-being.

  5. Alterations in audiovisual simultaneity perception in amblyopia.

    PubMed

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2017-01-01

    Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged the simultaneity of a flash and a click presented with both eyes viewing. The signal onset asynchrony (SOA) varied from 0 ms to 450 ms for auditory-lead and visual-lead conditions. A subset of participants with amblyopia (n = 6) was tested monocularly. Compared to the control group, the auditory-lead side of the AV simultaneity window was widened by 48 ms (36%; p = 0.002), whereas that of the visual-lead side was widened by 86 ms (37%; p = 0.02). The overall mean window width was 500 ms, compared to 366 ms among controls (37% wider; p = 0.002). Among participants with amblyopia, the simultaneity window parameters were unchanged by viewing condition, but subgroup analysis revealed differential effects on the parameters by amblyopia severity, etiology, and foveal suppression status. Possible mechanisms to explain these findings include visual temporal uncertainty, interocular perceptual latency asymmetry, and disruption of normal developmental tuning of sensitivity to audiovisual asynchrony.
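
    For readers unfamiliar with the measure: the AV simultaneity window is typically estimated by fitting the proportion of "simultaneous" responses across SOAs and reading off where the fitted curve crosses 50%. Below is a sketch using a Gaussian fit; the group data are invented, and the paper's exact fitting procedure is not stated in the abstract.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(soa, amp, mu, sigma):
        # Proportion of "simultaneous" responses as a function of SOA
        # (negative = auditory lead, positive = visual lead), in ms.
        return amp * np.exp(-(soa - mu) ** 2 / (2 * sigma ** 2))

    soas = np.array([-450, -300, -150, 0, 150, 300, 450], dtype=float)
    p_sim = np.array([0.05, 0.25, 0.70, 0.95, 0.85, 0.45, 0.10])

    (amp, mu, sigma), _ = curve_fit(gauss, soas, p_sim, p0=[1.0, 0.0, 150.0])

    # Window = SOA range where the fitted curve exceeds 0.5; solving
    # amp * exp(-(x - mu)^2 / (2 sigma^2)) = 0.5 for x gives:
    half_width = sigma * np.sqrt(2 * np.log(amp / 0.5))
    print(f"window: {mu - half_width:.0f} ms to {mu + half_width:.0f} ms")
    ```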

  6. Multisensory flavor perception.

    PubMed

    Spence, Charles

    2015-03-26

    The perception of flavor is perhaps the most multisensory of our everyday experiences. The latest research by psychologists and cognitive neuroscientists increasingly reveals the complex multisensory interactions that give rise to the flavor experiences we all know and love, demonstrating how they rely on the integration of cues from all of the human senses. This Perspective explores the contributions of distinct senses to our perception of food and the growing realization that the same rules of multisensory integration that have been thoroughly explored in interactions between audition, vision, and touch may also explain the combination of the (admittedly harder to study) flavor senses. Academic advances are now spilling out into the real world, with chefs and the food industry increasingly taking the latest scientific findings on board in their food design. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Neonatal Restriction of Tactile Inputs Leads to Long-Lasting Impairments of Cross-Modal Processing

    PubMed Central

    Röder, Brigitte; Hanganu-Opatz, Ileana L.

    2015-01-01

    Optimal behavior relies on the combination of inputs from multiple senses through complex interactions within neocortical networks. The ontogeny of this multisensory interplay is still unknown. Here, we identify critical factors that control the development of visual-tactile processing by combining in vivo electrophysiology with anatomical/functional assessment of cortico-cortical communication and behavioral investigation of pigmented rats. We demonstrate that the transient reduction of unimodal (tactile) inputs during a short period of neonatal development prior to the first cross-modal experience affects feed-forward subcortico-cortical interactions by attenuating the cross-modal enhancement of evoked responses in the adult primary somatosensory cortex. Moreover, the neonatal manipulation alters cortico-cortical interactions by decreasing the cross-modal synchrony and directionality in line with the sparsification of direct projections between primary somatosensory and visual cortices. At the behavioral level, these functional and structural deficits resulted in lower cross-modal matching abilities. Thus, neonatal unimodal experience during defined developmental stages is necessary for setting up the neuronal networks of multisensory processing. PMID:26600123

  8. Multisensor-based real-time quality monitoring by means of feature extraction, selection and modeling for Al alloy in arc welding

    NASA Astrophysics Data System (ADS)

    Zhang, Zhifen; Chen, Huabin; Xu, Yanling; Zhong, Jiyong; Lv, Na; Chen, Shanben

    2015-08-01

    Multisensory data-fusion-based online welding quality monitoring has gained increasing attention in the intelligent welding process. This paper focuses on the automatic detection of typical welding defects for Al alloy in gas tungsten arc welding (GTAW) by analyzing the arc spectrum, sound, and voltage signals. Based on algorithms developed in the time and frequency domains, 41 feature parameters were extracted from these signals to characterize the welding process and seam quality. The proposed feature selection approach, a hybrid Fisher-based filter and wrapper, was then used to evaluate the sensitivity of each feature and reduce the feature dimensionality. Finally, an optimal subset of 19 features was selected, achieving the highest accuracy, 94.72%, with the established classification model. This study provides a guideline for feature extraction, selection, and dynamic modeling based on heterogeneous multisensory data to achieve a reliable online defect-detection system in arc welding.
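
    The abstract names a hybrid Fisher-based filter plus wrapper but gives no formulas. As a point of reference, here is a minimal sketch of the filter stage (the classic Fisher score); the wrapper stage and the paper's exact scheme are not reproduced, and the demo data are synthetic.

    ```python
    import numpy as np

    def fisher_scores(X, y):
        """Fisher score per feature: variance of the class means around
        the overall mean, divided by the pooled within-class variance.
        Higher scores mark features that better separate the classes."""
        overall_mean = X.mean(axis=0)
        num = np.zeros(X.shape[1])
        den = np.zeros(X.shape[1])
        for c in np.unique(y):
            Xc = X[y == c]
            num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
            den += len(Xc) * Xc.var(axis=0)
        return num / den

    def top_k_features(X, y, k):
        # Filter stage: keep the k highest-scoring features; a wrapper
        # (classifier-in-the-loop search) could then refine this subset.
        return np.argsort(fisher_scores(X, y))[::-1][:k]

    X = np.random.default_rng(1).normal(size=(200, 41))  # 41 features
    y = np.repeat([0, 1], 100)
    X[y == 1, :3] += 2.0  # make the first three features informative
    print(top_k_features(X, y, k=5))  # features 0-2 rank at the top
    ```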

  9. Slow changing postural cues cancel visual field dependence on self-tilt detection.

    PubMed

    Scotto Di Cesare, C; Macaluso, T; Mestre, D R; Bringoux, L

    2015-01-01

    Interindividual differences influence the multisensory integration process involved in spatial perception. Here, we assessed the effect of visual field dependence on self-tilt detection relative to upright, as a function of static vs. slowly changing visual or postural cues. To that aim, we applied slow rotations (0.05°/s) of the body and/or the visual scene in pitch. Participants had to indicate whether they felt tilted forward at successive angles. Results show that thresholds for self-tilt detection differed substantially between visual-field-dependent and visual-field-independent subjects when only the visual scene was rotated. This difference was no longer present when the body was actually rotated, whatever the visual scene condition (i.e., absent, static, or rotated relative to the observer). These results suggest that the cancellation of visual field dependence by dynamic postural cues may rely on a multisensory reweighting process in which slowly changing vestibular/somatosensory inputs prevail over visual inputs. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Multi-Sensory Aerosol Data and the NRL NAAPS model for Regulatory Exceptional Event Analysis

    NASA Astrophysics Data System (ADS)

    Husar, R. B.; Hoijarvi, K.; Westphal, D. L.; Haynes, J.; Omar, A. H.; Frank, N. H.

    2013-12-01

    Beyond scientific exploration and analysis, multi-sensory observations along with models are finding increasing application in operational air quality management. EPA's Exceptional Event (EE) Rule allows the exclusion of data strongly influenced by impacts from "exceptional events," such as smoke from wildfires or dust from abnormally high winds. The EE Rule encourages the use of satellite observations and other non-standard data, along with models, as evidence for the formal documentation of EE samples for exclusion. Thus, the implementation of the EE Rule is uniquely suited to integrated multi-sensory observations, applied both directly and indirectly through their assimilation into an aerosol simulation model. Here we report the results of a project, NASA and NAAPS Products for Air Quality Decision Making. The project uses observations from multiple satellite sensors, surface-based aerosol measurements, and the NRL Aerosol Analysis and Prediction System (NAAPS) model, which assimilates key satellite observations. The satellite sensor data for detecting and documenting smoke and dust events include MODIS AOD and images; the OMI Aerosol Index and tropospheric NO2; and AIRS CO. The surface observations include the EPA regulatory PM2.5 network; the IMPROVE/STN aerosol chemical network; the AIRNOW PM2.5 mass network; and surface meteorological data. Within this application, a crucial role is assigned to the NAAPS model for estimating the surface concentration of windblown dust and biomass smoke. The operational model assimilates quality-assured daily MODIS data through 2DVAR to adjust the model concentrations, and uses a CALIOP-based climatology to adjust the vertical profiles, at 6-hour intervals. The assimilation of data from multiple satellites contributes significantly to the usefulness of NAAPS for EE analysis. The NAAPS smoke and dust simulations were evaluated using the IMPROVE/STN chemical data. The multi-sensory observations, along with the model simulations, are integrated into a web-based Exceptional Event Decision System (EE DSS) application program designed to support air quality analysts at the Federal and Regional EPA offices and in the EE-affected States. The EE DSS screening tool automatically identifies the EPA PM2.5 mass samples that are candidates for EE flagging, based mainly on the NAAPS-simulated surface concentrations of dust and smoke. The AQ analysts at the States and the EPA can also use the EE DSS to gather further evidence from the examination of spatio-temporal patterns, the Absorbing Aerosol Index, CO and NO2 concentrations, backward and forward air-mass trajectories, and other signatures. Since early 2013, the DSS has been used for the identification and analysis of dozens of events. Hence, the integration of multi-sensory observations and modeling with data assimilation is maturing to the point of supporting real-world operational AQ management applications. The remaining challenges can be resolved by seeking 'closure' of the system components, i.e., systematic adjustments that reconcile the satellite and surface observations and the emissions, and their integration through a suitable AQ model.
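
    To make the screening step concrete: the EE DSS flags PM2.5 samples as exceptional-event candidates mainly from the NAAPS-simulated surface dust and smoke concentrations. The rule below is a guess at the shape such a screen could take, not the DSS's actual logic; the column names and thresholds are invented.

    ```python
    import pandas as pd

    def flag_candidates(df, min_ugm3=5.0, min_fraction=0.3):
        """Flag a sample when the simulated smoke + dust contribution is
        both large in absolute terms and a large share of the measured
        PM2.5 mass (both cutoffs are hypothetical)."""
        model = df["naaps_smoke"] + df["naaps_dust"]
        mask = (model >= min_ugm3) & (model / df["pm25_obs"] >= min_fraction)
        return df[mask]

    obs = pd.DataFrame({
        "site": ["A", "B", "C"],
        "pm25_obs": [42.0, 12.0, 35.0],   # measured PM2.5, ug/m3
        "naaps_smoke": [20.0, 1.0, 3.0],  # simulated smoke, ug/m3
        "naaps_dust": [2.0, 0.5, 1.0],    # simulated dust, ug/m3
    })
    print(flag_candidates(obs))  # only site A passes both cutoffs
    ```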

  11. Multisensory information boosts numerical matching abilities in young children.

    PubMed

    Jordan, Kerry E; Baker, Joseph

    2011-03-01

    This study presents the first evidence that preschool children perform more accurately in a numerical matching task when given multisensory rather than unisensory information about number. Three- to 5-year-old children learned to play a numerical matching game on a touchscreen computer, which asked them to match a sample numerosity with a numerically equivalent choice numerosity. Samples consisted of a series of visual squares on some trials, a series of auditory tones on other trials, and synchronized squares and tones on still other trials. Children performed at chance on this matching task when provided with either type of unisensory sample, but improved significantly when provided with multisensory samples. There was no speed–accuracy tradeoff between unisensory and multisensory trial types. Thus, these findings suggest that intersensory redundancy may improve young children’s abilities to match numerosities.

  12. The effect of a multisensory exercise program on engagement, behavior, and selected physiological indexes in persons with dementia.

    PubMed

    Heyn, Patricia

    2003-01-01

    A multisensory exercise approach that evokes the stimulation and use of various senses, such as combining physical and cognitive stimuli, can assist in the management of persons with Alzheimer's disease (AD). The objective of this study was to evaluate the outcomes of a multisensory exercise program on cognitive function (engagement), behavior (mood), and physiological indices (blood pressure, resting heart rate, and weight) in 13 nursing home residents diagnosed with moderate to severe AD. A one-group pretest/post-test, quasi-experimental design was used. The program combined a variety of sensory stimulations, integrating storytelling and imaging strategies. Results showed an improvement in resting heart rate, overall mood, and in engagement of physical activity. The findings suggest that a multisensory exercise approach can be beneficial for individuals with AD.

  14. Multisensory Stimulation to Improve Low- and Higher-Level Sensory Deficits after Stroke: A Systematic Review.

    PubMed

    Tinga, Angelica Maria; Visser-Meily, Johanna Maria Augusta; van der Smagt, Maarten Jeroen; Van der Stigchel, Stefan; van Ee, Raymond; Nijboer, Tanja Cornelia Wilhelmina

    2016-03-01

    The aim of this systematic review was to integrate and assess evidence for the effectiveness of multisensory stimulation (i.e., stimulating at least two of the following sensory systems: visual, auditory, and somatosensory) as a possible rehabilitation method after stroke. Evidence was considered with a focus on low-level, perceptual (visual, auditory, and somatosensory) deficits, as well as higher-level, cognitive, sensory deficits. We searched the electronic databases Scopus and PubMed for articles published before May 2015. Studies were included that evaluated the effects of multisensory stimulation on patients with low- or higher-level sensory deficits caused by stroke. Twenty-one studies were included in this review, and their quality was assessed on eight elements: randomization, inclusion of a control patient group, blinding of participants, blinding of researchers, follow-up, group size, reporting of effect sizes, and reporting of time post-stroke. Twenty of the twenty-one included studies demonstrate beneficial effects on low- and/or higher-level sensory deficits after stroke. Notwithstanding these beneficial effects, the quality of the studies is insufficient to support a valid conclusion that multisensory stimulation can be successfully applied as an effective intervention. A valuable and necessary next step would be to set up well-designed randomized controlled trials to examine the effectiveness of multisensory stimulation as an intervention for low- and/or higher-level sensory deficits after stroke. Finally, we consider the potential mechanisms of multisensory stimulation for rehabilitation, to guide this future research.

  15. A Kinect-Based Motion-Sensing Game Therapy to Foster the Learning of Children with Sensory Integration Dysfunction

    ERIC Educational Resources Information Center

    Chuang, Tsung-Yen; Kuo, Ming-Shiou; Fan, Ping-Lin; Hsu, Yen-Wei

    2017-01-01

    Sensory integration dysfunction (SID, also known as sensory processing disorder, SPD) is a condition that exists when a person's multisensory integration fails to process and respond adequately to the demands of the environment. Children with SID (CwSID) are also learners with disabilities with regard to responding adequately to the demands made…

  16. Multisensory integration processing during olfactory-visual stimulation-An fMRI graph theoretical network analysis.

    PubMed

    Ripp, Isabelle; Zur Nieden, Anna-Nora; Blankenagel, Sonja; Franzmeier, Nicolai; Lundström, Johan N; Freiherr, Jessica

    2018-05-07

    In this study, we aimed to understand how whole-brain neural networks compute the integration of sensory information, based on the olfactory and visual systems. Task-related functional magnetic resonance imaging (fMRI) data were obtained during unimodal and bimodal sensory stimulation. Based on the identification of multisensory integration processing (MIP)-specific, hub-like network nodes, analyzed with network-based statistics using region-of-interest-based connectivity matrices, we conclude that the following brain areas are important for processing the presented bimodal sensory information: the right precuneus, connected contralaterally to the supramarginal gyrus, for memory-related imagery and phonology retrieval; and the left middle occipital gyrus, connected ipsilaterally to the inferior frontal gyrus via the inferior fronto-occipital fasciculus, including functional aspects of working memory. Graph theory applied to quantify the resulting complex network topologies indicates significantly increased global efficiency and clustering coefficients in networks including aspects of MIP, reflecting simultaneously better integration and segregation. Graph-theoretical analysis of positive and negative network correlations, which allows inferences about excitatory and inhibitory network architectures, revealed a consistent, though not statistically significant, pattern: MIP-specific neural networks are dominated by inhibitory relationships between the brain regions involved in stimulus processing. © 2018 Wiley Periodicals, Inc.
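
    The two graph measures reported here, global efficiency and the clustering coefficient, can be computed directly from a thresholded connectivity matrix. The sketch below is a minimal illustration using networkx; the random matrix and the 0.3 threshold stand in for the actual ROI-based connectivity data and are assumptions.

```python
# Sketch: graph-theoretical metrics on a thresholded connectivity
# matrix, as in the analysis described above. The random matrix and
# the 0.3 threshold are placeholder assumptions.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_rois = 20
corr = rng.uniform(-1, 1, size=(n_rois, n_rois))
corr = (corr + corr.T) / 2               # symmetrize the matrix
np.fill_diagonal(corr, 0)                # no self-connections

adj = (np.abs(corr) > 0.3).astype(int)   # binarize at an arbitrary threshold
G = nx.from_numpy_array(adj)

print("global efficiency:", nx.global_efficiency(G))
print("mean clustering coefficient:", nx.average_clustering(G))
```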

  17. Cross-Modal Matching of Audio-Visual German and French Fluent Speech in Infancy

    PubMed Central

    Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun

    2014-01-01

    The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants’ audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore, providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech indicating facilitation of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life. PMID:24586651

  18. How the prior information shapes couplings in neural fields performing optimal multisensory integration

    NASA Astrophysics Data System (ADS)

    Wang, He; Zhang, Wen-Hao; Wong, K. Y. Michael; Wu, Si

    Extensive studies suggest that the brain integrates multisensory signals in a Bayesian optimal way. However, it remains largely unknown how sensory reliability and prior information shape the neural architecture. In this work, we propose a biologically plausible neural field model that can perform optimal multisensory integration and encode the whole profile of the posterior. Our model is composed of two modules, one for each modality. Crosstalk between the two modules can be carried out through feedforward cross-links and reciprocal connections. We found that the reciprocal couplings are crucial to optimal multisensory integration, in that the reciprocal coupling pattern is shaped by the correlation in the joint prior distribution of the sensory stimuli. A perturbative approach is developed to illustrate quantitatively the relation between the prior information and features of the coupling patterns. Our results show that a decentralized architecture based on reciprocal connections is able to accommodate complex correlation structures across modalities and to utilize this prior information in optimal multisensory integration. This work is supported by the Research Grants Council of Hong Kong (N_HKUST606/12 and 605813), the National Basic Research Program of China (2014CB846101), and the Natural Science Foundation of China (31261160495).
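
    The benchmark such a model is measured against is standard Bayes-optimal cue combination: for independent Gaussian likelihoods, the integrated estimate is a reliability-weighted average of the single-cue estimates, with reduced variance. A minimal worked example follows (this is the normative computation only, not the neural field model itself, which encodes the full posterior profile):

```python
# Reliability-weighted cue combination: the Bayesian optimum that
# optimal-integration models are designed to reproduce. Gaussian
# likelihoods assumed; the numbers are illustrative.

def integrate(x1, var1, x2, var2):
    """Posterior mean and variance for two independent Gaussian cues."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)   # reliability weight of cue 1
    w2 = 1 - w1
    mean = w1 * x1 + w2 * x2
    var = 1 / (1 / var1 + 1 / var2)           # integrated variance is reduced
    return mean, var

# visual cue at 10 deg (reliable), vestibular cue at 20 deg (noisy)
mean, var = integrate(10.0, 1.0, 20.0, 4.0)
print(mean, var)  # 12.0, 0.8 -- pulled toward the more reliable cue
```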

  19. Assessing the effect of physical differences in the articulation of consonants and vowels on audiovisual temporal perception

    PubMed Central

    Vatakis, Argiro; Maragos, Petros; Rodomagoulakis, Isidoros; Spence, Charles

    2012-01-01

    We investigated how the physical differences associated with the articulation of speech affect the temporal aspects of audiovisual speech perception. Video clips of consonants and vowels uttered by three different speakers were presented. The video clips were analyzed using an auditory-visual signal saliency model in order to compare signal saliency and behavioral data. Participants made temporal order judgments (TOJs) regarding which speech-stream (auditory or visual) had been presented first. The sensitivity of participants' TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. We expected that in the case of the place of articulation and roundedness, where the visual-speech signal is more salient, temporal perception of speech would be modulated by the visual-speech signal. No such effect was expected for the manner of articulation or height. The results demonstrate that for place and manner of articulation, participants' temporal percept was affected (although not always significantly) by highly-salient speech-signals with the visual-signals requiring smaller visual-leads at the PSS. This was not the case when height was evaluated. These findings suggest that in the case of audiovisual speech perception, a highly salient visual-speech signal may lead to higher probabilities regarding the identity of the auditory-signal that modulate the temporal window of multisensory integration of the speech-stimulus. PMID:23060756
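
    The PSS and TOJ sensitivity referred to here are conventionally estimated by fitting a cumulative Gaussian to the proportion of "vision-first" responses as a function of stimulus onset asynchrony: the fitted mean is the PSS, and the slope parameter indexes sensitivity. A minimal sketch with fabricated response proportions:

```python
# Sketch: estimating the point of subjective simultaneity (PSS) from
# temporal order judgments by fitting a cumulative Gaussian. The
# response proportions below are fabricated for illustration.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soa = np.array([-240, -120, -60, 0, 60, 120, 240])  # ms, negative = audio first
p_visual_first = np.array([0.05, 0.15, 0.30, 0.55, 0.75, 0.90, 0.98])

def cum_gauss(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa, p_visual_first, p0=(0, 100))
print(f"PSS = {pss:.1f} ms, sigma (sensitivity) = {sigma:.1f} ms")
```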

  20. What Hands May Tell Us about Reading and Writing

    ERIC Educational Resources Information Center

    Mangen, Anne

    2016-01-01

    Reading and writing are increasingly performed with digital, screen-based technologies rather than with analogue technologies such as paper and pen(cil). The current digitization is an occasion to "unpack," theoretically and conceptually, what is entailed in reading and writing as embodied, multisensory processes involving audiovisual…

  1. The Multisensory Attentional Consequences of Tool Use: A Functional Magnetic Resonance Imaging Study

    PubMed Central

    Holmes, Nicholas P.; Spence, Charles; Hansen, Peter C.; Mackay, Clare E.; Calvert, Gemma A.

    2008-01-01

    Background Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use. PMID:18958150

  2. Cocaine- and amphetamine-regulated transcript peptide and calcium binding proteins immunoreactivity in the deep layers of the superior colliculus of the guinea pig: Implications for multisensory and visuomotor processing.

    PubMed

    Najdzion, Janusz

    2018-03-01

    The superior colliculus (SC) of mammals is a midbrain center that can be subdivided into superficial (SCs) and deep (SCd) layers. In contrast to the visual SCs, the SCd are involved in multisensory and motor processing. This study investigated the pattern of distribution and colocalization of cocaine- and amphetamine-regulated transcript peptide (CART) and three calcium-binding proteins (CaBPs), i.e., calbindin (CB), calretinin (CR), and parvalbumin (PV), in the SCd of the guinea pig. CART labeling was seen almost exclusively in the neuropil and fibers, which differed in morphology and location. CART-positive neurons were very rare and restricted to a narrow area of the SCd. The most intense CART immunoreactivity was observed in the most dorsal sublayer of the SCd, which is anatomically and functionally connected with the SCs. CART immunoreactivity in the remaining SCd was less intense, but still relatively high. This characteristic pattern of immunoreactivity indicates that CART, as a putative neurotransmitter or neuromodulator, may play an important role in the processing of visual information, while its involvement in auditory and visuomotor processing is less significant, but still possible. CaBP-positive neurons were morphologically diverse and widely distributed throughout the SCd. Of the CaBPs studied, CR showed a markedly different distribution compared to CB and PV. Overall, the distribution patterns of CB and PV were similar across the entire SCd; consequently, the complementarity of these patterns in the guinea pig was very weak. Double immunostaining revealed that CART did not colocalize with any of the CaBPs, which suggests that these neurochemical substances might not coexist in the multisensory and visuomotor parts of the SC. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Characterizing the roles of alpha and theta oscillations in multisensory attention.

    PubMed

    Keller, Arielle S; Payne, Lisa; Sekuler, Robert

    2017-05-01

    Cortical alpha oscillations (8-13 Hz) appear to play a role in suppressing distractions when just one sensory modality is being attended, but do they also contribute when attention is distributed over multiple sensory modalities? For an answer, we examined cortical oscillations in human subjects who were dividing attention between auditory and visual sequences. In Experiment 1, subjects performed an oddball task with auditory, visual, or simultaneous audiovisual sequences in separate blocks, while the electroencephalogram was recorded using high-density scalp electrodes. Alpha oscillations were present continuously over posterior regions while subjects were attending to auditory sequences. This supports the idea that the brain suppresses processing of visual input in order to advantage auditory processing. During a divided-attention audiovisual condition, an oddball (a rare, unusual stimulus) occurred in either the auditory or the visual domain, requiring that attention be divided between the two modalities. Fronto-central theta band (4-7 Hz) activity was strongest in this audiovisual condition, when subjects monitored auditory and visual sequences simultaneously. Theta oscillations have been associated with both attention and short-term memory. Experiment 2 sought to distinguish these possible roles of fronto-central theta activity during multisensory divided attention. Using a modified version of the oddball task from Experiment 1, Experiment 2 showed that differences in theta power among conditions were independent of short-term memory load. Ruling out theta's association with short-term memory, we conclude that fronto-central theta activity is likely a marker of multisensory divided attention. Copyright © 2017 Elsevier Ltd. All rights reserved.
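
    Band power of the kind analyzed here can be estimated from a single EEG channel with Welch's method. The sketch below is illustrative only; a synthetic signal and an assumed sampling rate stand in for the high-density recordings.

```python
# Sketch: alpha (8-13 Hz) and theta (4-7 Hz) band power from one EEG
# channel via Welch's PSD. A synthetic signal stands in for real data.
import numpy as np
from scipy.signal import welch

fs = 250.0                                 # sampling rate (Hz), an assumption
t = np.arange(0, 10, 1 / fs)
eeg = (np.sin(2 * np.pi * 10 * t)          # alpha component
       + 0.5 * np.sin(2 * np.pi * 6 * t)   # weaker theta component
       + np.random.randn(t.size))          # broadband noise

f, psd = welch(eeg, fs=fs, nperseg=512)

def band_power(f, psd, lo, hi):
    mask = (f >= lo) & (f <= hi)
    return np.trapz(psd[mask], f[mask])    # integrate PSD over the band

print("theta (4-7 Hz):", band_power(f, psd, 4, 7))
print("alpha (8-13 Hz):", band_power(f, psd, 8, 13))
```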

  4. Effect of mechanical tactile noise on amplitude of visual evoked potentials: multisensory stochastic resonance.

    PubMed

    Méndez-Balbuena, Ignacio; Huidobro, Nayeli; Silva, Mayte; Flores, Amira; Trenado, Carlos; Quintanar, Luis; Arias-Carrión, Oscar; Kristeva, Rumyana; Manjarrez, Elias

    2015-10-01

    The present investigation documents the electrophysiological occurrence of multisensory stochastic resonance in the human visual pathway elicited by tactile noise. We define multisensory stochastic resonance of brain evoked potentials as the phenomenon in which an intermediate level of input noise of one sensory modality enhances the brain evoked response of another sensory modality. Here we examined this phenomenon in visual evoked potentials (VEPs) modulated by the addition of tactile noise. Specifically, we examined whether a particular level of mechanical Gaussian noise applied to the index finger can improve the amplitude of the VEP. We compared the amplitude of the positive P100 VEP component between zero noise (ZN), optimal noise (ON), and high mechanical noise (HN). The data disclosed an inverted U-like graph for all the subjects, thus demonstrating the occurrence of a multisensory stochastic resonance in the P100 VEP. Copyright © 2015 the American Physiological Society.
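
    The inverted-U relationship that defines stochastic resonance is easy to reproduce in a toy threshold detector: a weak, subthreshold signal is detected best at an intermediate noise level. The sketch below is a generic demonstration with illustrative parameters, not a model of the VEP experiment itself.

```python
# Toy demonstration of stochastic resonance: detection of a weak
# subthreshold signal peaks at intermediate noise, tracing the
# inverted-U described above. Purely illustrative parameters.
import numpy as np

rng = np.random.default_rng(1)
threshold = 1.0
signal = 0.8          # subthreshold on its own
n_trials = 10000

for noise_sd in [0.0, 0.1, 0.3, 0.6, 1.5, 3.0]:
    noise = rng.normal(0, noise_sd, n_trials)
    # hit: signal+noise crosses threshold; false alarm: noise alone crosses
    hits = np.mean(signal + noise > threshold)
    fas = np.mean(noise > threshold)
    print(f"noise sd {noise_sd:.1f}: hit - FA = {hits - fas:.3f}")
```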

  5. Modeling development of natural multi-sensory integration using neural self-organisation and probabilistic population codes

    NASA Astrophysics Data System (ADS)

    Bauer, Johannes; Dávila-Chacón, Jorge; Wermter, Stefan

    2015-10-01

    Humans and other animals have been shown to perform near-optimally in multi-sensory integration tasks. Probabilistic population codes (PPCs) have been proposed as a mechanism by which optimal integration can be accomplished. Previous approaches have focussed on how neural networks might produce PPCs from sensory input or perform calculations using them, like combining multiple PPCs. Less attention has been given to the question of how the necessary organisation of neurons can arise and how the required knowledge about the input statistics can be learned. In this paper, we propose a model of learning multi-sensory integration based on an unsupervised learning algorithm in which an artificial neural network learns the noise characteristics of each of its sources of input. Our algorithm borrows from the self-organising map the ability to learn latent-variable models of the input and extends it to learning to produce a PPC approximating a probability density function over the latent variable behind its (noisy) input. The neurons in our network are only required to perform simple calculations and we make few assumptions about input noise properties and tuning functions. We report on a neurorobotic experiment in which we apply our algorithm to multi-sensory integration in a humanoid robot to demonstrate its effectiveness and compare it to human multi-sensory integration on the behavioural level. We also show in simulations that our algorithm performs near-optimally under certain plausible conditions, and that it reproduces important aspects of natural multi-sensory integration on the neural level.
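
    The self-organising-map mechanism the model builds on can be sketched in a few lines: units are pulled toward each observation, with an update that falls off with map distance from the best-matching unit, so that preferred values come to tile the input range. The sketch below shows only this SOM building block, not the probabilistic population code extension; all parameters are illustrative.

```python
# Sketch of the 1-D self-organising-map building block: a map of units
# learning a latent variable from noisy scalar observations. The PPC
# extension described above is not reproduced here.
import numpy as np

rng = np.random.default_rng(2)
n_units = 20
prefs = rng.uniform(0, 1, n_units)        # random initial preferred values

def neighborhood(winner, width=2.0):
    """Gaussian falloff with distance (in map index) from the winner."""
    d = np.arange(n_units) - winner
    return np.exp(-d**2 / (2 * width**2))

lr = 0.1
for step in range(5000):
    x = rng.uniform(0, 1) + rng.normal(0, 0.05)       # noisy observation
    winner = np.argmin(np.abs(prefs - x))             # best-matching unit
    prefs += lr * neighborhood(winner) * (x - prefs)  # pull neighborhood toward x

# after training, preferences become approximately ordered along the
# map and tile the input range
print(np.round(prefs, 2))
```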

  6. Why Johnny Can't Learn to Read, or Sex Differences in Education.

    ERIC Educational Resources Information Center

    Caukins, Sivan E.

    Beginning with the observation that sex differences affecting the learning process have largely been ignored in our schools, this dissertation reviews the literature on differences in the learning characteristics of boys and girls and proposes a proprioceptor-stimulation, or multisensory, approach to teaching. The author maintains that kinesthetic…

  7. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

    To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute to this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether or not they were simultaneous. To assess the role of action-outcome prediction in audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found that the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues, rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity depend not only on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased, relative to a fixed action-outcome delay. This suggests that participants learn action-based predictions of audiovisual outcomes, and adapt their temporal perception of outcome events based on such predictions. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Audio-visual synchrony and spatial attention enhance processing of dynamic visual stimulation independently and in parallel: A frequency-tagging study.

    PubMed

    Covic, Amra; Keitel, Christian; Porcu, Emanuele; Schröger, Erich; Müller, Matthias M

    2017-11-01

    The neural processing of a visual stimulus can be facilitated by attending to its position or by a co-occurring auditory tone. Using frequency-tagging, we investigated whether facilitation by spatial attention and by audio-visual synchrony rely on similar neural processes. Participants attended to one of two flickering Gabor patches (14.17 and 17 Hz) located in opposite lower visual fields. The Gabor patches further "pulsed" (i.e., showed smooth spatial frequency variations) at distinct rates (3.14 and 3.63 Hz). Frequency-modulating an auditory stimulus at the pulse rate of one of the visual stimuli established audio-visual synchrony. Flicker and pulsed stimulation elicited stimulus-locked rhythmic electrophysiological brain responses that allowed us to track the neural processing of the simultaneously presented Gabor patches. These steady-state responses (SSRs) were quantified in the spectral domain to examine visual stimulus processing under conditions of synchronous vs. asynchronous tone presentation and when the respective stimulus positions were attended vs. unattended. Strikingly, distinct patterns of effects on pulse- and flicker-driven SSRs indicated that spatial attention and audio-visual synchrony facilitated early visual processing in parallel and via different cortical processes. Attention effects resembled the classical top-down gain effect, facilitating both flicker- and pulse-driven SSRs. Audio-visual synchrony, in turn, amplified only synchrony-producing stimulus aspects (i.e., pulse-driven SSRs), possibly highlighting the role of temporally co-occurring sights and sounds in bottom-up multisensory integration. Copyright © 2017 Elsevier Inc. All rights reserved.
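
    Quantifying SSRs in the spectral domain reduces to reading out amplitude at the known tagging frequencies. A minimal sketch follows, with a synthetic signal standing in for the EEG; the epoch length is chosen so that the tagged frequencies fall on exact FFT bins, as real frequency-tagging analyses typically arrange.

```python
# Sketch: extracting steady-state response (SSR) amplitudes at the
# tagged flicker and pulse frequencies from one channel. Synthetic
# data; real analyses average over trials and correct for noise.
import numpy as np

fs = 500.0
t = np.arange(0, 100, 1 / fs)                    # 100 s -> 0.01 Hz resolution
sig = (2.0 * np.sin(2 * np.pi * 14.17 * t)       # flicker-driven SSR
       + 1.0 * np.sin(2 * np.pi * 3.14 * t)      # pulse-driven SSR
       + np.random.randn(t.size))                # broadband noise

spec = np.abs(np.fft.rfft(sig)) / t.size * 2     # single-sided amplitude
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for f_tag in (14.17, 17.0, 3.14, 3.63):
    idx = np.argmin(np.abs(freqs - f_tag))       # nearest FFT bin
    print(f"{f_tag:6.2f} Hz: amplitude {spec[idx]:.2f}")
```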

  9. Cultural immersion alters emotion perception: Neurophysiological evidence from Chinese immigrants to Canada.

    PubMed

    Liu, Pan; Rigoulot, Simon; Pell, Marc D

    2017-12-01

    To explore how cultural immersion modulates emotion processing, this study examined how Chinese immigrants to Canada process multisensory emotional expressions; their data were compared to existing data from two groups, Chinese and North Americans. Stroop and Oddball paradigms were employed to examine different stages of emotion processing. The Stroop task presented face-voice pairs expressing congruent/incongruent emotions, and participants actively judged the emotion of one modality while ignoring the other. A significant effect of cultural immersion was observed in the immigrants' behavioral performance, which showed greater interference from to-be-ignored faces, comparable with what was observed in North Americans. However, this effect was absent in their N400 data, which retained the same pattern as the Chinese. In the Oddball task, where immigrants passively viewed facial expressions with/without simultaneous vocal emotions, they exhibited a larger visual MMN for faces accompanied by voices, again mirroring patterns observed in Chinese. Correlation analyses indicated that the immigrants' duration of residence in Canada was associated with neural patterns (N400 and visual mismatch negativity) more closely resembling those of North Americans. Our data suggest that in multisensory emotion processing, adapting to a new culture first leads to behavioral accommodation, followed by alterations in brain activities, providing new evidence of humans' neurocognitive plasticity in communication.

  10. Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception.

    PubMed

    Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki

    2016-10-13

    Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs' response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs' early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception.

  11. Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices.

    PubMed

    Krebber, Martin; Harwood, James; Spitzer, Bernhard; Keil, Julian; Senkowski, Daniel

    2015-08-15

    When touching and viewing a moving surface, our visual and somatosensory systems receive congruent spatiotemporal input. Behavioral studies have shown that motion congruence facilitates interplay between visual and tactile stimuli, but the neural mechanisms underlying this interplay are not well understood. Neural oscillations play a role in motion processing and multisensory integration, and they may also be crucial for visuotactile motion processing. In this electroencephalography study, we applied linear beamforming to examine the impact of visuotactile motion congruence on beta and gamma band activity (GBA) in visual and somatosensory cortices. Visual and tactile inputs comprised gratings that moved either in the same or in different directions. Participants performed a target detection task that was unrelated to motion congruence. While there were no effects in the beta band (13-21 Hz), the power of GBA (50-80 Hz) in visual and somatosensory cortices was larger for congruent compared with incongruent motion stimuli. This suggests enhanced bottom-up multisensory processing when visual and tactile gratings moved in the same direction. Supporting its behavioral relevance, GBA was correlated with shorter reaction times in the target detection task. We conclude that motion congruence plays an important role in the integrative processing of visuotactile stimuli in sensory cortices, as reflected by oscillatory responses in the gamma band. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Multisensory Spatial Attention Deficits Are Predictive of Phonological Decoding Skills in Developmental Dyslexia

    ERIC Educational Resources Information Center

    Facoetti, Andrea; Trussardi, Anna Noemi; Ruffino, Milena; Lorusso, Maria Luisa; Cattaneo, Carmen; Galli, Raffaella; Molteni, Massimo; Zorzi, Marco

    2010-01-01

    Although the dominant approach posits that developmental dyslexia arises from deficits in systems that are exclusively linguistic in nature (i.e., phonological deficit theory), dyslexics show a variety of lower level deficits in sensory and attentional processing. Although their link to the reading disorder remains contentious, recent empirical…

  13. Neural Correlates of Perceptual Narrowing in Cross-Species Face-Voice Matching

    ERIC Educational Resources Information Center

    Grossmann, Tobias; Missana, Manuela; Friederici, Angela D.; Ghazanfar, Asif A.

    2012-01-01

    Integrating the multisensory features of talking faces is critical to learning and extracting coherent meaning from social signals. While we know much about the development of these capacities at the behavioral level, we know very little about the underlying neural processes. One prominent behavioral milestone of these capacities is the perceptual…

  14. A Self-Synthesis Approach to Perceptual Learning for Multisensory Fusion in Robotics

    PubMed Central

    Axenie, Cristian; Richter, Christoph; Conradt, Jörg

    2016-01-01

    Biological and technical systems operate in a rich multimodal environment. Due to the diversity of incoming sensory streams a system perceives and the variety of motor capabilities a system exhibits there is no single representation and no singular unambiguous interpretation of such a complex scene. In this work we propose a novel sensory processing architecture, inspired by the distributed macro-architecture of the mammalian cortex. The underlying computation is performed by a network of computational maps, each representing a different sensory quantity. All the different sensory streams enter the system through multiple parallel channels. The system autonomously associates and combines them into a coherent representation, given incoming observations. These processes are adaptive and involve learning. The proposed framework introduces mechanisms for self-creation and learning of the functional relations between the computational maps, encoding sensorimotor streams, directly from the data. Its intrinsic scalability, parallelisation, and automatic adaptation to unforeseen sensory perturbations make our approach a promising candidate for robust multisensory fusion in robotic systems. We demonstrate this by applying our model to a 3D motion estimation on a quadrotor. PMID:27775621

  15. Perception of Upright: Multisensory Convergence and the Role of Temporo-Parietal Cortex

    PubMed Central

    Kheradmand, Amir; Winnick, Ariel

    2017-01-01

    We inherently maintain a stable perception of the world despite frequent changes in the head, eye, and body positions. Such “orientation constancy” is a prerequisite for coherent spatial perception and sensorimotor planning. As a multimodal sensory reference, perception of upright represents neural processes that subserve orientation constancy through integration of sensory information encoding the eye, head, and body positions. Although perception of upright is distinct from perception of body orientation, they share similar neural substrates within the cerebral cortical networks involved in perception of spatial orientation. These cortical networks, mainly within the temporo-parietal junction, are crucial for multisensory processing and integration that generate sensory reference frames for coherent perception of self-position and extrapersonal space transformations. In this review, we focus on these neural mechanisms and discuss (i) neurobehavioral aspects of orientation constancy, (ii) sensory models that address the neurophysiology underlying perception of upright, and (iii) the current evidence for the role of cerebral cortex in perception of upright and orientation constancy, including findings from the neurological disorders that affect cortical function. PMID:29118736

  16. Multisensory Strategies for Science Vocabulary

    ERIC Educational Resources Information Center

    Husty, Sandra; Jackson, Julie

    2008-01-01

    Seeing, touching, smelling, hearing, and learning! The authors observed that their English Language Learner (ELL) students achieved a deeper understanding of the properties of matter, as well as enhanced vocabulary development, when they were guided through inquiry-based, multisensory explorations that repeatedly exposed them to words and…

  17. Crossmodal binding rivalry: A "race" for integration between unequal sensory inputs.

    PubMed

    Kostaki, Maria; Vatakis, Argiro

    2016-10-01

    Exposure to multiple but unequal (in number) sensory inputs often leads to illusory percepts, which may be the product of a conflict between those inputs. To test this conflict, we utilized the classic sound-induced visual fission and fusion illusions under various temporal configurations and timing presentations. The conflict between unequal numbers of sensory inputs (i.e., crossmodal binding rivalry) depends on the binding of the first audiovisual pair and its temporal proximity to the upcoming unisensory stimulus. We therefore expected that tight coupling of the first audiovisual pair would lead to higher rivalry with the upcoming unisensory stimulus and, thus, weaker illusory percepts. Loose coupling, on the other hand, would lead to lower rivalry and stronger illusory percepts. Our data showed the emergence of two different participant groups: those with low discrimination performance and strong illusion reports (particularly for fusion), and those with the exact opposite pattern, extending previous findings on the effect of visual acuity on the strength of the illusion. Most importantly, our data revealed differential illusory strength across temporal configurations for the fission illusion, while for the fusion illusion these effects were noted only for the largest stimulus onset asynchronies tested. These findings suggest that the optimal integration theory for the double flash illusion should be expanded to take into account the multisensory temporal interactions of the stimuli presented (i.e., temporal sequence and configuration). Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. A Neural Signature of Divisive Normalization at the Level of Multisensory Integration in Primate Cortex.

    PubMed

    Ohshiro, Tomokazu; Angelaki, Dora E; DeAngelis, Gregory C

    2017-07-19

    Studies of multisensory integration by single neurons have traditionally emphasized empirical principles that describe nonlinear interactions between inputs from two sensory modalities. We previously proposed that many of these empirical principles could be explained by a divisive normalization mechanism operating in brain regions where multisensory integration occurs. This normalization model makes a critical diagnostic prediction: a non-preferred sensory input from one modality, which activates the neuron on its own, should suppress the response to a preferred input from another modality. We tested this prediction by recording from neurons in macaque area MSTd that integrate visual and vestibular cues regarding self-motion. We show that many MSTd neurons exhibit the diagnostic form of cross-modal suppression, whereas unisensory neurons in area MT do not. The normalization model also fits population responses better than a model based on subtractive inhibition. These findings provide strong support for a divisive normalization mechanism in multisensory integration. Copyright © 2017 Elsevier Inc. All rights reserved.
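
    The core operation of the normalization model can be written compactly: each unit's combined cross-modal drive is divided by the pooled drive of the whole population. The sketch below is a simplified illustration of how this produces cross-modal suppression, with made-up parameters rather than those of the published model.

```python
# Simplified sketch of divisive normalization in a multisensory
# population: a unit's drive is divided by the population-pooled drive.
# Adding a stimulus the unit barely responds to grows the pool faster
# than the unit's own drive, suppressing its response (cross-modal
# suppression). Parameters are illustrative assumptions.
import numpy as np

def response(own_drive, pool_drives, n=2.0, alpha=1.0):
    """One unit's normalized response."""
    return own_drive ** n / (alpha ** n + np.mean(pool_drives ** n))

units = 50
rng = np.random.default_rng(3)
vis_w = rng.uniform(0, 1, units)    # visual weights across the population
vest_w = rng.uniform(0, 1, units)   # vestibular weights

unit = np.argmax(vis_w - vest_w)    # a unit preferring vision over vestibular

# strong visual stimulus alone
drives_v = vis_w * 1.0
print("visual only:    ", response(drives_v[unit], drives_v))

# add a vestibular stimulus: the pool grows faster than this unit's own
# drive, so its response is suppressed
drives_vv = vis_w * 1.0 + vest_w * 1.0
print("visual + vestib.:", response(drives_vv[unit], drives_vv))
```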

  19. Suppressed visual looming stimuli are not integrated with auditory looming signals: Evidence from continuous flash suppression.

    PubMed

    Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond

    2015-01-01

    Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.

  1. Cortical Hierarchies Perform Bayesian Causal Inference in Multisensory Perception

    PubMed Central

    Rohe, Tim; Noppeney, Uta

    2015-01-01

    To form a veridical percept of the environment, the brain needs to integrate sensory signals from a common source but segregate those from independent sources. Thus, perception inherently relies on solving the "causal inference problem." Behaviorally, humans solve this problem optimally as predicted by Bayesian Causal Inference; yet the underlying neural mechanisms have been unexplored. Combining psychophysics, Bayesian modeling, functional magnetic resonance imaging (fMRI), and multivariate decoding in an audiovisual spatial localization task, we demonstrate that Bayesian Causal Inference is performed by a hierarchy of multisensory processes in the human brain. At the bottom of the hierarchy, in auditory and visual areas, location is represented on the basis that the two signals are generated by independent sources (= segregation). At the next stage, in posterior intraparietal sulcus, location is estimated under the assumption that the two signals are from a common source (= forced fusion). Only at the top of the hierarchy, in anterior intraparietal sulcus, is the uncertainty about the causal structure of the world taken into account, with sensory signals combined as predicted by Bayesian Causal Inference. Characterizing the computational operations of signal interactions reveals the hierarchical nature of multisensory perception in human neocortex. It unravels how the brain accomplishes Bayesian Causal Inference, a statistical computation fundamental to perception and cognition. Our results demonstrate how the brain combines information in the face of uncertainty about the underlying causal structure of the world. PMID:25710328
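
    The computation attributed to the top of the hierarchy can be made concrete: compute the posterior probability that the two signals share a common cause, then combine the fused and segregated estimates weighted by that posterior (model averaging). The sketch below follows the standard formulation of Bayesian Causal Inference by numerical integration; all parameter values are illustrative assumptions.

```python
# Sketch: Bayesian Causal Inference for an audiovisual location, by
# numerical integration over candidate source locations. Parameters
# (sensory noise, spatial prior, prior on a common cause) are
# illustrative, not fitted values.
import numpy as np
from scipy.stats import norm

def bci_estimate(x_v, x_a, sigma_v=2.0, sigma_a=8.0, sigma_p=15.0, p_common=0.5):
    s = np.linspace(-60, 60, 4001)
    ds = s[1] - s[0]
    prior = norm.pdf(s, 0, sigma_p)       # spatial prior over locations
    lv = norm.pdf(x_v, s, sigma_v)        # visual likelihood
    la = norm.pdf(x_a, s, sigma_a)        # auditory likelihood

    # likelihood of the data under one common source (C=1) ...
    like_c1 = np.sum(lv * la * prior) * ds
    # ... and under two independent sources (C=2)
    like_c2 = (np.sum(lv * prior) * ds) * (np.sum(la * prior) * ds)

    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # fused estimate (C=1) and segregated auditory estimate (C=2)
    s_fused = np.sum(s * lv * la * prior) / np.sum(lv * la * prior)
    s_seg = np.sum(s * la * prior) / np.sum(la * prior)

    # model averaging: weight the two estimates by the posterior over C
    return post_c1, post_c1 * s_fused + (1 - post_c1) * s_seg

print(bci_estimate(x_v=5.0, x_a=8.0))    # close cues: likely common cause
print(bci_estimate(x_v=5.0, x_a=40.0))   # distant cues: likely segregated
```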

  2. Effects of multisensory resources on the achievement and science attitudes of seventh-grade suburban students taught science concepts on and above grade level

    NASA Astrophysics Data System (ADS)

    Roberts, Patrice Helen

    This research was designed to determine the relationships among students' achievement scores on grade-level science content, scores on science content three years above grade level, attitudes toward the instructional approaches, and learning-style perceptual preferences, when instruction was multisensory versus traditional. The dependent variables were scores on the achievement posttests and scores on the attitude survey. The independent variables were the instructional strategy and students' perceptual preferences. The sample consisted of 74 educationally oriented seventh-grade students. The Learning Styles Inventory (LSI) (Dunn, Dunn, & Price, 1990) was administered to determine perceptual preferences. The control group was taught seventh-grade and tenth-grade science units using a traditional approach, and the experimental group was instructed on the same units using multisensory instructional resources. The Semantic Differential Scale (SDS) (Pizzo, 1981) was administered to reveal attitudinal differences. The traditional unit included oral reading from the textbook, completing outlines, labeling diagrams, and correcting the outlines and diagrams as a class. The multisensory unit included five instructional stations established in different sections of the classroom to allow students to learn by: (a) manipulating Flip Chutes, (b) using Electroboards, (c) assembling Task Cards, (d) playing a kinesthetic Floor Game, and (e) reading an individual Programmed Learning Sequence. Audio tapes and scripts were provided at each station. Students circulated in groups of four from station to station. The statistical analyses supported the use of a multisensory, rather than a traditional, approach for teaching science content that is above grade level: t-tests revealed a positive and significant impact on achievement scores (p < 0.0007). No significant effect was detected for grade-level achievement or for perceptual preference. Furthermore, students indicated significantly more positive attitudes when instructed with a multisensory approach on either grade-level or above-grade-level science content (p < 0.0001). The findings supported using a multisensory approach when teaching science concepts that are new to and difficult for students (Martini, 1986).

  3. Comparison for younger and older adults: Stimulus temporal asynchrony modulates audiovisual integration.

    PubMed

    Ren, Yanna; Ren, Yanling; Yang, Weiping; Tang, Xiaoyu; Wu, Fengxia; Wu, Qiong; Takahashi, Satoshi; Ejima, Yoshimichi; Wu, Jinglong

    2018-02-01

    Recent research has shown that the magnitude of responses to multisensory information is highly dependent on the stimulus structure. The temporal proximity of multiple signal inputs is a critical determinant of cross-modal integration. Here, we investigated the influence of temporal asynchrony on audiovisual integration in both younger and older adults using event-related potentials (ERPs). Our results showed that in the simultaneous audiovisual condition, early integration was similar for the younger and older groups, except for the earliest integration (80-110 ms), which occurred in the occipital region for older adults but was absent for younger adults. Additionally, late integration was delayed in older adults (280-300 ms) compared to younger adults (210-240 ms). In the audition-leading vision conditions, the earliest integration (80-110 ms) was absent in younger adults but did occur in older adults. Additionally, after increasing the temporal disparity from 50 ms to 100 ms, late integration was delayed in both younger (from 230-290 ms to 280-300 ms) and older (from 210-240 ms to 280-300 ms) adults. In the audition-lagging vision conditions, integration occurred only in the A100V condition for younger adults and in the A50V condition for older adults. The current results suggest that the audiovisual temporal integration pattern differs between the audition-leading and audition-lagging vision conditions, and they further reveal the varying effect of temporal asynchrony on audiovisual integration in younger and older adults. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Simultaneous Contact Sensing and Characterizing of Mechanical and Dynamic Heat Transfer Properties of Porous Polymeric Materials

    PubMed Central

    Yao, Bao-Guo; Peng, Yun-Liang; Zhang, De-Pin

    2017-01-01

    Porous polymeric materials, such as textile fabrics, are elastic and widely used in our daily life for garment and household products. The mechanical and dynamic heat transfer properties of porous polymeric materials, which describe the sensations arising during contact between these materials and parts of the human body such as the hand, primarily influence the comfort sensations and aesthetic qualities of clothing. A multi-sensory measurement system and a new method were proposed to simultaneously sense the contact and characterize the mechanical and dynamic heat transfer properties of porous polymeric materials such as textile fabrics in one instrument, with consideration of the interactions between different aspects of contact feel. The multi-sensory measurement system was developed to simulate the dynamic contact and psychological judgment processes during human hand contact with porous polymeric materials, and to measure the surface smoothness, compression resilience, bending and twisting, and dynamic heat transfer signals simultaneously. The contact sensing principle and the evaluation methods are presented. Twelve typical sample materials with different structural parameters were measured. The results of the experiments and the interpretation of the test results are described. An analysis of variance and a capability study were conducted to determine the significance of differences among the test materials and to assess gage repeatability and reproducibility. A correlation analysis was conducted by comparing the test results of this measurement system with those of the Kawabata Evaluation System (KES) obtained on separate instruments. This multi-sensory measurement system provides a new method for the simultaneous contact sensing and characterization of the mechanical and dynamic heat transfer properties of porous polymeric materials. PMID:29084152

  5. Embodying an outgroup: the role of racial bias and the effect of multisensory processing in somatosensory remapping.

    PubMed

    Fini, Chiara; Cardini, Flavia; Tajadura-Jiménez, Ana; Serino, Andrea; Tsakiris, Manos

    2013-01-01

    We come to understand other people's physical and mental states by re-mapping their bodily states onto our sensorimotor system. This process, also called somatosensory resonance, is an essential ability for social cognition and is stronger when observing ingroup than outgroup members. Here we investigated, first, whether implicit racial bias constrains somatosensory resonance and, second, whether increasing the perceived physical similarity between ingroup and outgroup results in an increase in somatosensory resonance for outgroup members. We used the Visual Remapping of Touch effect as an index of individuals' ability to resonate with others, and the Implicit Association Test to measure racial bias. In Experiment 1, participants were asked to detect near-threshold tactile stimuli delivered to their own face while viewing either an ingroup or an outgroup face receiving similar stimulation. Our results showed that individuals' tactile accuracy when viewing an outgroup face being touched was negatively correlated with their implicit racial bias. In Experiment 2, participants received interpersonal multisensory stimulation (IMS) while observing an outgroup member. IMS has been found to increase the perceived physical similarity between the observer's and the observed body. We tested whether such an increase in perceived physical similarity increased the remapping ability for outgroup members. We found that after sharing an IMS experience with an outgroup member, tactile accuracy when viewing touch on outgroup faces increased. Interestingly, participants with stronger implicit bias against the outgroup showed a larger positive change in remapping. We conclude that shared multisensory experiences might represent one key way to improve our ability to resonate with others by overcoming the boundaries between ingroup and outgroup categories.

  6. Understanding Freshness Perception from the Cognitive Mechanisms of Flavor: The Case of Beverages

    PubMed Central

    Roque, Jérémy; Auvray, Malika; Lafraire, Jérémie

    2018-01-01

    Freshness perception has received recent consideration in the field of consumer science, mainly because of its hedonic dimension, which is assumed to influence consumers' preference and behavior. However, most studies have considered freshness as a multisensory attribute of food and beverage products without investigating the cognitive mechanisms at hand. In the present review, we endorse a slightly different perspective on freshness. We focus on (i) the multisensory integration processes that underpin freshness perception, and (ii) the top-down factors that influence the explicit attribution of freshness to a product by consumers. To do so, we exploit the recent literature on the cognitive underpinnings of flavor perception as a heuristic to better characterize the mechanisms of freshness perception in the particular case of beverages. We argue that the lack of consideration of particular instances of flavor, such as freshness, has resulted in a lack of consensus about the content and structure of different types of flavor representations. We then enrich these theoretical analyses with a review of the cognitive mechanisms of flavor perception: from multisensory integration processes to the influence of top-down factors (e.g., attentional and semantic). We conclude that, like flavor, freshness perception is characterized by hybrid content, both perceptual and semantic, but that freshness has a higher degree of specificity than flavor. In particular, contrary to flavor, freshness is characterized by specific functions (e.g., alleviation of oropharyngeal symptoms) and likely differs from flavor with respect to the weighting of each sensory contributor, as well as to its subjective location. Finally, we provide a comprehensive model of the cognitive mechanisms that underlie freshness perception. This model paves the way for further empirical research on particular instances of flavor, and will enable advances in the field of food and beverage cognition. PMID:29375453

  8. Understanding Freshness Perception from the Cognitive Mechanisms of Flavor: The Case of Beverages.

    PubMed

    Roque, Jérémy; Auvray, Malika; Lafraire, Jérémie

    2017-01-01

    Freshness perception has received recent consideration in the field of consumer science mainly because of its hedonic dimension, which is assumed to influence consumers' preference and behavior. However, most studies have considered freshness as a multisensory attribute of food and beverage products without investigating the cognitive mechanisms at hand. In the present review, we endorse a slightly different perspective on freshness. We focus on (i) the multisensory integration processes that underpin freshness perception, and (ii) the top-down factors that influence the explicit attribution of freshness to a product by consumers. To do so, we exploit the recent literature on the cognitive underpinnings of flavor perception as a heuristic to better characterize the mechanisms of freshness perception in the particular case of beverages. We argue that the lack of consideration of particular instances of flavor, such as freshness, has resulted in a lack of consensus about the content and structure of different types of flavor representations. We then enrich these theoretical analyses, with a review of the cognitive mechanisms of flavor perception: from multisensory integration processes to the influence of top-down factors (e.g., attentional and semantic). We conclude that similarly to flavor, freshness perception is characterized by hybrid content, both perceptual and semantic, but that freshness has a higher-degree of specificity than flavor . In particular, contrary to flavor, freshness is characterized by specific functions (e.g., alleviation of oropharyngeal symptoms) and likely differs from flavor with respect to the weighting of each sensory contributor, as well as to its subjective location. Finally, we provide a comprehensive model of the cognitive mechanisms that underlie freshness perception. This model paves the way for further empirical research on particular instances of flavor, and will enable advances in the field of food and beverage cognition.

  9. Embodying an outgroup: the role of racial bias and the effect of multisensory processing in somatosensory remapping

    PubMed Central

    Fini, Chiara; Cardini, Flavia; Tajadura-Jiménez, Ana; Serino, Andrea; Tsakiris, Manos

    2013-01-01

    We come to understand other people's physical and mental states by re-mapping their bodily states onto our sensorimotor system. This process, also called somatosensory resonance, is an essential ability for social cognition and is stronger when observing ingroup than outgroup members. Here we investigated, first, whether implicit racial bias constrains somatosensory resonance and, second, whether increasing the perceived physical similarity between ingroup and outgroup results in an increase in somatosensory resonance for outgroup members. We used the Visual Remapping of Touch effect as an index of individuals' ability to resonate with others, and the Implicit Association Test to measure racial bias. In Experiment 1, participants were asked to detect near-threshold tactile stimuli delivered to their own face while viewing either an ingroup or an outgroup face receiving similar stimulation. Our results showed that individuals' tactile accuracy when viewing an outgroup face being touched was negatively correlated with their implicit racial bias. In Experiment 2, participants received interpersonal multisensory stimulation (IMS) while observing an outgroup member. IMS has been found to increase the perceived physical similarity between the observer's and the observed body. We tested whether such an increase in perceived physical similarity increased the remapping ability for outgroup members. We found that after sharing an IMS experience with an outgroup member, tactile accuracy when viewing touch on outgroup faces increased. Interestingly, participants with stronger implicit bias against the outgroup showed a larger positive change in remapping. We conclude that shared multisensory experiences might represent one key way to improve our ability to resonate with others by overcoming the boundaries between ingroup and outgroup categories. PMID:24302900

  10. Multisensory influence on eating behavior: Hedonic consumption.

    PubMed

    Hernández Ruiz de Eguilaz, María; Martínez de Morentin Aldabe, Blanca; Almiron-Roig, Eva; Pérez-Diez, Salomé; San Cristóbal Blanco, Rodrigo; Navas-Carretero, Santiago; Martínez, J Alfredo

    2018-02-01

    Research in obesity has traditionally focused on prevention strategies and treatments aimed at changing lifestyle habits. However, recent research suggests that eating behavior is a habit regulated not only by homeostatic mechanisms, but also by the hedonic pathway that controls appetite and satiety processes. Cognitive, emotional, social, economic, and cultural factors, as well as the organoleptic properties of food, are basic aspects to consider in order to understand eating behavior and its impact on health. This review presents a multisensory integrative view of food at both the homeostatic and non-homeostatic levels. This information will be of scientific interest for identifying the behavioral drivers of overeating and, thus, for proposing effective measures, at both the individual and population levels, for the prevention of obesity and associated metabolic diseases. Copyright © 2017 SEEN y SED. Publicado por Elsevier España, S.L.U. All rights reserved.

  11. Perceptual congruency of audio-visual speech affects ventriloquism with bilateral visual stimuli.

    PubMed

    Kanaya, Shoko; Yokosawa, Kazuhiko

    2011-02-01

    Many studies of multisensory processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. However, these results cannot necessarily be applied to explain our perceptual behavior in natural scenes, where various signals exist within one sensory modality. We investigated the effect of audio-visual syllable congruency on participants' auditory localization bias (the ventriloquism effect) using spoken utterances and two videos of a talking face. The salience of the facial movements was also manipulated. Results indicated that more salient visual utterances attracted participants' auditory localization. Congruent pairing of audio-visual utterances elicited greater localization bias than incongruent pairing, whereas previous studies have reported that ventriloquism depends little on the realism of the stimuli. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference on auditory localization. Multisensory performance appears more flexible and adaptive in this complex environment than in previous studies.

  12. Vestibular system: the many facets of a multimodal sense.

    PubMed

    Angelaki, Dora E; Cullen, Kathleen E

    2008-01-01

    Elegant sensory structures in the inner ear have evolved to measure head motion. These vestibular receptors consist of highly conserved semicircular canals and otolith organs. Unlike other senses, vestibular information in the central nervous system becomes immediately multisensory and multimodal. There is no overt, readily recognizable conscious sensation from these organs, yet vestibular signals contribute to a surprising range of brain functions, from the most automatic reflexes to spatial perception and motor coordination. Critical to these diverse, multimodal functions are multiple computationally intriguing levels of processing. For example, the need for multisensory integration necessitates vestibular representations in multiple reference frames. Proprioceptive-vestibular interactions, coupled with corollary discharge of a motor plan, allow the brain to distinguish actively generated from passive head movements. Finally, nonlinear interactions between otolith and canal signals allow the vestibular system to function as an inertial sensor and contribute critically to both navigation and spatial orientation.

  13. The effect of a concurrent working memory task and temporal offsets on the integration of auditory and visual speech information.

    PubMed

    Buchan, Julie N; Munhall, Kevin G

    2012-01-01

    Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect audiovisual integration in a McGurk speech task, and whether the cognitive load task would cause more interference at increasing offsets. The amount of integration was measured as the proportion of responses in incongruent trials that did not correspond to the audio (McGurk responses). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task would influence gaze behavior. Results show a very modest but statistically significant decrease in the number of McGurk responses when subjects also performed the cognitive load task, an effect that was relatively constant across the various temporal offsets. Participants' gaze behavior was also influenced by the addition of the cognitive load task: gaze was less centralized on the face, less time was spent looking at the mouth, and more time was spent looking at the eyes.
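
    Concretely, the integration measure described above reduces to a per-offset proportion of McGurk responses. The following minimal Python sketch, with invented trial-level data and variable names, illustrates the computation; it is not the authors' analysis code.

      import numpy as np

      # Hypothetical trial-level data for incongruent (McGurk) trials: the
      # audio-visual offset (ms) of each trial and whether the response was a
      # non-auditory ("McGurk") response.
      offsets = np.array([-240, -240, -240, 0, 0, 0, 240, 240, 240])
      mcgurk  = np.array([   1,    0,    1, 1, 1, 0,   0,   1,   0])

      def integration_by_offset(offsets, mcgurk):
          """Proportion of McGurk responses at each temporal offset."""
          return {int(off): float(mcgurk[offsets == off].mean())
                  for off in np.unique(offsets)}

      print(integration_by_offset(offsets, mcgurk))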

  14. Multisensory Associative Guided Instruction Components-Spelling

    ERIC Educational Resources Information Center

    Hamilton, Harley

    2016-01-01

    This article describes a multisensory presentation and response system for enhancing the spelling ability of dyslexic children. The unique aspect of MAGICSpell is its system of finger-letter associations and simplified keyboard configuration. Sixteen 10- and 11-year-old dyslexic students practiced the finger-letter associations via various typing…

  15. Microcontroller based fibre-optic visual presentation system for multisensory neuroimaging.

    PubMed

    Kurniawan, Veldri; Klemen, Jane; Chambers, Christopher D

    2011-10-30

    Presenting visual stimuli in physical 3D space during fMRI experiments carries significant technical challenges. Certain types of multisensory visuotactile experiments and visuomotor tasks require presentation of visual stimuli in peripersonal space, which cannot be accommodated by ordinary projection screens or binocular goggles. However, light points produced by a group of LEDs can be transmitted through fibre-optic cables and positioned anywhere inside the MRI scanner. Here we describe the design and implementation of a microcontroller-based programmable digital device for controlling fibre-optically transmitted LED lights from a PC. The main feature of this device is the ability to independently control the colour, brightness, and timing of each LED. Moreover, the device was designed in a modular and extensible way, which enables easy adaptation for various experimental paradigms. The device was tested and validated in three fMRI experiments involving basic visual perception, a simple colour discrimination task, and a blocked multisensory visuo-tactile task. The results revealed significant lateralized activation in occipital cortex of all participants, a reliable response in ventral occipital areas to colour stimuli elicited by the device, and strong activations in multisensory brain regions in the multisensory task. Overall, these findings confirm the suitability of this device for presenting complex fibre-optic visual and cross-modal stimuli inside the scanner. Copyright © 2011 Elsevier B.V. All rights reserved.
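
    As an illustration of the PC-side control the abstract describes, the sketch below sends a colour/brightness/timing command over a serial link using pyserial. The byte-level frame layout (header, LED index, RGB values, duration) is invented here purely for illustration; the authors' actual firmware interface is not specified in the abstract.

      import struct
      import time

      import serial  # pyserial

      # Hypothetical command frame: header byte, LED index, R, G, B, duration (ms).
      def set_led(port, led_index, rgb, duration_ms):
          frame = struct.pack(">BBBBBH", 0xAA, led_index, *rgb, duration_ms)
          port.write(frame)

      with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1) as port:
          set_led(port, 3, (255, 0, 0), 500)  # flash LED 3 red for 500 ms
          time.sleep(0.5)
          set_led(port, 3, (0, 0, 0), 0)      # switch LED 3 off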

  16. Multisensory Teaching of Basic Language Skills. Third Edition

    ERIC Educational Resources Information Center

    Birsh, Judith R., Ed.

    2011-01-01

    As new research shows how effective systematic and explicit teaching of language-based skills is for students with learning disabilities--along with the added benefits of multisensory techniques--discover the latest on this popular teaching approach with the third edition of this bestselling textbook. Adopted by colleges and universities across…

  17. Program Evaluation of a School District's Multisensory Reading Initiative

    ERIC Educational Resources Information Center

    Asip, Michael Patrick

    2012-01-01

    The purpose of this study was to conduct a formative program evaluation of a school district's multisensory reading initiative. The mixed methods study involved semi-structured interviews, online survey, focus groups, document review, and analysis of extant special education student reading achievement data. Participants included elementary…

  18. Investigation of Proprioceptor Stimulation.

    ERIC Educational Resources Information Center

    Caukins, Sivan E.; And Others

    A research proposal to study the effect of multisensory teaching methods in first-grade reading is presented. The focus is on sex differences in learning and in multisensory approaches to teaching. The project will involve 10 experimental and 10 control first-grade classes in several Southern California schools. Both groups will be given IQ,…

  19. One Approach to Teaching the Specific Language Disabled Adult Language Arts.

    ERIC Educational Resources Information Center

    Peterson, Binnie L.

    1981-01-01

    One approach never before used in adult language arts instruction--the Slingerland Simultaneous Multisensory Technique--has been found useful for specific language disabled adults in multisensory programs at Anchorage Community College. The Slingerland method builds from single sight, sound, and feel of letters through combinations, encoding,…

  20. Age-Related Differences in Audiovisual Interactions of Semantically Different Stimuli

    ERIC Educational Resources Information Center

    Viggiano, Maria Pia; Giovannelli, Fabio; Giganti, Fiorenza; Rossi, Arianna; Metitieri, Tiziana; Rebai, Mohamed; Guerrini, Renzo; Cincotta, Massimo

    2017-01-01

    Converging results have shown that adults benefit from congruent multisensory stimulation in the identification of complex stimuli, whereas the developmental trajectory of the ability to integrate multisensory inputs in children is less well understood. In this study we explored the effects of audiovisual semantic congruency on identification of…

  1. Multisensory Speech Perception in Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Woynaroski, Tiffany G.; Kwakye, Leslie D.; Foss-Feig, Jennifer H.; Stevenson, Ryan A.; Stone, Wendy L.; Wallace, Mark T.

    2013-01-01

    This study examined unisensory and multisensory speech perception in 8-17 year old children with autism spectrum disorders (ASD) and typically developing controls matched on chronological age, sex, and IQ. Consonant-vowel syllables were presented in visual only, auditory only, matched audiovisual, and mismatched audiovisual ("McGurk")…

  2. Multisensory Integration Affects Visuo-Spatial Working Memory

    ERIC Educational Resources Information Center

    Botta, Fabiano; Santangelo, Valerio; Raffone, Antonino; Sanabria, Daniel; Lupianez, Juan; Belardinelli, Marta Olivetti

    2011-01-01

    In the present study, we investigate how spatial attention, driven by unisensory and multisensory cues, can bias the access of information into visuo-spatial working memory (VSWM). In a series of four experiments, we compared the effectiveness of spatially-nonpredictive visual, auditory, or audiovisual cues in capturing participants' spatial…

  3. Multisensory Public Access Catalogs on CD-ROM.

    ERIC Educational Resources Information Center

    Harrison, Nancy; Murphy, Brower

    1987-01-01

    BiblioFile Intelligent Catalog is a CD-ROM-based public access catalog system which incorporates graphics and sound to provide a multisensory interface and artificial intelligence techniques to increase search precision. The system can be updated frequently and inexpensively by linking hard disk drives to CD-ROM optical drives. (MES)

  4. Improving Vocabulary Acquisition with Multisensory Instruction

    ERIC Educational Resources Information Center

    D'Alesio, Rosemary; Scalia, Maureen T.; Zabel, Renee M.

    2007-01-01

    The purpose of this action research project was to improve student vocabulary acquisition through a multisensory, direct instructional approach. The study involved three teachers and a target population of 73 students in second and seventh grade classrooms. The intervention was implemented from September through December of 2006 and analyzed in…

  5. Accelerating Early Language Development with Multi-Sensory Training

    ERIC Educational Resources Information Center

    Bjorn, Piia M.; Kakkuri, Irma; Karvonen, Pirkko; Leppanen, Paavo H. T.

    2012-01-01

    This paper reports the outcome of a multi-sensory intervention on infant language skills. A programme titled "Rhyming Game and Exercise Club", which included kinaesthetic-tactile mother-child rhyming games performed in natural joint attention situations, was intended to accelerate Finnish six- to eight-month-old infants' language development. The…

  6. The maxillary palp of aedes aegypti, a model of multisensory integration

    USDA-ARS?s Scientific Manuscript database

    Female yellow-fever mosquitoes, Aedes aegypti, are obligate blood-feeders and vectors of the pathogens that cause dengue fever, yellow fever and Chikungunya. This feeding behavior concludes a series of multisensory events guiding the mosquito to its host from a distance. The antennae and maxillary...

  7. Roughness Perception during the Rubber Hand Illusion

    ERIC Educational Resources Information Center

    Schutz-Bosbach, Simone; Tausche, Peggy; Weiss, Carmen

    2009-01-01

    Watching a rubber hand being stroked by a paintbrush while feeling identical stroking of one's own occluded hand can create a compelling illusion that the seen hand becomes part of one's own body. It has been suggested that this so-called rubber hand illusion (RHI) does not simply reflect a bottom-up multisensory integration process but that the…

  8. Neural Correlates of Interindividual Differences in Children’s Audiovisual Speech Perception

    PubMed Central

    Nath, Audrey R.; Fava, Eswen E.; Beauchamp, Michael S.

    2011-01-01

    Children use information from both the auditory and visual modalities to aid in understanding speech. A dramatic illustration of this multisensory integration is the McGurk effect, an illusion in which an auditory syllable is perceived differently when it is paired with an incongruent mouth movement. However, there are significant interindividual differences in McGurk perception: some children never perceive the illusion, while others always do. Because converging evidence suggests that the posterior superior temporal sulcus (STS) is a critical site for multisensory integration, we hypothesized that activity within the STS would predict susceptibility to the McGurk effect. To test this idea, we used blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) in seventeen children aged 6 to 12 years to measure brain responses to three audiovisual stimulus categories: McGurk incongruent, non-McGurk incongruent and congruent syllables. Two separate analysis approaches, one using independent functional localizers and another using whole-brain voxel-based regression, showed differences in the left STS between perceivers and non-perceivers. The STS of McGurk perceivers responded significantly more than non-perceivers to McGurk syllables, but not to other stimuli, and perceivers’ hemodynamic responses in the STS were significantly prolonged. In addition to the STS, weaker differences between perceivers and non-perceivers were observed in the FFA and extrastriate visual cortex. These results suggest that the STS is an important source of interindividual variability in children’s audiovisual speech perception. PMID:21957257

  9. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    PubMed Central

    Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.

    2014-01-01

    Advances in brain-computer interface (BCI) technology allow people to actively interact with the world through surrogates. Controlling real humanoid robots using a BCI as intuitively as we control our own body represents a challenge for current research in robotics and neuroscience. In order to interact successfully with the environment, the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the same stimuli delivered in isolation or in temporal sequence. Yet, little is known about whether audio-visual integration can improve the control of a surrogate. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between the footstep sounds and the humanoid's actual walk reduced the time required for steering the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the motor decisions of the BCI user and strengthen the feeling of control over the robot. Our results shed light on the possibility of increasing control over a robot by providing multisensory feedback to the BCI user. PMID:24987350

  10. Primary and multisensory cortical activity is correlated with audiovisual percepts.

    PubMed

    Benoit, Margo McKenna; Raij, Tommi; Lin, Fa-Hsuan; Jääskeläinen, Iiro P; Stufflebeam, Steven

    2010-04-01

    Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect, where visual /ka/ and auditory /pa/ fuse into another percept such as /ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/, voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/, voice /ka/). This paradigm was predicted to show increased release from adaptation (i.e. stronger brain activation) when the fourth movie and the related percept was increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion. Copyright 2009 Wiley-Liss, Inc.

  11. Primary and Multisensory Cortical Activity is Correlated with Audiovisual Percepts

    PubMed Central

    Benoit, Margo McKenna; Raij, Tommi; Lin, Fa-Hsuan; Jääskeläinen, Iiro P.; Stufflebeam, Steven

    2012-01-01

    Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect, where visual /ka/ and auditory /pa/ fuse into another percept such as /ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/, voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/, voice /ka/). This paradigm was predicted to show increased release from adaptation (i.e. stronger brain activation) when the fourth movie and the related percept was increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion. PMID:19780040

  12. The Marble-Hand Illusion

    PubMed Central

    Senna, Irene; Maravita, Angelo; Bolognini, Nadia; Parise, Cesare V.

    2014-01-01

    Our body is made of flesh and bones. We know it, and in our daily lives all the senses constantly provide converging information about this simple, factual truth. But is this always the case? Here we report a surprising bodily illusion demonstrating that humans rapidly update their assumptions about the material qualities of their body, based on their recent multisensory perceptual experience. To induce a misperception of the material properties of the hand, we repeatedly gently hit participants' hand with a small hammer, while progressively replacing the natural sound of the hammer against the skin with the sound of a hammer hitting a piece of marble. After five minutes, the hand started feeling stiffer, heavier, harder, less sensitive, unnatural, and showed enhanced Galvanic skin response (GSR) to threatening stimuli. Notably, such a change in skin conductivity positively correlated with changes in perceived hand stiffness. Conversely, when hammer hits and impact sounds were temporally uncorrelated, participants did not spontaneously report any changes in the perceived properties of the hand, nor did they show any modulation in GSR. In two further experiments, we ruled out that mere audio-tactile synchrony is the causal factor triggering the illusion, further demonstrating the key role of material information conveyed by impact sounds in modulating the perceived material properties of the hand. This novel bodily illusion, the 'Marble-Hand Illusion', demonstrates that the perceived material of our body, surely the most stable attribute of our bodily self, can be quickly updated through multisensory integration. PMID:24621793

  13. Sugar reduction without compromising sensory perception. An impossible dream?

    PubMed

    Hutchings, Scott C; Low, Julia Y Q; Keast, Russell S J

    2018-03-21

    Sugar reduction is a major technical challenge that the food industry must address in response to public health concerns regarding the amount of added sugars in foods. This paper reviews sweet taste perception, sensory methods to evaluate sugar reduction, and the merits of the different techniques available to reduce sugar content. The use of sugar substitutes (non-nutritive sweeteners, sugar alcohols, and fibres) can achieve the greatest magnitude of sugar and energy reduction; however, bitter side tastes and altered temporal sweetness profiles are common issues. The use of multisensory integration principles (particularly aroma) can be an effective approach to reducing sugar content, although the magnitude of the reduction achievable this way is small. Innovation in food structure (modifying the sucrose distribution, serum release, and fracture mechanics) offers a new way to reduce sugar without significant changes in food composition, but may be difficult to implement in food produced on a large scale. Gradual sugar reduction presents difficulties for food companies from a sales perspective if acceptability is compromised. Ultimately, a holistic approach in which food manufacturers integrate a range of these techniques is likely to yield the best progress. However, substantial reduction of sugar in processed foods without compromising sensory properties may be an impossible dream.

  14. Sound Richness of Music Might Be Mediated by Color Perception: A PET Study.

    PubMed

    Satoh, Masayuki; Nagata, Ken; Tomimoto, Hidekazu

    2015-01-01

    We investigated the role of the fusiform cortex in music processing with PET, focusing on the perception of sound richness. Musically naïve subjects listened to familiar melodies with three kinds of accompaniment: (i) an accompaniment composed of only three basic chords (chord condition), (ii) a simple accompaniment typical of traditional elementary-school music textbooks (simple condition), and (iii) an accompaniment with rich and flowery sounds composed by a professional composer (complex condition). Using a PET subtraction technique, we studied changes in regional cerebral blood flow (rCBF) for the simple minus chord, complex minus simple, and complex minus chord contrasts. All three contrasts consistently showed increased rCBF in the posterior portion of the inferior temporal gyrus, including the lateral occipital complex (LOC) and fusiform gyrus. We may conclude that certain association cortices such as the LOC and the fusiform cortex represent centers of multisensory integration, with foreground and background segregation occurring at the level of the LOC and the recognition of the richness and floweriness of stimuli occurring in the fusiform cortex, in both vision and audition.

  15. Multisensory Teaching of Basic Language Skills. Second Edition

    ERIC Educational Resources Information Center

    Birsh, Judith R., Ed.

    2005-01-01

    For students with dyslexia and other learning disabilities--and for their peers--creative teaching methods that use two or more senses can dramatically improve language skills and academic outcomes. That is why every current and future educator needs the second edition of this definitive guide to multisensory teaching. A core text for a variety of…

  16. The Multisensory Sound Lab: Sounds You Can See and Feel.

    ERIC Educational Resources Information Center

    Lederman, Norman; Hendricks, Paula

    1994-01-01

    A multisensory sound lab has been developed at the Model Secondary School for the Deaf (District of Columbia). A special floor allows vibrations to be felt, and a spectrum analyzer displays frequencies and harmonics visually. The lab is used for science education, auditory training, speech therapy, music and dance instruction, and relaxation…

  17. Differentiating for Struggling Readers and Writers: Improving Motivation and Metacognition through Multisensory Methods & Explicit Strategy Instruction

    ERIC Educational Resources Information Center

    Walet, Jennifer

    2011-01-01

    This paper examines the issue of struggling readers and writers, and offers suggestions to help teachers increase struggling students' motivation and metacognition. Suggestions include multisensory methods that make use of the visual, auditory and kinesthetic learning pathways, as well as explicit strategy instruction to improve students' ability…

  18. Multisensory Information Boosts Numerical Matching Abilities in Young Children

    ERIC Educational Resources Information Center

    Jordan, Kerry E.; Baker, Joseph

    2011-01-01

    This study presents the first evidence that preschool children perform more accurately in a numerical matching task when given multisensory rather than unisensory information about number. Three- to 5-year-old children learned to play a numerical matching game on a touchscreen computer, which asked them to match a sample numerosity with a…

  19. Technologically and Artistically Enhanced Multi-Sensory Computer-Programming Education

    ERIC Educational Resources Information Center

    Katai, Zoltan; Toth, Laszlo

    2010-01-01

    Over the last decades more and more research has analysed relatively new or rediscovered teaching-learning concepts like blended, hybrid, multi-sensory or technologically enhanced learning. This increased interest in these educational forms can be explained by new exciting discoveries in brain research and cognitive psychology, as well as by the…

  20. Please! Teach All of Me: Multisensory Activities for Preschoolers.

    ERIC Educational Resources Information Center

    Crawford, Jackie; Hanson, Joni; Gums, Marcia; Neys, Paula

    Most people, including children, have preferences for how they learn about the world. When these preferences are clearly noticeable, they may be thought of as sensory strengths. For some children, sensory strengths develop because of a weakness in another sensory area. For these children, multisensory instruction can be very helpful. Multisensory…

  1. Role of multisensory stimuli in vigilance enhancement- a single trial event related potential study.

    PubMed

    Abbasi, Nida Itrat; Bodala, Indu Prasad; Bezerianos, Anastasios; Yu Sun; Al-Nashash, Hasan; Thakor, Nitish V

    2017-07-01

    The development of interventions to prevent vigilance decrement has important applications in sensitive areas like transportation and defence. The objective of this work is to use multisensory (visual and haptic) stimuli for cognitive enhancement during mundane tasks. Two different epoch intervals, representing sensory perception and motor response, were analysed using minimum variance distortionless response (MVDR) based single-trial ERP estimation to understand how performance depends on each factor. Bereitschaftspotential (BP) latency L3 was strongly correlated with reaction time (r=0.6 in phase 1, visual only; r=0.71 in phase 2, visual and haptic), whereas sensory ERP latency L2 was not (r=0.1 in both phases). This implies that low performance in monotonous tasks is predominantly driven by the prolonged neural interaction with the muscles required to initiate movement. Further, a negative relationship was found between these sensory-perception and Bereitschaftspotential latencies and the occurrence of epochs in which multisensory cues were provided. This suggests that multisensory stimulus presentation reduces vigilance decrement in prolonged monotonous tasks.
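
    The latency-performance relationship reported above is a simple Pearson correlation. The sketch below, using invented single-trial values, shows the computation in Python; it is not the authors' pipeline.

      import numpy as np
      from scipy.stats import pearsonr

      # Hypothetical single-trial estimates: BP latency (ms) and reaction time (ms).
      bp_latency    = np.array([310, 355, 290, 400, 372, 335, 421, 298])
      reaction_time = np.array([452, 510, 430, 575, 540, 488, 602, 445])

      r, p = pearsonr(bp_latency, reaction_time)
      print(f"r = {r:.2f}, p = {p:.3f}")  # a strong positive r, as reported above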

  2. Congruent and Opposite Neurons as Partners in Multisensory Integration and Segregation

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Hao; Wong, K. Y. Michael; Wang, He; Wu, Si

    Experiments have revealed that in the brain areas where visual and vestibular cues are integrated to infer heading direction, two types of neurons exist in roughly equal numbers: congruent cells respond similarly to visual and vestibular cues, whereas opposite cells respond to them in opposite ways. Congruent neurons are known to be responsible for cue integration, but the computational role of opposite neurons remains largely unknown. We propose that opposite neurons may serve to encode the disparity information between cues that is necessary for multisensory segregation. We build a computational model composed of two reciprocally coupled modules, each consisting of groups of congruent and opposite neurons. Our model reproduces the characteristics of congruent and opposite neurons, and demonstrates that in each module, congruent and opposite neurons can jointly achieve optimal multisensory information integration and segregation. This study sheds light on our understanding of how the brain implements optimal multisensory integration and segregation concurrently in a distributed manner. This work is supported by the Research Grants Council of Hong Kong (N _HKUST606/12, 605813, and 16322616) and National Basic Research Program of China (2014CB846101) and the Natural Science Foundation of China (31261160495).
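
    A toy sketch of the tuning arrangement described above: a congruent cell has the same preferred heading for both cues, while an opposite cell's visual and vestibular preferences are offset by 180 degrees, so its output tracks cue disparity. The von Mises tuning curve and all parameters are illustrative assumptions, not the authors' model.

      import numpy as np

      def tuning(theta, pref, kappa=2.0):
          """Von Mises tuning curve centred on a preferred heading (radians)."""
          return np.exp(kappa * (np.cos(theta - pref) - 1))

      def cell_response(vis_heading, vest_heading, vis_pref, vest_pref):
          """Summed drive from a visual and a vestibular cue."""
          return tuning(vis_heading, vis_pref) + tuning(vest_heading, vest_pref)

      vis, vest = 0.0, np.pi / 6                         # slightly discrepant cues
      congruent = cell_response(vis, vest, 0.0, 0.0)     # same preferences
      opposite  = cell_response(vis, vest, 0.0, np.pi)   # preferences 180 deg apart
      # As the cue discrepancy grows, the opposite cell's response rises relative
      # to the congruent cell's, signalling that the cues should be segregated.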

  3. Multisensory integration of colors and scents: insights from bees and flowers.

    PubMed

    Leonard, Anne S; Masek, Pavel

    2014-06-01

    Karl von Frisch's studies of bees' color vision and chemical senses opened a window into the perceptual world of a species other than our own. A century of subsequent research on bees' visual and olfactory systems has developed along two productive but independent trajectories, leaving the questions of how and why bees use these two senses in concert largely unexplored. Given current interest in multimodal communication and recently discovered interplay between olfaction and vision in humans and Drosophila, understanding multisensory integration in bees is an opportunity to advance knowledge across fields. Using a classic ethological framework, we formulate proximate and ultimate perspectives on bees' use of multisensory stimuli. We discuss interactions between scent and color in the context of bee cognition and perception, focusing on mechanistic and functional approaches, and we highlight opportunities to further explore the development and evolution of multisensory integration. We argue that although the visual and olfactory worlds of bees are perhaps the best-studied of any non-human species, research focusing on the interactions between these two sensory modalities is vitally needed.

  4. Increases in the autistic trait of attention to detail are associated with decreased multisensory temporal adaptation.

    PubMed

    Stevenson, Ryan A; Toulmin, Jennifer K; Youm, Ariana; Besney, Richard M A; Schulz, Samantha E; Barense, Morgan D; Ferber, Susanne

    2017-10-30

    Recent empirical evidence suggests that autistic individuals perceive the world differently than their typically-developed peers. One theoretical account, the predictive coding hypothesis, posits that autistic individuals show a decreased reliance on previous perceptual experiences, which may relate to autism symptomatology. We tested this through a well-characterized, audiovisual statistical-learning paradigm in which typically-developed participants were first adapted to consistent temporal relationships between audiovisual stimulus pairs (audio-leading, synchronous, visual-leading) and then performed a simultaneity judgement task with audiovisual stimulus pairs varying in temporal offset from auditory-leading to visual-leading. Following exposure to the visual-leading adaptation phase, participants' perception of synchrony was biased towards visual-leading presentations, reflecting the statistical regularities of their previously experienced environment. Importantly, the strength of adaptation was significantly related to the level of autistic traits that the participant exhibited, measured by the Autism Quotient (AQ). This was specific to the Attention to Detail subscale of the AQ that assesses the perceptual propensity to focus on fine-grain aspects of sensory input at the expense of more integrative perceptions. More severe Attention to Detail was related to weaker adaptation. These results support the predictive coding framework, and suggest that changes in sensory perception commonly reported in autism may contribute to autistic symptomatology.
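
    A common way to quantify the adaptation effect described above is to fit a Gaussian to the proportion of "synchronous" responses across stimulus onset asynchronies and track the shift of its peak, the point of subjective simultaneity (PSS). A minimal sketch with invented data follows; it is not the authors' analysis code.

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical simultaneity-judgement data: stimulus onset asynchrony
      # (ms, negative = audio-leading) and proportion of "synchronous" responses.
      soa    = np.array([-300, -200, -100, 0, 100, 200, 300])
      p_sync = np.array([0.05, 0.25, 0.70, 0.95, 0.85, 0.45, 0.10])

      def gaussian(x, amp, mu, sigma):
          return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

      (amp, mu, sigma), _ = curve_fit(gaussian, soa, p_sync, p0=[1.0, 0.0, 100.0])
      print(f"PSS = {mu:.1f} ms")  # a positive PSS indicates a visual-leading bias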

  5. Audiovisual temporal recalibration: space-based versus context-based.

    PubMed

    Yuan, Xiangyong; Li, Baolin; Bi, Cuihua; Yin, Huazhan; Huang, Xiting

    2012-01-01

    Recalibration of perceived simultaneity has been widely accepted to minimise delays between multisensory signals owing to different physical and neural conduction times. With concurrent exposure, temporal recalibration is either contextually or spatially based. Context-based recalibration was recently described in detail, but evidence for space-based recalibration is scarce. In addition, the competition between these two reference frames is unclear. Here, participants watched two distinct blob-and-tone pairs that alternated between lateral positions, one pair asynchronous and the other synchronous, and then judged their perceived simultaneity and temporal order as the pairs swapped positions and varied in timing. For low-level stimuli with abundant auditory location cues, space-based aftereffects were significantly more apparent (8.3%) than context-based aftereffects (4.2%); without such auditory cues, space-based aftereffects were less apparent (4.4%) and numerically smaller than context-based aftereffects (6.0%). These results suggest that stimulus level and auditory location cues are both determinants of the recalibration frame. Through such joint judgments and a simple reaction time task, our results further revealed that the criteria for perceiving simultaneity versus succession shifted markedly across adaptations without accompanying changes in perceptual latency, implying that criterion shifts, rather than perceptual latency changes, account for both space-based and context-based temporal recalibration.

  6. Perception of the Multisensory Coherence of Fluent Audiovisual Speech in Infancy: Its Emergence & the Role of Experience

    PubMed Central

    Lewkowicz, David J.; Minar, Nicholas J.; Tift, Amy H.; Brandon, Melissa

    2014-01-01

    To investigate the developmental emergence of the ability to perceive the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8–10, and 12–14 month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor the 8–10 month-old infants exhibited audio-visual matching in that neither group exhibited greater looking at the matching monologue. In contrast, the 12–14 month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, they perceived the multisensory coherence of native-language monologues earlier in the test trials than of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12–14 month olds did not depend on audio-visual synchrony whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audio-visual synchrony cues are more important in the perception of the multisensory coherence of non-native than native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. PMID:25462038

  7. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

    PubMed

    Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa

    2015-02-01

    To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Relations between social-perceptual ability in multi- and unisensory contexts, autonomic reactivity, and social functioning in individuals with Williams syndrome

    PubMed Central

    Järvinen, Anna; Ng, Rowena; Crivelli, Davide; Arnold, Andrew J.; Woo-VonHoogenstyn, Nicholas; Bellugi, Ursula

    2015-01-01

    Compromised social-perceptual ability has been proposed to contribute to social dysfunction in neurodevelopmental disorders. While such impairments have been identified in Williams syndrome (WS), little is known about emotion processing in auditory and multisensory contexts. Employing a multidimensional approach, individuals with WS and typical development (TD) were tested for emotion identification across fearful, happy, and angry multisensory and unisensory face and voice stimuli. Autonomic responses were monitored in response to unimodal emotion. The WS group was administered an inventory of social functioning. Behaviorally, individuals with WS relative to TD demonstrated impaired processing of unimodal vocalizations and emotionally incongruent audiovisual compounds, reflecting a generalized deficit in social-auditory processing in WS. The TD group outperformed their counterparts with WS in identifying negative (fearful and angry) emotion, with similar between-group performance with happy stimuli. Mirroring this pattern, electrodermal activity (EDA) responses to the emotional content of the stimuli indicated that whereas those with WS showed the highest arousal to happy, and lowest arousal to fearful stimuli, the TD participants demonstrated the contrasting pattern. In WS, more normal social functioning was related to higher autonomic arousal to facial expressions. Implications for underlying neural architecture and emotional functions are discussed. PMID:26002754

  9. Effects of Multisensory Therapy on Behaviour of Adult Clients with Developmental Disabilities.

    PubMed

    Sally, Chan; David, Thompson R; Chau, P C; Tam, W; Chiu, I Ws

    The objective of this review was to present the best available evidence on the effect of multisensory therapy in adult clients with developmental disabilities on the frequency of challenging behaviour, stereotypic self-stimulating behaviour, and relaxing behaviour. Inclusion criteria: the review summarised all relevant studies of multisensory therapy interventions, namely trials that included adult clients (aged 18-60) living in institutions and diagnosed with mental retardation according to the Diagnostic and Statistical Manual of Mental Disorders: IV classification, or with an Intelligence Quotient < 70. Types of interventions: multisensory therapy, multisensory environments, and Snoezelen. Types of outcome measures: challenging behaviour, stereotypic self-stimulating behaviour, and relaxing behaviour. Types of studies: any randomised or quasi-randomised controlled trials investigating the effectiveness of multisensory therapy in adult clients with developmental disabilities; because of the limited number of high-quality RCTs on this subject, papers using other experimental or observational designs were also included. Electronic databases were searched for primary publications, and the reference lists and bibliographies of retrieved articles were reviewed to identify research not located through other search strategies. Two reviewers assessed all identified abstracts, and full reports were retrieved for all studies that met the inclusion criteria; studies identified from bibliography searches were assessed on the study title. Methodological quality was assessed by two reviewers using a checklist, with disagreements resolved by discussion with a third reviewer. Data were extracted independently by two reviewers using a data extraction tool; a third reviewer dealt with disagreements. All studies reported the percentage of clients in each category and/or the change in group mean score for each outcome. Where appropriate, results from comparable groups of studies were pooled in statistical meta-analysis using Review Manager software from the Cochrane Collaboration. Odds ratios (for categorical outcome data) or weighted mean differences (for continuous data) and their 95% confidence intervals were calculated for each analysis, and heterogeneity between combined studies was tested using the standard chi-square test. For the purposes of this review, intention-to-treat and/or completer analyses were performed where possible; where statistical pooling was not appropriate or possible, the findings were summarised in narrative form. Of the 130 publications identified through the database searches and reviews of reference lists and bibliographies, only 15 English-language publications were included in the review. The evidence showed that multisensory therapy promoted participants' positive emotions: participants reported being happier and more relaxed, and displayed more positive and fewer negative emotions after therapy sessions. This systematic review therefore demonstrated a beneficial effect of multisensory therapy in promoting participants' positive emotions. However, 12 of the 15 reviewed studies had a single treatment group only.
    While the reviewers acknowledge the difficulty of carrying out randomised controlled trials in people with developmental disabilities and challenging behaviour, the lack of trial-derived evidence makes it difficult to draw a strong conclusion about the effectiveness of multisensory therapy.
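
    For reference, the odds ratio and its 95% confidence interval mentioned above follow from the standard 2x2-table formulas, with the standard error of the log odds ratio equal to sqrt(1/a + 1/b + 1/c + 1/d). The counts below are invented for illustration.

      import math

      def odds_ratio_ci(a, b, c, d, z=1.96):
          """Odds ratio and 95% CI from a 2x2 table: a/b = events/non-events
          in the treatment group, c/d = events/non-events in the control group."""
          or_ = (a * d) / (b * c)
          se  = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
          lo  = math.exp(math.log(or_) - z * se)
          hi  = math.exp(math.log(or_) + z * se)
          return or_, (lo, hi)

      print(odds_ratio_ci(12, 8, 5, 15))  # hypothetical counts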

  10. Self-motion perception in autism is compromised by visual noise but integrated optimally across multiple senses

    PubMed Central

    Zaidel, Adam; Goin-Kochel, Robin P.; Angelaki, Dora E.

    2015-01-01

    Perceptual processing in autism spectrum disorder (ASD) is marked by superior low-level task performance and inferior complex-task performance. This observation has led to theories of defective integration in ASD of local parts into a global percept. Despite mixed experimental results, this notion maintains widespread influence and has also motivated recent theories of defective multisensory integration in ASD. Impaired ASD performance in tasks involving classic random dot visual motion stimuli, corrupted by noise as a means to manipulate task difficulty, is frequently interpreted to support this notion of global integration deficits. By manipulating task difficulty independently of visual stimulus noise, here we test the hypothesis that heightened sensitivity to noise, rather than integration deficits, may characterize ASD. We found that although perception of visual motion through a cloud of dots was unimpaired without noise, the addition of stimulus noise significantly affected adolescents with ASD, more than controls. Strikingly, individuals with ASD demonstrated intact multisensory (visual–vestibular) integration, even in the presence of noise. Additionally, when vestibular motion was paired with pure visual noise, individuals with ASD demonstrated a different strategy than controls, marked by reduced flexibility. This result could be simulated by using attenuated (less reliable) and inflexible (not experience-dependent) Bayesian priors in ASD. These findings question widespread theories of impaired global and multisensory integration in ASD. Rather, they implicate increased sensitivity to sensory noise and less use of prior knowledge in ASD, suggesting increased reliance on incoming sensory information. PMID:25941373
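
    The "integrated optimally" claim above refers to the standard maximum-likelihood cue-combination model, in which each cue is weighted by its inverse variance. The sketch below illustrates that model with invented values; inflating the visual sigma (as stimulus noise does) shifts the weight toward the vestibular cue.

      import numpy as np

      def fuse(mu_vis, sigma_vis, mu_vest, sigma_vest):
          """Reliability-weighted (maximum-likelihood) cue combination."""
          w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_vest**2)
          mu    = w_vis * mu_vis + (1 - w_vis) * mu_vest
          sigma = np.sqrt(1 / (1 / sigma_vis**2 + 1 / sigma_vest**2))
          return mu, sigma

      print(fuse(mu_vis=10.0, sigma_vis=2.0, mu_vest=0.0, sigma_vest=4.0))  # vision dominates
      print(fuse(mu_vis=10.0, sigma_vis=8.0, mu_vest=0.0, sigma_vest=4.0))  # vestibular dominates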

  11. Examining the Effectiveness of a Multi-Sensory Instructional Reading Program in One Rural Midwestern School District

    ERIC Educational Resources Information Center

    Waldvogel, Steven John

    2010-01-01

    Scope and method of study: The purpose of this research study was to examine the effectiveness of an (IMSE) Orton-Gillingham based multi-sensory instructional reading program when incorporated with kindergarten through first grade classroom reading instruction in one rural Midwestern school district. The IMSE supplemental reading program is…

  12. Effects of Multisensory Environments on Stereotyped Behaviours Assessed as Maintained by Automatic Reinforcement

    ERIC Educational Resources Information Center

    Hill, Lindsay; Trusler, Karen; Furniss, Frederick; Lancioni, Giulio

    2012-01-01

    Background: The aim of the present study was to evaluate the effects of the sensory equipment provided in a multi-sensory environment (MSE) and the level of social contact provided on levels of stereotyped behaviours assessed as being maintained by automatic reinforcement. Method: Stereotyped and engaged behaviours of two young people with severe…

  13. Multi-Sensory Exercises: An Approach to Communicative Practice. 1975-1979.

    ERIC Educational Resources Information Center

    Kalivoda, Theodore B.

    A reprint of a 1975 article on multi-sensory exercises for communicative second language learning is presented. The article begins by noting that the use of drills as a language learning and practice technique had been lost in the trend toward communicative language teaching, but that drills can provide a means of gaining functional control of…

  14. The Impact of Using Multi-Sensory Approach for Teaching Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Obaid, Majeda Al Sayyed

    2013-01-01

    The purpose of this study is to investigate the effect of using the Multi-Sensory Approach for teaching students with learning disabilities on the sixth grade students' achievement in mathematics at Jordanian public schools. To achieve the purpose of the study, a pre/post-test was constructed to measure students' achievement in mathematics. The…

  15. Dynamics of cortico-subcortical cross-modal operations involved in audio-visual object detection in humans.

    PubMed

    Fort, Alexandra; Delpuech, Claude; Pernier, Jacques; Giard, Marie-Hélène

    2002-10-01

    Very recently, a number of neuroimaging studies in humans have begun to investigate how the brain integrates information from different sensory modalities to form unified percepts. Already, intermodal neural processing appears to depend on the modalities of the inputs and on the nature (speech/non-speech) of the information to be combined. Yet, the variety of paradigms, stimuli, and techniques used makes it difficult to understand the relationships between the factors operating at the perceptual level and the underlying physiological processes. In a previous experiment, we used event-related potentials to describe the spatio-temporal organization of audio-visual interactions during a bimodal object recognition task. Here we examined the network of cross-modal interactions involved in simple detection of the same objects. The objects were defined either by unimodal auditory or visual features alone, or by the combination of the two features. As expected, subjects detected bimodal stimuli more rapidly than either unimodal stimulus. Combined analysis of potentials, scalp current densities, and dipole modeling revealed several interaction patterns within the first 200 ms post-stimulus: in occipito-parietal visual areas (45-85 ms), in deep brain structures, possibly the superior colliculus (105-140 ms), and in right temporo-frontal regions (170-185 ms). These interactions differed from those found during object identification in sensory-specific areas and possibly in the superior colliculus, indicating that the neural operations governing multisensory integration depend crucially on the nature of the perceptual processes involved.
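
    Cross-modal interactions in this ERP literature are typically identified with the additive criterion: the bimodal response is compared with the sum of the unimodal responses, and any reliable deviation counts as an interaction. The sketch below shows the arithmetic on placeholder waveforms; the data are random stand-ins, not the authors' recordings.

      import numpy as np

      # Hypothetical grand-average ERP waveforms (one value per time sample)
      # for auditory-only (A), visual-only (V), and bimodal (AV) conditions.
      rng = np.random.default_rng(0)
      erp_a, erp_v, erp_av = rng.normal(size=(3, 500))

      # Additive criterion: deviation of AV from A + V at each latency is
      # taken as evidence of a cross-modal interaction.
      interaction = erp_av - (erp_a + erp_v)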

  16. Drawing Bodies and Spaces in Telecollaboration: A View of Research Potential in Synaesthesia and Multimodality, from the Outside

    ERIC Educational Resources Information Center

    Malinowski, David

    2014-01-01

    While much scholarship on the multisensory and transmodal phenomenon of synaesthesia seeks to uncover its psychophysiological and neurological bases, recent research in multimodal literacy and language acquisition addresses it largely in terms of agentive processes of meaning-making and design. This paper takes as its starting point the latter's…

  17. Perceptual Literacy and the Construction of Significant Meanings within Art Education

    ERIC Educational Resources Information Center

    Cerkez, Beatriz Tomsic

    2014-01-01

    In order to verify how important the ability to process visual images and sounds in a holistic way can be, we developed an experiment based on the production and reception of an art work that was conceived as a multi-sensorial experience and implied a complex understanding of visual and auditory information. We departed from the idea that to…

  18. Parameters of Semantic Multisensory Integration Depend on Timing and Modality Order among People on the Autism Spectrum: Evidence from Event-Related Potentials

    ERIC Educational Resources Information Center

    Russo, N.; Mottron, L.; Burack, J. A.; Jemel, B.

    2012-01-01

    Individuals with autism spectrum disorders (ASD) report difficulty integrating simultaneously presented visual and auditory stimuli (Iarocci & McDonald, 2006), albeit showing enhanced perceptual processing of unisensory stimuli, as well as an enhanced role of perception in higher-order cognitive tasks (Enhanced Perceptual Functioning (EPF) model;…

  19. Methods and Apparatus for Autonomous Robotic Control

    NASA Technical Reports Server (NTRS)

    Gorshechnikov, Anatoly (Inventor); Livitz, Gennady (Inventor); Versace, Massimiliano (Inventor); Palma, Jesse (Inventor)

    2017-01-01

    Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on "stovepiped," or isolated processing, with little interactions between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment. This enables the robot to navigate its environment more quickly and with lower computational and power requirements.

  20. Exploring the unconscious using faces.

    PubMed

    Axelrod, Vadim; Bar, Moshe; Rees, Geraint

    2015-01-01

    Understanding the mechanisms of unconscious processing is one of the most substantial endeavors of cognitive science. While there are many different empirical ways to address this question, the use of faces in such research has proven exceptionally fruitful. We review here what has been learned about unconscious processing through the use of faces and face-selective neural correlates. A large number of cognitive systems can be explored with faces, including emotions, social cueing and evaluation, attention, multisensory integration, and various aspects of face processing. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Multi-sensory integration in a small brain

    NASA Astrophysics Data System (ADS)

    Gepner, Ruben; Wolk, Jason; Gershow, Marc

    Understanding how fluctuating multi-sensory stimuli are integrated and transformed in neural circuits has proved a difficult task. To address this question, we study the sensori-motor transformations happening in the brain of the Drosophila larva, a tractable model system with about 10,000 neurons. Using genetic tools that allow us to manipulate the activity of individual brain cells through their transparent body, we observe the stochastic decisions made by freely-behaving animals as their visual and olfactory environments fluctuate independently. We then use simple linear-nonlinear models to correlate outputs with relevant features in the inputs, and adaptive filtering processes to track changes in these relevant parameters used by the larva's brain to make decisions. We show how these techniques allow us to probe how statistics of stimuli from different sensory modalities combine to affect behavior, and can potentially guide our understanding of how neural circuits are anatomically and functionally integrated. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
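
    As a rough illustration of the linear-nonlinear approach mentioned above, the following Python sketch recovers a linear filter by reverse correlation (an event-triggered average) from a simulated stimulus and binary behavioural events; the filter shape, sigmoid nonlinearity and all parameters are hypothetical, not taken from the study.

        # Hypothetical sketch of the linear-nonlinear (LN) approach: estimate a
        # linear filter by reverse correlation of a fluctuating stimulus with
        # binary behavioural events generated through a static nonlinearity.
        import numpy as np

        rng = np.random.default_rng(1)
        n_steps, filt_len = 20000, 50

        stimulus = rng.normal(size=n_steps)                # e.g., light intensity
        true_filter = np.exp(-np.arange(filt_len) / 10.0)  # assumed ground truth

        # Simulate stochastic decisions (e.g., larval turns) driven by the
        # filtered stimulus passed through a sigmoid nonlinearity.
        drive = np.convolve(stimulus, true_filter, mode="full")[:n_steps]
        p_turn = 1.0 / (1.0 + np.exp(-(drive - 2.0)))
        turns = rng.random(n_steps) < p_turn

        # Reverse correlation: average the stimulus preceding each event
        # (the event-triggered average) to recover the linear filter.
        event_times = np.nonzero(turns)[0]
        event_times = event_times[event_times >= filt_len]
        sta = np.mean([stimulus[i - filt_len:i][::-1] for i in event_times], axis=0)

        print("filter/STA correlation:",
              np.corrcoef(sta, true_filter)[0, 1].round(3))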

  2. Different patterns of modality dominance across development.

    PubMed

    Barnhart, Wesley R; Rivera, Samuel; Robinson, Christopher W

    2018-01-01

    The present study sought to better understand how children, young adults, and older adults attend and respond to multisensory information. In Experiment 1, young adults were presented with two spoken words, two pictures, or two word-picture pairings and they had to determine if the two stimuli/pairings were exactly the same or different. Pairing the words and pictures together slowed down visual but not auditory response times and delayed the latency of first fixations, both of which are consistent with a proposed mechanism underlying auditory dominance. Experiment 2 examined the development of modality dominance in children, young adults, and older adults. Cross-modal presentation attenuated visual accuracy and slowed down visual response times in children, whereas older adults showed the opposite pattern, with cross-modal presentation attenuating auditory accuracy and slowing down auditory response times. Cross-modal presentation also delayed first fixations in children and young adults. Mechanisms underlying modality dominance and multisensory processing are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Detection of Iberian ham aroma by a semiconductor multisensorial system.

    PubMed

    Otero, Laura; Horrillo, M A Carmen; García, María; Sayago, Isabel; Aleixandre, Manuel; Fernández, M A Jesús; Arés, Luis; Gutiérrez, Javier

    2003-11-01

    A semiconductor multisensorial system, based on tin oxide, to control the quality of dry-cured Iberian hams is described. Two types of ham (submitted to different drying temperatures) were selected. Good responses were obtained from the 12 elements forming the multisensor for different operating temperatures. Discrimination between the two types of ham was successfully realised through principal component analysis (PCA).
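
    The discrimination step lends itself to a compact illustration. Below is a minimal Python sketch of PCA applied to responses of a 12-element sensor array, using scikit-learn; the response matrices are random placeholders, since the study's actual measurements are not reproduced here.

        # Hypothetical sketch of the PCA step: project 12-element gas-sensor
        # responses onto two principal components to visualise class separation.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(2)

        # Placeholder response matrices (samples x 12 sensors); real data would
        # come from the tin-oxide multisensor measurements described above.
        ham_type_a = rng.normal(loc=1.0, size=(20, 12))
        ham_type_b = rng.normal(loc=1.5, size=(20, 12))
        X = np.vstack([ham_type_a, ham_type_b])
        labels = np.array([0] * 20 + [1] * 20)

        # Standardize sensor channels, then reduce to two components.
        scores = PCA(n_components=2).fit_transform(
            StandardScaler().fit_transform(X))

        for lab, name in [(0, "type A"), (1, "type B")]:
            centroid = scores[labels == lab].mean(axis=0)
            print(f"{name} centroid in PC space: {centroid.round(2)}")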

  4. The Effect of a Suggested Multisensory Phonics Program on Developing Kindergarten Pre-Service Teachers' EFL Reading Accuracy and Phonemic Awareness

    ERIC Educational Resources Information Center

    Ghoneim, Nahed Mohammed Mahmoud; Elghotmy, Heba Elsayed Abdelsalam

    2015-01-01

    The current study investigates the effect of a suggested multisensory phonics program on developing kindergarten pre-service teachers' EFL reading accuracy and phonemic awareness. A total of 40 fourth year kindergarten pre-service teachers, Faculty of Education, participated in the study that involved one group experimental design. Pre-post tests…

  5. The Impact of Multisensory Instruction on Learning Letter Names and Sounds, Word Reading, and Spelling

    ERIC Educational Resources Information Center

    Schlesinger, Nora W.; Gray, Shelley

    2017-01-01

    The purpose of this study was to investigate whether the use of simultaneous multisensory structured language instruction promoted better letter name and sound production, word reading, and word spelling for second grade children with typical development (N = 6) or with dyslexia (N = 5) than structured language instruction alone. The use of…

  6. Multi-Sensory Storytelling for Persons with Profound Intellectual and Multiple Disabilities: An Analysis of the Development, Content and Application in Practice

    ERIC Educational Resources Information Center

    ten Brug, Annet; van der Putten, Annette; Penne, Anneleen; Maes, Bea; Vlaskamp, Carla

    2012-01-01

    Background: Multi-sensory storytelling (MSST) books are individualized stories, which involve sensory stimulation in addition to verbal text. Despite the frequent use of MSST in practice, little research is conducted into its structure, content and effectiveness. This study aims at the analysis of the development, content and application in…

  7. Using the TouchMath Program to Teach Mathematical Computation to At-Risk Students and Students with Disabilities

    ERIC Educational Resources Information Center

    Ellingsen, Ryleigh; Clinton, Elias

    2017-01-01

    This manuscript reviews the empirical literature of the TouchMath© instructional program. The TouchMath© program is a commercial mathematics series that uses a dot notation system to provide multisensory instruction of computation skills. Using the program, students are taught to solve computational tasks in a multisensory manner that does not…

  8. Multisensory Integration of Low-Level Information in Autism Spectrum Disorder: Measuring Susceptibility to the Flash-Beep Illusion

    ERIC Educational Resources Information Center

    Bao, Vanessa A.; Doobay, Victoria; Mottron, Laurent; Collignon, Olivier; Bertone, Armando

    2017-01-01

    Previous studies have suggested audiovisual multisensory integration (MSI) may be atypical in Autism Spectrum Disorder (ASD). However, much of the research having found an alteration in MSI in ASD involved socio-communicative stimuli. The goal of the current study was to investigate MSI abilities in ASD using lower-level stimuli that are not…

  9. Meta-Analysis of the Effectiveness of Individual Intervention in the Controlled Multisensory Environment (Snoezelen[R]) for Individuals with Intellectual Disability

    ERIC Educational Resources Information Center

    Lotan, Meir; Gold, Christian

    2009-01-01

    Background: The Snoezelen[R] is a multisensory intervention approach that has been implemented with various populations. Due to an almost complete absence of rigorous research in this field, the confirmation of this approach as an effective therapeutic intervention is warranted. Method: To evaluate the therapeutic influence of the…

  10. The Use of 'Snoezelen' as Multisensory Stimulation with People with Intellectual Disabilities: A Review of the Research.

    ERIC Educational Resources Information Center

    Hogg, James; Cavet, Judith; Lambe, Loretto; Smeddle, Mary

    2001-01-01

    A research review on the use of Snoezelen (multisensory training) with people with mental retardation demonstrates a wide range of positive outcomes, though there is little evidence of generalization even to the immediate post-Snoezelen environment. The issue of staff attitudes and the place of Snoezelen in facilitating positive interactions is…

  11. Look Closer: The Alertness of People with Profound Intellectual and Multiple Disabilities during Multi-Sensory Storytelling, a Time Sequential Analysis

    ERIC Educational Resources Information Center

    Ten Brug, Annet; Munde, Vera S.; van der Putten, Annette A.J.; Vlaskamp, Carla

    2015-01-01

    Introduction: Multi-sensory storytelling (MSST) is a storytelling method designed for individuals with profound intellectual and multiple disabilities (PIMD). It is essential that listeners be alert during MSST, so that they become familiar with their personalised stories. Repetition and the presentation of stimuli are likely to affect the…

  12. Benefits of Multisensory Structured Language Instruction for At-Risk Foreign Language Learners: A Comparison Study of High School Spanish Students.

    ERIC Educational Resources Information Center

    Sparks, Richard L.; Artzer, Marjorie; Patton, Jon; Ganschow, Leonore; Miller, Karen; Hordubay, Dorothy J.; Walsh, Geri

    1998-01-01

    A study examined the benefits of multisensory structured language (MSL) instruction in Spanish for 39 high school students at risk for foreign-language learning difficulties and 16 controls. On measures of oral and written foreign-language proficiency, the MSL and control groups scored significantly higher than those instructed using traditional…

  13. Effects of Multisensory Speech Training and Visual Phonics on Speech Production of a Hearing-Impaired Child.

    ERIC Educational Resources Information Center

    Zaccagnini, Cindy M.; Antia, Shirin D.

    1993-01-01

    This study of the effects of intensive multisensory speech training on the speech production of a profoundly hearing-impaired child (age nine) found that the addition of Visual Phonics hand cues did not result in speech production gains. All six target phonemes were generalized to new words and maintained after the intervention was discontinued.…

  14. Perceptuo-motor compatibility governs multisensory integration in bimanual coordination dynamics.

    PubMed

    Zelic, Gregory; Mottet, Denis; Lagarde, Julien

    2016-02-01

    The brain has the remarkable ability to bind inputs of different sensory origin into a coherent percept. Behavioral benefits can result from this ability: a person typically responds faster and more accurately to cross-modal stimuli than to unimodal stimuli. To date, however, it is largely unknown whether such multisensory benefits, shown for discrete reactive behaviors, generalize to the continuous coordination of movements. The present study addressed multisensory integration from the perspective of bimanual coordination dynamics, where perceptual activity no longer triggers a single response but continuously guides the motor action. The task consisted of coordinating the continuous flexion-extension of the index fingers in an anti-symmetric pattern while synchronizing with an external pacer. Three different metronome configurations were tested, for which we examined whether cross-modal pacing (audio-tactile beats) improved the stability of the coordination in comparison with unimodal pacing (auditory or tactile beats). We found more stable bimanual coordination for cross-modal pacing, but only when the metronome configuration directly matched the anti-symmetric coordination pattern. We conclude that multisensory integration can benefit the continuous coordination of movements; however, this benefit is constrained by whether the perceptual and motor activities match in space and time.

  15. Interactive Sonification Exploring Emergent Behavior Applying Models for Biological Information and Listening

    PubMed Central

    Choi, Insook

    2018-01-01

    Sonification is an open-ended design task to construct sound informing a listener of data. Understanding application context is critical for shaping design requirements for data translation into sound. Sonification requires methodology to maintain reproducibility when data sources exhibit non-linear properties of self-organization and emergent behavior. This research formalizes interactive sonification in an extensible model to support reproducibility when data exhibits emergent behavior. In the absence of sonification theory, extensibility demonstrates relevant methods across case studies. The interactive sonification framework foregrounds three factors: reproducible system implementation for generating sonification; interactive mechanisms enhancing a listener's multisensory observations; and reproducible data from models that characterize emergent behavior. Supramodal attention research suggests interactive exploration with auditory feedback can generate context for recognizing irregular patterns and transient dynamics. The sonification framework provides circular causality as a signal pathway for modeling a listener interacting with emergent behavior. The extensible sonification model adopts a data acquisition pathway to formalize functional symmetry across three subsystems: Experimental Data Source, Sound Generation, and Guided Exploration. To differentiate time criticality and dimensionality of emerging dynamics, tuning functions are applied between subsystems to maintain scale and symmetry of concurrent processes and temporal dynamics. Tuning functions accommodate sonification design strategies that yield order parameter values to render emerging patterns discoverable as well as rehearsable, to reproduce desired instances for clinical listeners. Case studies are implemented with two computational models, Chua's circuit and Swarm Chemistry social agent simulation, generating data in real-time that exhibits emergent behavior. Heuristic Listening is introduced as an informal model of a listener's clinical attention to data sonification through multisensory interaction in a context of structured inquiry. Three methods are introduced to assess the proposed sonification framework: Listening Scenario classification, data flow Attunement, and Sonification Design Patterns to classify sound control. Case study implementations are assessed against these methods comparing levels of abstraction between experimental data and sound generation. Outcomes demonstrate the framework performance as a reference model for representing experimental implementations, also for identifying common sonification structures having different experimental implementations, identifying common functions implemented in different subsystems, and comparing impact of affordances across multiple implementations of listening scenarios. PMID:29755311
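
    As a toy illustration of one of the case-study data sources, the following Python sketch integrates the standard dimensionless form of Chua's circuit with a simple Euler scheme and applies a tuning-function-like linear mapping of one state variable onto a pitch range; the parameters, step size and pitch mapping are assumptions, and a real sonification pipeline would render audio rather than print values.

        # Minimal sketch, assuming the standard dimensionless form of Chua's
        # circuit, of a data-to-sound mapping: integrate the ODEs and map the
        # x variable onto a pitch trajectory (printed here instead of rendered).
        import numpy as np

        def chua_deriv(state, alpha=15.6, beta=28.0, m0=-1.143, m1=-0.714):
            x, y, z = state
            fx = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))
            return np.array([alpha * (y - x - fx), x - y + z, -beta * y])

        # Fixed-step Euler integration; a real implementation would likely use
        # scipy.integrate.solve_ivp for accuracy.
        dt, n_steps = 1e-3, 60000
        state = np.array([0.7, 0.0, 0.0])
        xs = np.empty(n_steps)
        for i in range(n_steps):
            state = state + dt * chua_deriv(state)
            xs[i] = state[0]

        # Tuning-function stand-in: linearly map x onto a MIDI-like pitch range.
        lo, hi = xs.min(), xs.max()
        pitch = 48 + 24 * (xs - lo) / (hi - lo)
        print("pitch trajectory (first 5 values):", pitch[:5].round(1))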

  16. "Multisensory brand search: How the meaning of sounds guides consumers' visual attention": Correction to Knoeferle et al. (2016).

    PubMed

    2017-03-01

    Reports an error in "Multisensory brand search: How the meaning of sounds guides consumers' visual attention" by Klemens M. Knoeferle, Pia Knoeferle, Carlos Velasco and Charles Spence (Journal of Experimental Psychology: Applied, 2016[Jun], Vol 22[2], 196-210). In the article, under Experiment 2, Design and Stimuli, the set number of target products and visual distractors reported in the second paragraph should be 20 and 13, respectively: "On each trial, the 16 products shown in the display were randomly selected from a set of 20 products belonging to different categories. Out of the set of 20 products, seven were potential targets, whereas the other 13 were used as visual distractors only throughout the experiment (since they were not linked to specific usage or consumption sounds)." Consequently, Appendix A in the supplemental materials has been updated. (The following abstract of the original article appeared in record 2016-28876-002.) Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Design and test of a biosensor-based multisensorial system: a proof of concept study.

    PubMed

    Santonico, Marco; Pennazza, Giorgio; Grasso, Simone; D'Amico, Arnaldo; Bizzarri, Mariano

    2013-12-04

    Sensors are often organized in multidimensional systems or networks for particular applications. This is facilitated by large improvements in miniaturization, reductions in power consumption and the data analysis techniques now available. Such sensors are frequently organized in multidimensional arrays oriented to the realization of artificial sensorial systems mimicking the mechanisms of human senses. Instruments that make use of these sensors are frequently employed in the fields of medicine and food science. Among them, the so-called electronic nose and tongue are becoming more and more popular. In this paper an innovative multisensorial system based on sensing materials of biological origin is illustrated. Anthocyanins are exploited here as chemical interactive materials both for quartz microbalance (QMB) transducers used as gas sensors and for electrodes used as liquid electrochemical sensors. The optical properties of anthocyanins are well established and widely used, but they have never been exploited as sensing materials for both gas and liquid sensors in non-optical applications. By using the same set of selected anthocyanins an integrated system has been realised, which includes a gas sensor array based on QMBs and a sensor array for liquids made up of suitable Ion Sensitive Electrodes (ISEs). The arrays are also monitored from an optical point of view. This embedded system is intended to mimic the working principles of the nose, tongue and eyes. We call this setup BIONOTE (for BIOsensor-based multisensorial system for mimicking NOse, Tongue and Eyes). The complete design, fabrication and calibration processes of the BIONOTE system are described herein, and a number of preliminary results are discussed. These results concern: (a) the characterization of the optical properties of the tested materials; (b) the performance of the whole system as a gas sensor array for ethanol, hexane and isopropyl alcohol detection (concentration range 0.1-7 ppm) and as a liquid sensor array (concentration range 73-98 μM).

  18. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory.

    PubMed

    Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E

    2010-05-01

    The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
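
    The core of the optimal cue integration model reviewed here can be stated in a few lines: each cue is weighted in proportion to its reliability (inverse variance), and the combined estimate is more reliable than either cue alone. The following Python sketch illustrates this rule with made-up example numbers.

        # A minimal sketch of the optimal (maximum-likelihood) cue-integration
        # rule reviewed above: each cue is weighted by its inverse variance, and
        # the combined estimate has lower variance than either cue alone.
        def integrate_cues(est_vis, var_vis, est_vest, var_vest):
            """Reliability-weighted combination of two heading estimates (deg)."""
            w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
            w_vest = 1.0 - w_vis
            est = w_vis * est_vis + w_vest * est_vest
            var = (var_vis * var_vest) / (var_vis + var_vest)
            return est, var

        # Example: a reliable visual cue dominates a noisier vestibular cue.
        est, var = integrate_cues(est_vis=10.0, var_vis=4.0,
                                  est_vest=0.0, var_vest=16.0)
        print(f"combined heading: {est:.1f} deg, variance: {var:.1f}")
        # -> combined heading: 8.0 deg, variance: 3.2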

  19. The Effects of a Summer Reading Program Using Simultaneous Multisensory Instruction of Language Arts on Reading Proficiency

    ERIC Educational Resources Information Center

    Magpuri-Lavell, Theresa; Paige, David; Williams, Rosemary; Akins, Kristia; Cameron, Molly

    2014-01-01

    The present study examined the impact of the Simultaneous Multisensory Institute for Language Arts (SMILA) approach on the reading proficiency of 39 students between the ages of 7-11 participating in a summer reading program. The summer reading clinic draws students from the surrounding community which is located in a large urban district in the…

  20. Multi-Sensory Rooms: Comparing Effects of the Snoezelen and the Stimulus Preference Environment on the Behavior of Adults with Profound Mental Retardation

    ERIC Educational Resources Information Center

    Fava, Leonardo; Strauss, Kristin

    2010-01-01

    The present study examined whether Snoezelen and Stimulus Preference environments have differential effects on disruptive and pro-social behaviors in adults with profound mental retardation and autism. In N = 27 adults these target behaviors were recorded for a total of 20 sessions using both multi-sensory rooms. Three comparison groups were…

  1. An Evaluation of an Intervention Using Sign Language and Multi-Sensory Coding to Support Word Learning and Reading Comprehension of Deaf Signing Children

    ERIC Educational Resources Information Center

    van Staden, Annalene

    2013-01-01

    The reading skills of many deaf children lag several years behind those of hearing children, and there is a need for identifying reading difficulties and implementing effective reading support strategies in this population. This study embraces a balanced reading approach, and investigates the efficacy of applying multi-sensory coding strategies…

  2. Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.

    PubMed

    Vercillo, Tiziana; Gori, Monica

    2015-01-01

    The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during the integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli, conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis, were presented sequentially. In the primary task, a space bisection task, participants had to evaluate the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task they had to report occasional changes in the duration of the second auditory stimulus. In the non-attentional condition participants performed only the primary task (space bisection). Our results showed enhanced auditory precision (and auditory weights) in the auditory attentional condition with respect to the non-attentional control condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.

  3. A simple approach to ignoring irrelevant variables by population decoding based on multisensory neurons

    PubMed Central

    Kim, HyungGoo R.; Pitkow, Xaq; Angelaki, Dora E.

    2016-01-01

    Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs. PMID:27334948
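
    The marginalization idea can be illustrated with a toy cosine-tuning model: if congruent cells sum matched visual and vestibular tuning while opposite cells carry a visual component of opposite sign, a simple linear combination of the two populations cancels the visual term introduced by object motion. The Python sketch below is a deliberately simplified caricature of the decoding scheme, not the authors' model; the tuning curves and the population-vector readout are assumptions.

        # Toy sketch of the marginalization idea above: congruent cells sum
        # matched visual and vestibular tuning, opposite cells carry mismatched
        # tuning, and a simple linear readout over both can cancel the visual
        # component introduced by object motion.
        import numpy as np

        prefs = np.linspace(0, 2 * np.pi, 64, endpoint=False)  # preferred headings

        def population_response(heading_vest, heading_vis):
            congruent = np.cos(prefs - heading_vest) + np.cos(prefs - heading_vis)
            opposite = (np.cos(prefs - heading_vest)
                        + np.cos(prefs + np.pi - heading_vis))
            return congruent, opposite

        def decode(activity):
            """Population-vector readout of heading from cosine-tuned activity."""
            return np.angle(np.sum(activity * np.exp(1j * prefs)))

        true_heading = 0.3                 # rad, shared self-motion signal
        object_shift = 0.8                 # rad, visual bias from object motion
        cong, opp = population_response(true_heading, true_heading + object_shift)

        # Summing the two cell types cancels the visual term, because the
        # opposite cells contribute -cos(prefs - heading_vis).
        vestibular_only = cong + opp
        print("decoded from congruent cells only:", round(decode(cong), 2))
        print("decoded from congruent + opposite:", round(decode(vestibular_only), 2))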

  4. Neural Responses in Parietal and Occipital Areas in Response to Visual Events Are Modulated by Prior Multisensory Stimuli

    PubMed Central

    Innes-Brown, Hamish; Barutchu, Ayla; Crewther, David P.

    2013-01-01

    The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual ‘flash-beep’ illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus, compared to when it was preceded by another uni-modal flash stimulus. This difference was found to be significant in two distinct timeframes – an early timeframe, from 130–160 ms, and a late timeframe, from 300–320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event. Relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus. PMID:24391939

  5. Age-related audiovisual interactions in the superior colliculus of the rat.

    PubMed

    Costa, M; Piché, M; Lepore, F; Guillemot, J-P

    2016-04-21

    It is well established that multisensory integration is a functional characteristic of the superior colliculus that disambiguates external stimuli and therefore reduces the reaction times toward simple audiovisual targets in space. However, in a condition where a complex audiovisual stimulus is used, such as the optical flow in the presence of modulated audio signals, little is known about the processing of the multisensory integration in the superior colliculus. Furthermore, since visual and auditory deficits constitute hallmark signs during aging, we sought to gain some insight on whether audiovisual processes in the superior colliculus are altered with age. Extracellular single-unit recordings were conducted in the superior colliculus of anesthetized Sprague-Dawley adult (10-12 months) and aged (21-22 months) rats. Looming circular concentric sinusoidal (CCS) gratings were presented alone and in the presence of sinusoidally amplitude modulated white noise. In both groups of rats, two different audiovisual response interactions were encountered in the spatial domain: superadditive, and suppressive. In contrast, additive audiovisual interactions were found only in adult rats. Hence, superior colliculus audiovisual interactions were more numerous in adult rats (38%) than in aged rats (8%). These results suggest that intersensory interactions in the superior colliculus play an essential role in space processing toward audiovisual moving objects during self-motion. Moreover, aging has a deleterious effect on complex audiovisual interactions. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
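
    A minimal sketch, assuming the interaction metrics conventionally used in superior colliculus work, shows how such responses can be classified; the firing rates, tolerance band and labels below are illustrative assumptions, not values from the study.

        # Sketch of the conventional interaction metrics: enhancement relative
        # to the best unimodal response, and additivity relative to the
        # unimodal sum, classifying responses as superadditive, additive,
        # sub-additive, or suppressive.
        def classify_interaction(audio, visual, audiovisual, tol=0.05):
            """Classify a multisensory response (spikes/s); tol sets the band
            around the unimodal sum treated as 'additive'."""
            best_unimodal = max(audio, visual)
            enhancement = 100.0 * (audiovisual - best_unimodal) / best_unimodal
            unimodal_sum = audio + visual
            if audiovisual < best_unimodal:
                kind = "suppressive"
            elif audiovisual > unimodal_sum * (1 + tol):
                kind = "superadditive"
            elif audiovisual < unimodal_sum * (1 - tol):
                kind = "sub-additive"
            else:
                kind = "additive"
            return enhancement, kind

        print(classify_interaction(audio=10.0, visual=12.0, audiovisual=30.0))
        # -> (150.0, 'superadditive')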

  6. Multisensory speech perception in autism spectrum disorder: From phoneme to whole-word perception.

    PubMed

    Stevenson, Ryan A; Baum, Sarah H; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Wallace, Mark T

    2017-07-01

    Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically-developing (TD) children to integrate auditory and visual speech stimuli in various signal-to-noise ratios (SNR). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration with whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in auditory, visual, and audiovisual modalities. However, and in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate to typical multisensory benefit of whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition. Autism Res 2017, 10: 1280-1290. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  7. Temporal ventriloquism along the path of apparent motion: speed perception under different spatial grouping principles.

    PubMed

    Ogulmus, Cansu; Karacaoglu, Merve; Kafaligonul, Hulusi

    2018-03-01

    The coordination of intramodal perceptual grouping and crossmodal interactions plays a critical role in constructing coherent multisensory percepts. However, the basic principles underlying such coordinating mechanisms still remain unclear. By taking advantage of an illusion called temporal ventriloquism and its influences on perceived speed, we investigated how audiovisual interactions in time are modulated by the spatial grouping principles of vision. In our experiments, we manipulated the spatial grouping principles of proximity, uniform connectedness, and similarity/common fate in apparent motion displays. Observers compared the speed of apparent motions across different sound timing conditions. Our results revealed that the effects of sound timing (i.e., temporal ventriloquism effects) on perceived speed also existed in visual displays containing more than one object and were modulated by different spatial grouping principles. In particular, uniform connectedness was found to modulate these audiovisual interactions in time. The effect of sound timing on perceived speed was smaller when horizontal connecting bars were introduced along the path of apparent motion. When the objects in each apparent motion frame were not connected or connected with vertical bars, the sound timing was more influential compared to the horizontal bar conditions. Overall, our findings here suggest that the effects of sound timing on perceived speed exist in different spatial configurations and can be modulated by certain intramodal spatial grouping principles such as uniform connectedness.

  8. A three-finger multisensory hand for dexterous space robotic tasks

    NASA Technical Reports Server (NTRS)

    Murase, Yuichi; Komada, Satoru; Uchiyama, Takashi; Machida, Kazuo; Akita, Kenzo

    1994-01-01

    The National Space Development Agency of Japan will launch ETS-7 in 1997 as a test bed for next-generation space technologies: rendezvous and docking (RV&D) and space robotics. MITI has been developing a three-finger multisensory hand for complex space robotic tasks. The hand can be operated under remote control or autonomously. This paper describes the design and development of the hand and the performance of a breadboard model.

  9. The Effects of Multisensory Structured Language Instruction on Native Language and Foreign Language Aptitude Skills of At-Risk High School Foreign Language Learners.

    ERIC Educational Resources Information Center

    Sparks, Richard; And Others

    1992-01-01

    A multisensory structured language (MSL) approach was utilized with two groups of at-risk high school students (n=63), taught in either English and Spanish (MSL/ES) or Spanish only. Foreign language aptitude improved for both groups and native language skills for the MSL/ES group. A group receiving traditional foreign language instruction showed…

  10. Multimodal sensorimotor system in unicellular zoospores of a fungus.

    PubMed

    Swafford, Andrew J M; Oakley, Todd H

    2018-01-19

    Complex sensory systems often underlie critical behaviors, including avoiding predators and locating prey, mates and shelter. Multisensory systems that control motor behavior even appear in unicellular eukaryotes, such as Chlamydomonas, which are important laboratory models for sensory biology. However, we know of no unicellular opisthokonts that control motor behavior using a multimodal sensory system. Therefore, existing single-celled models for multimodal sensorimotor integration are very distantly related to animals. Here, we describe a multisensory system that controls the motor function of unicellular fungal zoospores. We found that zoospores of Allomyces arbusculus exhibit both phototaxis and chemotaxis. Furthermore, we report that closely related Allomyces species respond to either the chemical or the light stimuli presented in this study, not both, and likely do not share this multisensory system. This diversity of sensory systems within Allomyces provides a rare example of a comparative framework that can be used to examine the evolution of sensory systems following the gain/loss of available sensory modalities. The tractability of Allomyces and related fungi as laboratory organisms will facilitate detailed mechanistic investigations into the genetic underpinnings of novel photosensory systems, and how multisensory systems may have functioned in early opisthokonts before multicellularity allowed for the evolution of specialized cell types. © 2018. Published by The Company of Biologists Ltd.

  11. Mapping multisensory parietal face and body areas in humans.

    PubMed

    Huang, Ruey-Song; Chen, Ching-fu; Tran, Alyssa T; Holstein, Katie L; Sereno, Martin I

    2012-10-30

    Detection and avoidance of impending obstacles is crucial to preventing head and body injuries in daily life. To safely avoid obstacles, locations of objects approaching the body surface are usually detected via the visual system and then used by the motor system to guide defensive movements. Mediating between visual input and motor output, the posterior parietal cortex plays an important role in integrating multisensory information in peripersonal space. We used functional MRI to map parietal areas that see and feel multisensory stimuli near or on the face and body. Tactile experiments using full-body air-puff stimulation suits revealed somatotopic areas of the face and multiple body parts forming a higher-level homunculus in the superior posterior parietal cortex. Visual experiments using wide-field looming stimuli revealed retinotopic maps that overlap with the parietal face and body areas in the postcentral sulcus at the most anterior border of the dorsal visual pathway. Starting at the parietal face area and moving medially and posteriorly into the lower-body areas, the median of visual polar-angle representations in these somatotopic areas gradually shifts from near the horizontal meridian into the lower visual field. These results suggest the parietal face and body areas fuse multisensory information in peripersonal space to guard an individual from head to toe.

  12. The effects of a multisensory dynamic balance training on the thickness of lower limb muscles in ultrasonography in children with spastic diplegic cerebral palsy.

    PubMed

    Nam, Seung-Min; Kim, Won-Hyo; Yun, Chang-Kyo

    2017-04-01

    [Purpose] This study aimed to investigate the effects of multisensory dynamic balance training on the thickness of lower limb muscles (rectus femoris, tibialis anterior, medial gastrocnemius and lateral gastrocnemius) in children with spastic diplegic cerebral palsy, using ultrasonography. [Subjects and Methods] Fifteen children diagnosed with spastic diplegic cerebral palsy were randomly divided into a balance training group and a control group. The experimental group received only multisensory dynamic balance training, while the control group performed general physiotherapy consisting of balance and muscle-strengthening exercises based on neurodevelopmental treatment. Both groups had therapy sessions of 30 minutes per day, three times a week, for six weeks. Ultrasonographic measurements of muscle thickness were obtained before and after training in order to compare the two groups. [Results] The experimental group had significant increases in muscle thickness in the rectus femoris, tibialis anterior, medial gastrocnemius and lateral gastrocnemius muscles. The control group had a significant increase in muscle thickness in the tibialis anterior only. Between-group comparisons of the rectus femoris, medial gastrocnemius and lateral gastrocnemius thickness values showed significant differences. [Conclusion] Multisensory dynamic balance training can be recommended as a treatment method for patients with spastic diplegic cerebral palsy.

  13. Perceived object stability depends on multisensory estimates of gravity.

    PubMed

    Barnett-Cowan, Michael; Fleming, Roland W; Singh, Manish; Bülthoff, Heinrich H

    2011-04-27

    How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information. In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity). Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.
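
    The physical rule behind this task is simple: a uniform block tips once the line from the pivot edge to its centre of mass aligns with gravity. The Python sketch below computes this geometric critical angle and shows how a bias in perceived gravity (as induced by lying on one's side) would shift it; the block dimensions and bias value are illustrative assumptions, not stimuli from the study.

        # Geometric sketch of the stability rule above: a block tips once the
        # gravity-projected centre of mass passes the edge of its support. For
        # a uniform block the critical tilt about a base edge follows from the
        # COM geometry; a bias in perceived gravity shifts that angle.
        import math

        def critical_angle_deg(width, height, gravity_bias_deg=0.0):
            """Tilt (deg) at which a uniform block's COM passes the pivot edge.

            The COM sits at (width/2, height/2); tipping occurs when the line
            from the pivot edge to the COM aligns with (perceived) gravity.
            """
            geometric = math.degrees(math.atan2(width / 2, height / 2))
            return geometric - gravity_bias_deg

        print(round(critical_angle_deg(width=0.2, height=0.4), 1))          # 26.6
        print(round(critical_angle_deg(0.2, 0.4, gravity_bias_deg=5.0), 1))  # 21.6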

  14. Is Attentional Resource Allocation Across Sensory Modalities Task-Dependent?

    PubMed

    Wahn, Basil; König, Peter

    2017-01-01

    Human information processing is limited by attentional resources. That is, via attentional mechanisms, humans select a limited amount of sensory input to process while other sensory input is neglected. In multisensory research, a matter of ongoing debate is whether there are distinct pools of attentional resources for each sensory modality or whether attentional resources are shared across sensory modalities. Recent studies have suggested that attentional resource allocation across sensory modalities is in part task-dependent. That is, the recruitment of attentional resources across the sensory modalities depends on whether processing involves object-based attention (e.g., the discrimination of stimulus attributes) or spatial attention (e.g., the localization of stimuli). In the present paper, we review findings in multisensory research related to this view. For the visual and auditory sensory modalities, findings suggest that distinct resources are recruited when humans perform object-based attention tasks, whereas for the visual and tactile sensory modalities, partially shared resources are recruited. If object-based attention tasks are time-critical, shared resources are recruited across the sensory modalities. When humans perform an object-based attention task in combination with a spatial attention task, partly shared resources are recruited across the sensory modalities as well. Conversely, for spatial attention tasks, attentional processing consistently involves shared attentional resources across the sensory modalities. Generally, findings suggest that the attentional system flexibly allocates attentional resources depending on task demands. We propose that such flexibility reflects a large-scale optimization strategy that minimizes the brain's costly resource expenditures while maximizing the capability to process currently relevant information.

  15. Fronto-Parietal Brain Responses to Visuotactile Congruence in an Anatomical Reference Frame

    PubMed Central

    Limanowski, Jakub; Blankenburg, Felix

    2018-01-01

    Spatially and temporally congruent visuotactile stimulation of a fake hand together with one’s real hand may result in an illusory self-attribution of the fake hand. Although this illusion relies on a representation of the two touched body parts in external space, there is tentative evidence that, for the illusion to occur, the seen and felt touches also need to be congruent in an anatomical reference frame. We used functional magnetic resonance imaging and a somatotopical, virtual reality-based setup to isolate the neuronal basis of such a comparison. Participants’ index or little finger was synchronously touched with the index or little finger of a virtual hand, under congruent or incongruent orientations of the real and virtual hands. The left ventral premotor cortex responded significantly more strongly to visuotactile co-stimulation of the same versus different fingers of the virtual and real hand. Conversely, the left anterior intraparietal sulcus responded significantly more strongly to co-stimulation of different versus same fingers. Both responses were independent of hand orientation congruence and of spatial congruence of the visuotactile stimuli. Our results suggest that fronto-parietal areas previously associated with multisensory processing within peripersonal space and with tactile remapping evaluate the congruence of visuotactile stimulation on the body according to an anatomical reference frame. PMID:29556183

  16. Fronto-Parietal Brain Responses to Visuotactile Congruence in an Anatomical Reference Frame.

    PubMed

    Limanowski, Jakub; Blankenburg, Felix

    2018-01-01

    Spatially and temporally congruent visuotactile stimulation of a fake hand together with one's real hand may result in an illusory self-attribution of the fake hand. Although this illusion relies on a representation of the two touched body parts in external space, there is tentative evidence that, for the illusion to occur, the seen and felt touches also need to be congruent in an anatomical reference frame. We used functional magnetic resonance imaging and a somatotopical, virtual reality-based setup to isolate the neuronal basis of such a comparison. Participants' index or little finger was synchronously touched with the index or little finger of a virtual hand, under congruent or incongruent orientations of the real and virtual hands. The left ventral premotor cortex responded significantly more strongly to visuotactile co-stimulation of the same versus different fingers of the virtual and real hand. Conversely, the left anterior intraparietal sulcus responded significantly more strongly to co-stimulation of different versus same fingers. Both responses were independent of hand orientation congruence and of spatial congruence of the visuotactile stimuli. Our results suggest that fronto-parietal areas previously associated with multisensory processing within peripersonal space and with tactile remapping evaluate the congruence of visuotactile stimulation on the body according to an anatomical reference frame.

  17. Memorable Audiovisual Narratives Synchronize Sensory and Supramodal Neural Responses

    PubMed Central

    2016-01-01

    Abstract Our brains integrate information across sensory modalities to generate perceptual experiences and form memories. However, it is difficult to determine the conditions under which multisensory stimulation will benefit or hinder the retrieval of everyday experiences. We hypothesized that the determining factor is the reliability of information processing during stimulus presentation, which can be measured through intersubject correlation of stimulus-evoked activity. We therefore presented biographical auditory narratives and visual animations to 72 human subjects visually, auditorily, or combined, while neural activity was recorded using electroencephalography. Memory for the narrated information, contained in the auditory stream, was tested 3 weeks later. While the visual stimulus alone led to no meaningful retrieval, this related stimulus improved memory when it was combined with the story, even when it was temporally incongruent with the audio. Further, individuals with better subsequent memory elicited neural responses during encoding that were more correlated with their peers. Surprisingly, portions of this predictive synchronized activity were present regardless of the sensory modality of the stimulus. These data suggest that the strength of sensory and supramodal activity is predictive of memory performance after 3 weeks, and that neural synchrony may explain the mnemonic benefit of the functionally uninformative visual context observed for these real-world stimuli. PMID:27844062

  18. Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation

    PubMed Central

    Oliva, Aude

    2017-01-01

    Abstract Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630
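
    Time-resolved decoding of this kind is typically implemented by fitting a classifier independently at each time point of the evoked response. The following Python sketch, using scikit-learn on random placeholder epochs, illustrates the general procedure; the array shapes, classifier choice and onset threshold are assumptions rather than the study's actual analysis.

        # Hypothetical sketch of time-resolved decoding as used in such MEG
        # studies: fit a classifier independently at every time point and trace
        # when a stimulus property (e.g., sound source) becomes decodable.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        n_trials, n_sensors, n_times = 80, 100, 60   # assumed epoch layout
        epochs = rng.normal(size=(n_trials, n_sensors, n_times))
        labels = rng.integers(0, 2, size=n_trials)   # e.g., two sound sources

        accuracy = np.empty(n_times)
        for t in range(n_times):
            clf = LogisticRegression(max_iter=1000)
            accuracy[t] = cross_val_score(clf, epochs[:, :, t], labels, cv=5).mean()

        # Onset of decodability: first time point exceeding a chance threshold.
        above = np.nonzero(accuracy > 0.6)[0]
        print("decoding onset index:", above[0] if above.size else "none")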

  19. Multisensory Technology for Flavor Augmentation: A Mini Review.

    PubMed

    Velasco, Carlos; Obrist, Marianna; Petit, Olivia; Spence, Charles

    2018-01-01

    There is growing interest in the development of new technologies that capitalize on our emerging understanding of the multisensory influences on flavor perception in order to enhance human-food interaction design. This review focuses on the role of (extrinsic) visual, auditory, and haptic/tactile elements in modulating flavor perception and more generally, our food and drink experiences. We review some of the most exciting examples of recent multisensory technologies for augmenting such experiences. Here, we discuss applications for these technologies, for example, in the field of food experience design, in the support of healthy eating, and in the rapidly growing world of sensory marketing. However, as the review makes clear, while there are many opportunities for novel human-food interaction design, there are also a number of challenges that will need to be tackled before new technologies can be meaningfully integrated into our everyday food and drink experiences.

  20. Multisensory Technology for Flavor Augmentation: A Mini Review

    PubMed Central

    Velasco, Carlos; Obrist, Marianna; Petit, Olivia; Spence, Charles

    2018-01-01

    There is growing interest in the development of new technologies that capitalize on our emerging understanding of the multisensory influences on flavor perception in order to enhance human–food interaction design. This review focuses on the role of (extrinsic) visual, auditory, and haptic/tactile elements in modulating flavor perception and more generally, our food and drink experiences. We review some of the most exciting examples of recent multisensory technologies for augmenting such experiences. Here, we discuss applications for these technologies, for example, in the field of food experience design, in the support of healthy eating, and in the rapidly growing world of sensory marketing. However, as the review makes clear, while there are many opportunities for novel human–food interaction design, there are also a number of challenges that will need to be tackled before new technologies can be meaningfully integrated into our everyday food and drink experiences. PMID:29441030

  1. A matter of attention: Crossmodal congruence enhances and impairs performance in a novel trimodal matching paradigm.

    PubMed

    Misselhorn, Jonas; Daume, Jonathan; Engel, Andreas K; Friese, Uwe

    2016-07-29

    A novel crossmodal matching paradigm including vision, audition, and somatosensation was developed in order to investigate the interaction between attention and crossmodal congruence in multisensory integration. To that end, all three modalities were stimulated concurrently while a bimodal focus was defined blockwise. Congruence between stimulus intensity changes in the attended modalities had to be evaluated. We found that crossmodal congruence improved performance if both, the attended modalities and the task-irrelevant distractor were congruent. If the attended modalities were incongruent, the distractor impaired performance due to its congruence relation to one of the attended modalities. Between attentional conditions, magnitudes of crossmodal enhancement or impairment differed. Largest crossmodal effects were seen in visual-tactile matching, intermediate effects for audio-visual and smallest effects for audio-tactile matching. We conclude that differences in crossmodal matching likely reflect characteristics of multisensory neural network architecture. We discuss our results with respect to the timing of perceptual processing and state hypotheses for future physiological studies. Finally, etiological questions are addressed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man

    PubMed Central

    Bremen, Peter; Massoudi, Rooholla; Van Wanrooij, Marc M.; Van Opstal, A. J.

    2017-01-01

    The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain. PMID:29238295

  3. Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man.

    PubMed

    Bremen, Peter; Massoudi, Rooholla; Van Wanrooij, Marc M; Van Opstal, A J

    2017-01-01

    The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain.

  4. The multi-sensory approach as a geoeducational strategy

    NASA Astrophysics Data System (ADS)

    Musacchio, Gemma; Piangiamore, Giovanna Lucia; Pino, Nicola Alessandro

    2014-05-01

Geoscience knowledge has a strong impact on modern society as it relates to natural hazards, sustainability, and environmental issues. The general public has a demanding attitude towards the understanding of crucial geo-scientific topics that is only partly satisfied by science communication strategies and/or by outreach or school programs. A proper knowledge of the phenomena might help trigger crucial inquiries when approaching the mitigation of geo-hazards and the management of geo-resources, while providing the right tools for making sense of news and ideas circulating on the web and other media; in other words, it can help communication become more efficient. Nonetheless, available educational resources seem inadequate to meet this goal, and research institutions face the challenge of devising new communication strategies and non-conventional ways of learning that make crucial scientific content understandable. We suggest the multi-sensory approach as a successful non-conventional way of learning for children and as a different perspective on learning for older students and adults. Stimulation of the sense organs is perceived and processed to build knowledge of the surroundings, including all sorts of hazards. By relying so heavily on the sense of sight, humans have somehow lost much of their capacity for a deep perception of the environment enriched by all the other senses. Since hazards involve emotions, we argue that new ways of learning might work exactly through the emotions that a tactile experience or a hearing or smell stimulation can elicit. To test and support our idea, we are building a package of learning activities and exhibits based on a multi-sensory experience in which sight is not allowed.

  5. Multisensory Rehabilitation Training Improves Spatial Perception in Totally but Not Partially Visually Deprived Children

    PubMed Central

    Cappagli, Giulia; Finocchietti, Sara; Baud-Bovy, Gabriel; Cocchi, Elena; Gori, Monica

    2017-01-01

Since it has been shown that spatial development can be delayed in blind children, focused sensorimotor trainings that associate auditory and motor information might be used to prevent the risk of spatial-related developmental delays or impairments from an early age. With this aim, we proposed a new technological device based on the implicit link between action and perception: ABBI (Audio Bracelet for Blind Interaction) is an audio bracelet that produces a sound when a movement occurs, allowing the substitution of the visuo-motor association with a new audio-motor association. In this study, we assessed the effects of an extensive but entertaining sensorimotor training with ABBI on the development of spatial hearing in a group of seven children aged 3-5 years with congenital blindness (n = 2; light perception or no perception of light) or low vision (n = 5; visual acuity range 1.1-1.7 LogMAR). The training required the participants to play several spatial games, individually and/or together with the psychomotor therapist, for 1 h per week over 3 months: the spatial games consisted of exercises meant to train their ability to associate auditory and motor-related signals from their body, in order to foster the development of multisensory processes. We measured spatial performance by asking participants to indicate the position of a single fixed (static condition) or moving (dynamic condition) sound source on a vertical sensorized surface. We found that the spatial performance of congenitally blind, but not low-vision, children improved after the training, indicating that early interventions with science-driven devices based on multisensory capabilities can provide consistent advancements in therapeutic interventions, improving the quality of life of children with visual disability. PMID:29097987

  6. Perceptual attraction in tool use: evidence for a reliability-based weighting mechanism.

    PubMed

    Debats, Nienke B; Ernst, Marc O; Heuer, Herbert

    2017-04-01

Humans are well able to operate tools whereby their hand movement is linked, via a kinematic transformation, to a spatially distant object moving in a separate plane of motion. An everyday example is controlling a cursor on a computer monitor. Despite these separate reference frames, the perceived positions of the hand and the object were found to be biased toward each other. We propose that this perceptual attraction is based on the principles by which the brain integrates redundant sensory information about single objects or events, known as optimal multisensory integration. That is, (1) sensory information about the hand and the tool is weighted according to its relative reliability (i.e., inverse variance), and (2) the unisensory reliabilities sum in the integrated estimate. We assessed whether perceptual attraction is consistent with the predictions of the optimal multisensory integration model. We used a cursor-control tool-use task in which we manipulated the relative reliability of the unisensory hand and cursor position estimates. The perceptual biases shifted according to these relative reliabilities, with an additional bias due to contextual factors that were present in experiment 1 but not in experiment 2. The variances of the biased position judgments were, however, systematically larger than the predicted optimal variances. Our findings suggest that the perceptual attraction in tool use results from a reliability-based weighting mechanism similar to optimal multisensory integration, but that certain boundary conditions for optimality might not be satisfied. NEW & NOTEWORTHY Kinematic tool use is associated with a perceptual attraction between the spatially separated hand and the effective part of the tool. We provide a formal account of this phenomenon, showing that the process behind it is similar to optimal integration of sensory information relating to single objects. Copyright © 2017 the American Physiological Society.
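
    The reliability-based weighting invoked here is the standard maximum-likelihood cue-combination rule: each unisensory estimate is weighted by its inverse variance, and the fused variance is the inverse of the summed reliabilities. A minimal Python sketch with illustrative numbers (not the study's data):

      import numpy as np

      def integrate_cues(means, variances):
          # Weight each unisensory estimate by its reliability (1/variance);
          # the fused variance is the inverse of the summed reliabilities.
          means = np.asarray(means, dtype=float)
          reliabilities = 1.0 / np.asarray(variances, dtype=float)
          weights = reliabilities / reliabilities.sum()
          return float(np.sum(weights * means)), float(1.0 / reliabilities.sum())

      # Hypothetical numbers: a noisy hand estimate (variance 4.0) and a sharp
      # cursor estimate (variance 1.0); the fused position is pulled toward
      # the more reliable cursor, as in the perceptual-attraction account.
      print(integrate_cues([10.0, 12.0], [4.0, 1.0]))  # approx (11.6, 0.8)

    In these terms, the paper's key deviation from optimality is that the observed variance of the biased judgments exceeds the fused variance this rule predicts.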

  7. Auditory and visual sequence learning in humans and monkeys using an artificial grammar learning paradigm.

    PubMed

    Milne, Alice E; Petkov, Christopher I; Wilson, Benjamin

    2017-07-05

Language flexibly supports the human ability to communicate using different sensory modalities, such as writing and reading in the visual modality and speaking and listening in the auditory domain. Although it has been argued that nonhuman primate communication abilities are inherently multisensory, direct behavioural comparisons between human and nonhuman primates are scant. Artificial grammar learning (AGL) tasks and statistical learning experiments can be used to emulate ordering relationships between words in a sentence. However, previous comparative work using such paradigms has primarily investigated sequence learning within a single sensory modality. We used an AGL paradigm to evaluate how humans and macaque monkeys learn and respond to identically structured sequences of either auditory or visual stimuli. In the auditory and visual experiments, we found that both species were sensitive to the ordering relationships between elements in the sequences. Moreover, the humans and monkeys produced largely similar response patterns to the visual and auditory sequences, indicating that the sequences are processed in comparable ways across the sensory modalities. These results provide evidence that human sequence processing abilities stem from an evolutionarily conserved capacity that appears to operate comparably across the sensory modalities in both human and nonhuman primates. The findings set the stage for future neurobiological studies to investigate the multisensory nature of these sequencing operations in nonhuman primates and how they compare to related processes in humans. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  8. Change of reference frame for tactile localization during child development.

    PubMed

    Pagel, Birthe; Heed, Tobias; Röder, Brigitte

    2009-11-01

Temporal order judgements (TOJ) for two tactile stimuli, one presented to the left and one to the right hand, are less precise when the hands are crossed over the midline than when the hands are uncrossed. This 'crossed hand' effect has been considered as evidence for a remapping of tactile input into an external reference frame. Since late, but not early, blind individuals show such remapping, it has been hypothesized that the use of an external reference frame develops during childhood. Five- to 10-year-old children were therefore tested with the tactile TOJ task, both with uncrossed and crossed hands. Overall performance in the TOJ task improved with age. While children older than 5 1/2 years displayed a crossed hand effect, younger children did not. Therefore, the use of an external reference frame for tactile, and possibly multisensory, localization seems to be acquired at around age 5 1/2.

  9. Experiencing the Sights, Smells, Sounds, and Climate of Southern Italy in VR.

    PubMed

    Manghisi, Vito M; Fiorentino, Michele; Gattullo, Michele; Boccaccio, Antonio; Bevilacqua, Vitoantonio; Cascella, Giuseppe L; Dassisti, Michele; Uva, Antonio E

    2017-01-01

    This article explores what it takes to make interactive computer graphics and VR attractive as a promotional vehicle, from the points of view of tourism agencies and the tourists themselves. The authors exploited current VR and human-machine interface (HMI) technologies to develop an interactive, innovative, and attractive user experience called the Multisensory Apulia Touristic Experience (MATE). The MATE system implements a natural gesture-based interface and multisensory stimuli, including visuals, audio, smells, and climate effects.

  10. Language Processing as Cue Integration: Grounding the Psychology of Language in Perception and Neurophysiology

    PubMed Central

    Martin, Andrea E.

    2016-01-01

    I argue that cue integration, a psychophysiological mechanism from vision and multisensory perception, offers a computational linking hypothesis between psycholinguistic theory and neurobiological models of language. I propose that this mechanism, which incorporates probabilistic estimates of a cue's reliability, might function in language processing from the perception of a phoneme to the comprehension of a phrase structure. I briefly consider the implications of the cue integration hypothesis for an integrated theory of language that includes acquisition, production, dialogue and bilingualism, while grounding the hypothesis in canonical neural computation. PMID:26909051

  11. Crossmodal Connections of Primary Sensory Cortices Largely Vanish During Normal Aging

    PubMed Central

    Henschke, Julia U.; Ohl, Frank W.; Budinger, Eike

    2018-01-01

During aging, human response times (RTs) to unisensory and crossmodal stimuli increase. However, the elderly benefit more from crossmodal stimulus representations than younger people. The underlying short-latency multisensory integration process is mediated by direct crossmodal connections at the level of primary sensory cortices. We investigate the age-related changes of these connections using a rodent model (Mongolian gerbil), retrograde tracer injections into the primary auditory (A1), somatosensory (S1), and visual cortex (V1), and immunohistochemistry for markers of apoptosis (Caspase-3), axonal plasticity (Growth associated protein 43, GAP 43), and a calcium-binding protein (Parvalbumin, PV). In adult animals, primary sensory cortices receive a substantial number of direct thalamic inputs from nuclei of their matched, but also from nuclei of non-matched sensory modalities. There are also direct intracortical connections among primary sensory cortices and connections with secondary sensory cortices of other modalities. In very old animals, the crossmodal connections strongly decrease in number or vanish entirely. This is likely due to a retraction of the projection neuron axonal branches rather than ongoing programmed cell death. The loss of crossmodal connections is also accompanied by changes in anatomical correlates of inhibition and excitation in the sensory thalamus and cortex. Together, the loss and restructuring of crossmodal connections during aging suggest a shift of multisensory processing from primary cortices towards other sensory brain areas in elderly individuals. PMID:29551970

  12. Crossmodal Connections of Primary Sensory Cortices Largely Vanish During Normal Aging.

    PubMed

    Henschke, Julia U; Ohl, Frank W; Budinger, Eike

    2018-01-01

During aging, human response times (RTs) to unisensory and crossmodal stimuli increase. However, the elderly benefit more from crossmodal stimulus representations than younger people. The underlying short-latency multisensory integration process is mediated by direct crossmodal connections at the level of primary sensory cortices. We investigate the age-related changes of these connections using a rodent model (Mongolian gerbil), retrograde tracer injections into the primary auditory (A1), somatosensory (S1), and visual cortex (V1), and immunohistochemistry for markers of apoptosis (Caspase-3), axonal plasticity (Growth associated protein 43, GAP 43), and a calcium-binding protein (Parvalbumin, PV). In adult animals, primary sensory cortices receive a substantial number of direct thalamic inputs from nuclei of their matched, but also from nuclei of non-matched sensory modalities. There are also direct intracortical connections among primary sensory cortices and connections with secondary sensory cortices of other modalities. In very old animals, the crossmodal connections strongly decrease in number or vanish entirely. This is likely due to a retraction of the projection neuron axonal branches rather than ongoing programmed cell death. The loss of crossmodal connections is also accompanied by changes in anatomical correlates of inhibition and excitation in the sensory thalamus and cortex. Together, the loss and restructuring of crossmodal connections during aging suggest a shift of multisensory processing from primary cortices towards other sensory brain areas in elderly individuals.

  13. An fMRI study of multimodal selective attention in schizophrenia

    PubMed Central

    Mayer, Andrew R.; Hanlon, Faith M.; Teshiba, Terri M.; Klimaj, Stefan D.; Ling, Josef M.; Dodd, Andrew B.; Calhoun, Vince D.; Bustillo, Juan R.; Toulouse, Trent

    2015-01-01

Background: Studies have produced conflicting evidence regarding whether cognitive control deficits in patients with schizophrenia result from dysfunction within the cognitive control network (CCN; top-down) and/or unisensory cortex (bottom-up). Aims: To investigate CCN and sensory cortex involvement during multisensory cognitive control in patients with schizophrenia. Method: Patients with schizophrenia and healthy controls underwent functional magnetic resonance imaging while performing a multisensory Stroop task involving auditory and visual distracters. Results: Patients with schizophrenia exhibited an overall pattern of response slowing, and these behavioural deficits were associated with a pattern of patient hyperactivation within auditory, sensorimotor and posterior parietal cortex. In contrast, there were no group differences in functional activation within prefrontal nodes of the CCN, with small effect sizes observed (incongruent–congruent trials). Patients with schizophrenia also failed to upregulate auditory cortex with concomitant increased attentional demands. Conclusions: Results suggest a prominent role for dysfunction within auditory, sensorimotor and parietal areas relative to prefrontal CCN nodes during multisensory cognitive control. PMID:26382953

  14. Does media multitasking always hurt? A positive correlation between multitasking and multisensory integration.

    PubMed

    Lui, Kelvin F H; Wong, Alan C-N

    2012-08-01

    Heavy media multitaskers have been found to perform poorly in certain cognitive tasks involving task switching, selective attention, and working memory. An account for this is that with a breadth-biased style of cognitive control, multitaskers tend to pay attention to various information available in the environment, without sufficient focus on the information most relevant to the task at hand. This cognitive style, however, may not cause a general deficit in all kinds of tasks. We tested the hypothesis that heavy media multitaskers would perform better in a multisensory integration task than would others, due to their extensive experience in integrating information from different modalities. Sixty-three participants filled out a questionnaire about their media usage and completed a visual search task with and without synchronous tones (pip-and-pop paradigm). It was found that a higher degree of media multitasking was correlated with better multisensory integration. The fact that heavy media multitaskers are not deficient in all kinds of cognitive tasks suggests that media multitasking does not always hurt.

  15. Integration of auditory and visual communication information in the primate ventrolateral prefrontal cortex.

    PubMed

    Sugihara, Tadashi; Diltz, Mark D; Averbeck, Bruno B; Romanski, Lizabeth M

    2006-10-25

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O'Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication.

  16. Integration of Auditory and Visual Communication Information in the Primate Ventrolateral Prefrontal Cortex

    PubMed Central

    Sugihara, Tadashi; Diltz, Mark D.; Averbeck, Bruno B.; Romanski, Lizabeth M.

    2009-01-01

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O’Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication. PMID:17065454

  17. Multisensory System for the Detection and Localization of Peripheral Subcutaneous Veins

    PubMed Central

    Fernández, Roemi; Armada, Manuel

    2017-01-01

    This paper proposes a multisensory system for the detection and localization of peripheral subcutaneous veins, as a first step for achieving automatic robotic insertion of catheters in the near future. The multisensory system is based on the combination of a SWIR (Short-Wave Infrared) camera, a TOF (Time-Of-Flight) camera and a NIR (Near Infrared) lighting source. The associated algorithm consists of two main parts: one devoted to the features extraction from the SWIR image, and another envisaged for the registration of the range data provided by the TOF camera, with the SWIR image and the results of the peripheral veins detection. In this way, the detected subcutaneous veins are mapped onto the 3D reconstructed surface, providing a full representation of the region of interest for the automatic catheter insertion. Several experimental tests were carried out in order to evaluate the capabilities of the presented approach. Preliminary results demonstrate the feasibility of the proposed design and highlight the potential benefits of the solution. PMID:28422075

  18. Perceived synchrony for realistic and dynamic audiovisual events.

    PubMed

    Eg, Ragnhild; Behne, Dawn M

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli.

  19. Perceived synchrony for realistic and dynamic audiovisual events

    PubMed Central

    Eg, Ragnhild; Behne, Dawn M.

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli. PMID:26082738
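
    Points of subjective simultaneity (PSS) and temporal-window widths of the kind derived here are typically obtained by fitting a bell-shaped curve to the proportion of "synchronous" judgments across stimulus onset asynchronies (SOAs). A minimal sketch, assuming a Gaussian shape and made-up response proportions rather than this study's data:

      import numpy as np
      from scipy.optimize import curve_fit

      def sync_curve(soa, amp, pss, width):
          # Proportion of 'synchronous' responses as a Gaussian over SOA;
          # pss is the peak (point of subjective simultaneity) and width
          # indexes the spread of the temporal integration window.
          return amp * np.exp(-0.5 * ((soa - pss) / width) ** 2)

      soas = np.array([-300.0, -200.0, -100.0, 0.0, 100.0, 200.0, 300.0])  # ms
      p_sync = np.array([0.10, 0.35, 0.80, 0.95, 0.75, 0.40, 0.15])  # illustrative

      (amp, pss, width), _ = curve_fit(sync_curve, soas, p_sync, p0=[1.0, 0.0, 100.0])
      print(f"PSS = {pss:.1f} ms, window SD = {width:.1f} ms")

    The rich between-participant variation the authors report corresponds to these fitted parameters differing widely across individuals, which is why they call for larger samples than traditional psychophysical studies.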

  20. Preliminary evidence for deficits in multisensory integration in autism spectrum disorders: the mirror neuron hypothesis.

    PubMed

    Oberman, Lindsay M; Ramachandran, Vilayanur S

    2008-01-01

Autism is a complex disorder, characterized by social, cognitive, communicative, and motor symptoms. One suggestion, proposed in the current study, to explain the spectrum of symptoms is an underlying impairment in multisensory integration (MSI) systems such as a mirror neuron-like system. The mirror neuron system, thought to play a critical role in skills such as imitation, empathy, and language, can be thought of as a multisensory system, converting sensory stimuli into motor representations. Consistent with this, we report preliminary evidence for deficits in a task thought to tap into MSI, the "bouba-kiki task", in children with ASD. The bouba-kiki effect is produced when subjects are asked to pair nonsense shapes with nonsense "words". We found that neurotypical children chose the nonsense "word" whose phonemic structure corresponded with the visual shape of the stimuli 88% of the time. This is presumably because of mirror neuron-like multisensory systems that integrate the visual shape with the corresponding motor gestures used to pronounce the nonsense word. Surprisingly, individuals with ASD chose the corresponding name only 56% of the time. The poor performance of the ASD group on this task suggests a deficit in MSI, perhaps related to impaired MSI brain systems. Though this is a behavioral study, it provides a testable hypothesis for the communication impairments in children with ASD that implicates a specific neural system and fits well with current findings suggesting an impairment in the mirror systems of individuals with ASD.

  1. Functional mobility and balance in community-dwelling elderly submitted to multisensory versus strength exercises

    PubMed Central

    Alfieri, Fábio Marcon; Riberto, Marcelo; Gatz, Lucila Silveira; Ribeiro, Carla Paschoal Corsi; Lopes, José Augusto Fernandes; Santarém, José Maria; Battistella, Linamara Rizzo

    2010-01-01

It is well documented that aging impairs balance and functional mobility. The objective of this study was to compare the efficacy of multisensory versus strength exercises on these parameters. We performed a single-blinded randomized controlled trial with 46 community-dwelling elderly allocated to strength ([GST], N = 23, mean age 70.2 ± 4.8 years) or multisensory ([GMS], N = 23, mean age 68.8 ± 5.9 years) exercises twice a week for 12 weeks. Subjects were evaluated by blinded raters using the timed 'up and go' test (TUG), the Guralnik test battery, and a force platform. By the end of the treatment, the GMS group showed significant improvements in the TUG (9.1 ± 1.9 seconds (s) to 8.0 ± 1.0 s, P = 0.002); the Guralnik test battery (10.6 ± 1.2 to 11.3 ± 0.8, P = 0.009); and lateromedial (6.1 ± 11.7 cm to 3.1 ± 1.6 cm, P = 0.02) and anteroposterior displacement (4.7 ± 4.2 cm to 3.4 ± 1.0 cm, P = 0.03); no such improvements were observed in the GST group. These results reproduce previous findings in the literature and indicate that sensory stimulation leads to better control of balance and dynamic activities. Multisensory exercises were shown to be more efficacious than strength exercises for improving functional mobility. PMID:20711437

  2. The cortical spatiotemporal correlate of otolith stimulation: Vestibular evoked potentials by body translations.

    PubMed

    Ertl, M; Moser, M; Boegle, R; Conrad, J; Zu Eulenburg, P; Dieterich, M

    2017-07-15

The vestibular organ senses linear and rotational acceleration of the head during active and passive motion. These signals are necessary for bipedal locomotion, navigation, and the coordination of eye and head movements in 3D space. The temporal dynamics of vestibular processing in cortical structures have hardly been studied in humans, let alone with natural stimulation. Our aim was to investigate the cortical vestibular network related to natural otolith stimulation using a hexapod motion platform. We conducted two experiments: (1) to estimate the sources of the vestibular evoked potentials (VestEPs) by means of distributed source localization (n=49), and (2) to reveal modulations of the VestEPs by the underlying acceleration intensity (n=24). In both experiments subjects were accelerated along the main axes (left/right, up/down, fore/aft) while the EEG was recorded. We were able to identify five VestEPs (P1, N1, P2, N2, P3) with latencies between 38 and 461 ms, as well as an evoked beta-band response peaking at a latency of 68 ms, in all subjects and for all acceleration directions. Source localization identified the cingulate sulcus visual (CSv) area and the opercular-insular region as the main origins of the evoked potentials. No lateralization effects due to handedness were observed. In the second experiment, area CSv was shown to be integral to the processing of acceleration intensities as sensed by the otolith organs, hinting at its potential role in ego-motion detection. These robust VestEPs could be used to investigate the mechanisms of inter-regional interaction in the natural context of vestibular processing and multisensory integration. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Structural and Functional Integrity of the Intraparietal Sulcus in Moderate and Severe Traumatic Brain Injury

    PubMed Central

    Sours, Chandler; Raghavan, Prashant; Medina, Alexandre E.; Roys, Steven; Jiang, Li; Zhuo, Jiachen

    2017-01-01

Severe and moderate traumatic brain injury (sTBI) often results in long-term cognitive deficits such as reduced processing speed and attention. The intraparietal sulcus (IPS) is a neocortical structure that plays a crucial role in the deeply interrelated processes of multi-sensory processing and top-down attention. Therefore, we hypothesized that disruptions in the functional and structural connections of the IPS may play a role in the development of such deficits. To examine these connections, we used resting state functional magnetic resonance imaging (rsfMRI) and diffusion kurtosis imaging (DKI) in a cohort of 27 patients with sTBI (29.3 ± 8.9 years) and 27 control participants (29.8 ± 10.3 years). Participants were prospectively recruited and received rsfMRI and neuropsychological assessments including the Automated Neuropsychological Assessment Metrics (ANAM) at greater than 6 months post-injury. A subset of participants received a DKI scan. Results suggest that patients with sTBI performed worse than control participants on multiple subtests of the ANAM, indicating reduced cognitive performance. Reduced resting state functional connectivity between the IPS and cortical regions associated with multi-sensory processing and the dorsal attention network was observed in the patients with sTBI. The patients also showed reduced structural integrity of the superior longitudinal fasciculus (SLF), a key white matter tract connecting the IPS to anterior frontal areas, as measured by reduced mean kurtosis (MK) and fractional anisotropy (FA) and increased mean diffusivity (MD). Further, this reduced structural integrity of the SLF was associated with a reduction in overall cognitive performance. These findings suggest that disruptions in the structural and functional connectivity of the IPS may contribute to the chronic cognitive deficits experienced by these patients. PMID:27931179

  4. Structural and Functional Integrity of the Intraparietal Sulcus in Moderate and Severe Traumatic Brain Injury.

    PubMed

    Sours, Chandler; Raghavan, Prashant; Medina, Alexandre E; Roys, Steven; Jiang, Li; Zhuo, Jiachen; Gullapalli, Rao P

    2017-04-01

Severe and moderate traumatic brain injury (sTBI) often results in long-term cognitive deficits such as reduced processing speed and attention. The intraparietal sulcus (IPS) is a neocortical structure that plays a crucial role in the deeply interrelated processes of multi-sensory processing and top-down attention. Therefore, we hypothesized that disruptions in the functional and structural connections of the IPS may play a role in the development of such deficits. To examine these connections, we used resting state functional magnetic resonance imaging (rsfMRI) and diffusion kurtosis imaging (DKI) in a cohort of 27 patients with sTBI (29.3 ± 8.9 years) and 27 control participants (29.8 ± 10.3 years). Participants were prospectively recruited and received rsfMRI and neuropsychological assessments including the Automated Neuropsychological Assessment Metrics (ANAM) at greater than 6 months post-injury. A subset of participants received a DKI scan. Results suggest that patients with sTBI performed worse than control participants on multiple subtests of the ANAM, indicating reduced cognitive performance. Reduced resting state functional connectivity between the IPS and cortical regions associated with multi-sensory processing and the dorsal attention network was observed in the patients with sTBI. The patients also showed reduced structural integrity of the superior longitudinal fasciculus (SLF), a key white matter tract connecting the IPS to anterior frontal areas, as measured by reduced mean kurtosis (MK) and fractional anisotropy (FA) and increased mean diffusivity (MD). Further, this reduced structural integrity of the SLF was associated with a reduction in overall cognitive performance. These findings suggest that disruptions in the structural and functional connectivity of the IPS may contribute to the chronic cognitive deficits experienced by these patients.

  5. Neural correlates of audiotactile phonetic processing in early-blind readers: an fMRI study.

    PubMed

    Pishnamazi, Morteza; Nojaba, Yasaman; Ganjgahi, Habib; Amousoltani, Asie; Oghabian, Mohammad Ali

    2016-05-01

Reading is a multisensory function that relies on arbitrary associations between auditory speech sounds and symbols from a second modality. Studies of bimodal phonetic perception have mostly investigated the integration of visual letters and speech sounds. Blind readers perform an analogous task by using tactile Braille letters instead of visual letters. The neural underpinnings of audiotactile phonetic processing have not been studied before. We used functional magnetic resonance imaging to reveal the neural correlates of audiotactile phonetic processing in 16 early-blind Braille readers. Braille letters and corresponding speech sounds were presented in unimodal and congruent/incongruent bimodal configurations. We also used a behavioral task to measure the speed of blind readers in identifying letters presented via tactile and/or auditory modalities. Reaction times for tactile stimuli were faster. The reaction times for bimodal stimuli were equal to those for the slower auditory-only stimuli. fMRI analyses revealed the convergence of unimodal auditory and unimodal tactile responses in areas of the right precentral gyrus and bilateral crus I of the cerebellum. The left and right planum temporale fulfilled the 'max criterion' for bimodal integration, but the activity of these areas was not sensitive to the phonetic congruency between sounds and Braille letters. Nevertheless, congruency effects were found in regions of the frontal lobe and cerebellum. Our findings suggest that, unlike sighted readers, who are assumed to have amodal phonetic representations, blind readers probably process letters and sounds separately. We discuss whether this distinction is due to altered development of multisensory neural circuits in early-blind individuals or to inherent differences between Braille and print reading mechanisms.

  6. Evidence for Enhanced Interoceptive Accuracy in Professional Musicians

    PubMed Central

    Schirmer-Mokwa, Katharina L.; Fard, Pouyan R.; Zamorano, Anna M.; Finkel, Sebastian; Birbaumer, Niels; Kleber, Boris A.

    2015-01-01

Interoception is defined as the perceptual activity involved in the processing of internal bodily signals. While interoceptive ability is considered a relatively stable trait, recent data suggest that learning to integrate multisensory information can modulate it. Making music is a uniquely rich multisensory experience that has been shown to alter motor, sensory, and multimodal representations in the brains of musicians. We hypothesized that musical training also heightens interoceptive accuracy, comparable to its effects on other perceptual modalities. Thirteen professional singers, twelve string players, and thirteen matched non-musicians were examined using a well-established heartbeat discrimination paradigm complemented by self-reported dispositional traits. Results revealed that both groups of musicians displayed higher interoceptive accuracy than non-musicians, whereas no differences were found between singers and string players. Regression analyses showed that accumulated musical practice explained about 49% of the variation in heartbeat perception accuracy in singers, but not in string players. Psychometric data yielded a number of psychologically plausible inter-correlations in musicians related to performance anxiety. However, dispositional traits were not a confounding factor on heartbeat discrimination accuracy. Together, these data provide first evidence that professional musicians show enhanced interoceptive accuracy compared to non-musicians. We argue that musical training largely accounts for this effect. PMID:26733836

  7. An SVM-based solution for fault detection in wind turbines.

    PubMed

    Santos, Pedro; Villa, Luisa F; Reñones, Aníbal; Bustillo, Andres; Maudes, Jesús

    2015-03-09

Research into fault diagnosis in machines with a wide range of variable loads and speeds, such as wind turbines, is of great industrial interest. Analysis of the power signals emitted by wind turbines is insufficient on its own for the diagnosis of mechanical faults in their transmission chain; a successful diagnosis requires the inclusion of accelerometers to evaluate vibrations. This work presents a multi-sensory system for fault diagnosis in wind turbines, combined with a data-mining solution for the classification of the operational state of the turbine. The selected sensors are accelerometers, whose vibration signals are processed using angular resampling techniques, together with electrical, torque and speed measurements. Support vector machines (SVMs) are selected for the classification task, including two traditional and two promising new kernels. This multi-sensory system has been validated on a test-bed that simulates the real conditions of wind turbines with two fault typologies: misalignment and imbalance. Comparison of SVM performance with the results of artificial neural networks (ANNs) shows that the linear-kernel SVM outperforms the other kernels and the ANNs in terms of accuracy, training and tuning times. The suitability and superior performance of the linear SVM is also analyzed experimentally, leading to the conclusion that this data acquisition technique generates linearly separable datasets.
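
    To give a flavor of the classification stage, the sketch below trains a linear-kernel SVM of the kind the paper reports as best-performing. The features and labels are placeholders (the real pipeline uses angular-resampled vibration spectra plus electrical, torque and speed measurements), so this is a pipeline illustration, not a reproduction of their results:

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      # Placeholder feature matrix and labels; the three classes stand for
      # healthy operation, misalignment, and imbalance.
      X = rng.normal(size=(300, 8))
      y = rng.integers(0, 3, size=300)

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
      clf.fit(X_train, y_train)
      print(clf.score(X_test, y_test))  # near chance on random data; real
                                        # features are linearly separable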

  8. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    PubMed

    Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C

    2009-01-01

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
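
    The model's core computation, words as points in a feature space with per-modality Gaussian likelihoods whose log-likelihoods add, can be sketched in a few lines. Names, dimensions, and the isotropic-noise assumption below are illustrative, not the authors' implementation:

      import numpy as np

      def word_posterior(x_aud, x_vis, lexicon, var_aud, var_vis):
          # Each row of lexicon is one candidate word's location in feature
          # space. Isotropic Gaussian likelihoods per modality; the noisier
          # modality (larger variance) contributes less to the posterior.
          lexicon = np.asarray(lexicon, dtype=float)
          log_lik = (-np.sum((x_aud - lexicon) ** 2, axis=1) / (2.0 * var_aud)
                     - np.sum((x_vis - lexicon) ** 2, axis=1) / (2.0 * var_vis))
          log_lik -= log_lik.max()  # numerical stability before exponentiating
          post = np.exp(log_lik)
          return post / post.sum()  # posterior over candidate words

    Sweeping var_aud (auditory noise) in such a model is how one asks where the visual benefit peaks; with many feature dimensions the enhancement is maximal at intermediate rather than maximal noise, as the paper reports.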

  9. Improving therapeutic outcomes in autism spectrum disorders: Enhancing social communication and sensory processing through the use of interactive robots.

    PubMed

    Sartorato, Felippe; Przybylowski, Leon; Sarko, Diana K

    2017-07-01

    For children with autism spectrum disorders (ASDs), social robots are increasingly utilized as therapeutic tools in order to enhance social skills and communication. Robots have been shown to generate a number of social and behavioral benefits in children with ASD including heightened engagement, increased attention, and decreased social anxiety. Although social robots appear to be effective social reinforcement tools in assistive therapies, the perceptual mechanism underlying these benefits remains unknown. To date, social robot studies have primarily relied on expertise in fields such as engineering and clinical psychology, with measures of social robot efficacy principally limited to qualitative observational assessments of children's interactions with robots. In this review, we examine a range of socially interactive robots that currently have the most widespread use as well as the utility of these robots and their therapeutic effects. In addition, given that social interactions rely on audiovisual communication, we discuss how enhanced sensory processing and integration of robotic social cues may underlie the perceptual and behavioral benefits that social robots confer. Although overall multisensory processing (including audiovisual integration) is impaired in individuals with ASD, social robot interactions may provide therapeutic benefits by allowing audiovisual social cues to be experienced through a simplified version of a human interaction. By applying systems neuroscience tools to identify, analyze, and extend the multisensory perceptual substrates that may underlie the therapeutic benefits of social robots, future studies have the potential to strengthen the clinical utility of social robots for individuals with ASD. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Convergence of multimodal sensory pathways to the mushroom body calyx in Drosophila melanogaster

    PubMed Central

    Yagi, Ryosuke; Mabuchi, Yuta; Mizunami, Makoto; Tanaka, Nobuaki K.

    2016-01-01

Detailed structural analyses of the mushroom body, which plays critical roles in olfactory learning and memory, revealed that it is directly connected with multiple primary sensory centers in Drosophila. Connectivity patterns between the mushroom body and primary sensory centers suggest that each mushroom body lobe processes information on different combinations of multiple sensory modalities. This finding provides a novel focus for research using Drosophila genetics into how the external world is perceived through the integration of multisensory signals. PMID:27404960

  11. Multisensory regulation of maternal behavior and masculine sexual behavior: a revised view.

    PubMed

    Stern, J M

    1990-01-01

    Frank Beach's view of the multisensory regulation in Norway rats of copulation in males (12) and of pup retrieval in females (23) is critically analyzed and revised in terms of Lashley's influence, Beach's other work, and current neurobiological knowledge. Beach's view was that no single sensory stimulus is essential to elicit these behaviors, but that all relevant stimuli available summate in the neocortex; consequently, (a) sexual "arousal" is increased in males, leading to copulation, and (b) the "efficiency," or likelihood, of retrieval is increased in postpartum mothers. The revised view is based on a component analysis in which each of these behaviors consists of a chain of motoric responses elicited by somatosensory stimulation. Distal stimuli emanating from the female or pups induce proximity by provoking orientation, attention and arousal; the meaning of these stimuli is largely learned by conditioned associations during the initial executions of the behavior, although odors may have a prepotent influence for some individuals. Stimuli are integrated in a multisensory manner by both subcortical and neocortical mechanisms. Generalizations concerning the reproductive behavior of other mammalian species are suggested.

  12. Multisensory integration of speech sounds with letters vs. visual speech: only visual speech induces the mismatch negativity.

    PubMed

    Stekelenburg, Jeroen J; Keetels, Mirjam; Vroomen, Jean

    2018-05-01

    Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect. Here, we examined whether written text, like visual speech, can induce an illusory change in the perception of speech sounds on both the behavioural and neural levels. In a sound categorization task, we found that both text and visual speech changed the identity of speech sounds from an /aba/-/ada/ continuum, but the size of this audiovisual effect was considerably smaller for text than visual speech. To examine at which level in the information processing hierarchy these multisensory interactions occur, we recorded electroencephalography in an audiovisual mismatch negativity (MMN, a component of the event-related potential reflecting preattentive auditory change detection) paradigm in which deviant text or visual speech was used to induce an illusory change in a sequence of ambiguous sounds halfway between /aba/ and /ada/. We found that only deviant visual speech induced an MMN, but not deviant text, which induced a late P3-like positive potential. These results demonstrate that text has much weaker effects on sound processing than visual speech does, possibly because text has different biological roots than visual speech. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  13. Multisensory Integration Strategy for Modality-Specific Loss of Inhibition Control in Older Adults.

    PubMed

    Lee, Ahreum; Ryu, Hokyoung; Kim, Jae-Kwan; Jeong, Eunju

    2018-04-11

Older adults are known to have lesser cognitive control capability and greater susceptibility to distraction than young adults. Previous studies have reported age-related problems in selective attention and inhibitory control, yielding mixed results depending on the modality and context in which stimuli and tasks were presented. The purpose of the study was to empirically demonstrate a modality-specific loss of inhibitory control in processing audio-visual information with ageing. A group of 30 young adults (mean age = 25.23, standard deviation (SD) = 1.86) and 22 older adults (mean age = 55.91, SD = 4.92) performed the audio-visual contour identification task (AV-CIT). We compared performance on visual/auditory identification (Uni-V, Uni-A) with that on visual/auditory identification in the presence of distraction in the counterpart modality (Multi-V, Multi-A). The findings showed a modality-specific effect on inhibitory control. Uni-V performance was significantly better than Multi-V, indicating that auditory distraction significantly hampered visual target identification. However, Multi-A performance was significantly enhanced compared to Uni-A, indicating that auditory target performance was significantly enhanced by visual distraction. Additional analysis showed an age-specific effect on the enhancement between Uni-A and Multi-A depending on the level of visual inhibition. Together, our findings indicate that the loss of visual inhibitory control was beneficial for auditory target identification presented in a multimodal context in older adults. A likely multisensory information processing strategy in older adults is further discussed in relation to aged cognition.

  14. Multisensory integration in early vestibular processing in mice: the encoding of passive vs. active motion

    PubMed Central

    Medrea, Ioana

    2013-01-01

    The mouse has become an important model system for studying the cellular basis of learning and coding of heading by the vestibular system. Here we recorded from single neurons in the vestibular nuclei to understand how vestibular pathways encode self-motion under natural conditions, during which proprioceptive and motor-related signals as well as vestibular inputs provide feedback about an animal's movement through the world. We recorded neuronal responses in alert behaving mice focusing on a group of neurons, termed vestibular-only cells, that are known to control posture and project to higher-order centers. We found that the majority (70%, n = 21/30) of neurons were bimodal, in that they responded robustly to passive stimulation of proprioceptors as well as passive stimulation of the vestibular system. Additionally, the linear summation of a given neuron's vestibular and neck sensitivities predicted well its responses when both stimuli were applied simultaneously. In contrast, neuronal responses were suppressed when the same motion was actively generated, with the one striking exception that the activity of bimodal neurons similarly and robustly encoded head on body position in all conditions. Our results show that proprioceptive and motor-related signals are combined with vestibular information at the first central stage of vestibular processing in mice. We suggest that these results have important implications for understanding the multisensory integration underlying accurate postural control and the neural representation of directional heading in the head direction cell network of mice. PMID:24089394

15. Environment, physical activity, and neurogenesis: implications for prevention and treatment of Alzheimer's disease.

    PubMed

    Briones, Teresita L

    2006-02-01

    Age is the biggest risk factor for the development of neurodegenerative diseases. Consequently, as the population ages it becomes more critical to find ways to avoid the debilitating cost of neurodegenerative diseases such as Alzheimer's. Some of the non-invasive strategies that can potentially slow down the mental decline associated with aging are exercise and use of multi-sensory environmental stimulation. The beneficial effects of both exercise and multi-sensory environmental stimulation have been well-documented, thus it is possible that these strategies can either provide neuroprotection or increase resistance to the development of age-related cognitive problems.

  16. A Weighted Measurement Fusion Particle Filter for Nonlinear Multisensory Systems Based on Gauss–Hermite Approximation

    PubMed Central

    Li, Yun

    2017-01-01

We addressed the fusion estimation problem for nonlinear multisensory systems. Based on the Gauss–Hermite approximation and the weighted least squares criterion, an augmented high-dimension measurement from all sensors was compressed into a lower dimension. By combining the low-dimension measurement function with the particle filter (PF), a weighted measurement fusion PF (WMF-PF) is presented. WMF-PF achieves good accuracy at a lower computational cost than the centralized fusion PF (CF-PF). An example is given to show the effectiveness of the proposed algorithms. PMID:28956862
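
    The compression step described here, collapsing a stacked multisensor measurement into one lower-dimension pseudo-measurement by weighted least squares, can be sketched for the linear measurement case as follows (the paper's Gauss–Hermite handling of nonlinearity is omitted, and the example numbers are illustrative):

      import numpy as np

      def wls_compress(z, H, R):
          # z: stacked measurements from all sensors; H: stacked measurement
          # matrices; R: block-diagonal measurement noise covariance.
          Ri = np.linalg.inv(R)
          R_fused = np.linalg.inv(H.T @ Ri @ H)  # covariance of fused measurement
          z_fused = R_fused @ (H.T @ Ri @ z)     # lower-dimension pseudo-measurement
          return z_fused, R_fused                # feed these to one filter update

      # Two sensors observing the same 2-D state with different noise levels.
      H = np.vstack([np.eye(2), np.eye(2)])
      R = np.diag([0.5, 0.5, 2.0, 2.0])
      z = np.array([1.1, -0.2, 0.8, 0.1])
      print(wls_compress(z, H, R))

    Because the fused pseudo-measurement has the dimension of the state rather than of all sensors combined, a single particle filter update replaces per-sensor updates, which is where the computational saving over centralized fusion comes from.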

  17. Age-Related Deficits in Auditory Confrontation Naming

    PubMed Central

    Hanna-Pladdy, Brenda; Choi, Hyun

    2015-01-01

The naming of manipulable objects by older and younger adults was evaluated across auditory, visual, and multisensory conditions. Older adults were less accurate and slower in naming across conditions, and all subjects were less accurate and slower at naming action sounds than pictures or audiovisual combinations. Moreover, there was a sensory-by-age-group interaction, revealing lower accuracy and increased latencies in auditory naming for older adults that were unrelated to hearing insensitivity, but with modest improvement from multisensory cues. These findings support age-related deficits in object action naming and suggest that auditory confrontation naming may be more sensitive than visual naming. PMID:20677880

  18. Learning to associate auditory and visual stimuli: behavioral and neural mechanisms.

    PubMed

    Altieri, Nicholas; Stevenson, Ryan A; Wallace, Mark T; Wenger, Michael J

    2015-05-01

The ability to effectively combine sensory inputs across modalities is vital for acquiring a unified percept of events. For example, watching a hammer hit a nail while simultaneously identifying the sound as originating from the event requires the ability to identify spatio-temporal congruencies and statistical regularities. In this study, we applied a reaction-time and hazard-function measure known as capacity (e.g., Townsend and Ashby, Cognitive Theory, pp. 200-239, 1978) to quantify the extent to which observers learn paired associations between simple auditory and visual patterns in a model-theoretic manner. As expected, results showed that learning was associated with an increase in accuracy but, more significantly, an increase in capacity. The aim of this study was to associate capacity measures of multisensory learning with neural-based measures, namely mean global field power (GFP). We observed a co-variation between an increase in capacity and a decrease in GFP amplitude as learning occurred. This suggests that capacity constitutes a reliable behavioral index of efficient energy expenditure in the neural domain.
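
    The capacity measure used here compares the integrated hazard of the redundant (audiovisual) condition against the sum of the unisensory hazards, C(t) = H_AV(t) / (H_A(t) + H_V(t)) with H(t) = -log S(t), where S is the survivor function of the reaction-time distribution. A rough empirical sketch (published analyses use more careful survival estimators than this plug-in version):

      import numpy as np

      def integrated_hazard(rts, t):
          # H(t) = -log S(t), with S(t) estimated as the fraction of
          # reaction times still pending at time t.
          rts = np.asarray(rts, dtype=float)
          survival = max(np.mean(rts > t), 1.0 / rts.size)  # avoid log(0)
          return -np.log(survival)

      def capacity(rt_av, rt_a, rt_v, t):
          # C(t) > 1 suggests super-capacity (an integration benefit beyond
          # independent parallel channels); C(t) < 1 suggests limited capacity.
          return integrated_hazard(rt_av, t) / (
              integrated_hazard(rt_a, t) + integrated_hazard(rt_v, t))

    In these terms, the paper's claim is that paired-associate learning pushes C(t) upward while mean GFP amplitude falls.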

  19. A simple and efficient method to enhance audiovisual binding tendencies

    PubMed Central

    Wozny, David R.; Shams, Ladan

    2017-01-01

    Individuals vary in their tendency to bind signals from multiple senses. For the same set of sights and sounds, one individual may frequently integrate multisensory signals and experience a unified percept, whereas another individual may rarely bind them and often experience two distinct sensations. Thus, while this binding/integration tendency is specific to each individual, it is not clear how plastic this tendency is in adulthood, and how sensory experiences may cause it to change. Here, we conducted an exploratory investigation which provides evidence that (1) the brain’s tendency to bind in spatial perception is plastic, (2) that it can change following brief exposure to simple audiovisual stimuli, and (3) that exposure to temporally synchronous, spatially discrepant stimuli provides the most effective method to modify it. These results can inform current theories about how the brain updates its internal model of the surrounding sensory world, as well as future investigations seeking to increase integration tendencies. PMID:28462016

  20. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.

    PubMed

    Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).

  1. Multisensory emotion perception in congenitally, early, and late deaf CI users

    PubMed Central

    Fengler, Ineke; Nava, Elena; Villwock, Agnes K.; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2017-01-01

    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences. PMID:29023525

  2. Multisensory emotion perception in congenitally, early, and late deaf CI users.

    PubMed

    Fengler, Ineke; Nava, Elena; Villwock, Agnes K; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2017-01-01

    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences.

  3. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training

    PubMed Central

    Eberhardt, Silvio P.; Auer Jr., Edward T.; Bernstein, Lynne E.

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee’s primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee’s lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT). PMID:25400566

  4. Cognitive integration of asynchronous natural or non-natural auditory and visual information in videos of real-world events: an event-related potential study.

    PubMed

    Liu, B; Wang, Z; Wu, G; Meng, X

    2011-04-28

    In this paper, we aim to study the cognitive integration of asynchronous natural or non-natural auditory and visual information in videos of real-world events. Videos with asynchronous, semantically consistent or inconsistent natural sound or speech were used as stimuli in order to compare the differences and similarities between multisensory integration of videos with asynchronous natural sound and with speech. The event-related potential (ERP) results showed that N1 and P250 components were elicited irrespective of whether natural sounds were consistent or inconsistent with critical actions in the videos. Videos with inconsistent natural sound elicited N400-P600 effects relative to videos with consistent natural sound, similar to results from unisensory visual studies. Videos with semantically consistent or inconsistent speech both elicited N1 components. Meanwhile, videos with inconsistent speech elicited N400-LPN effects in comparison with videos with consistent speech, indicating that this semantic processing was probably related to recognition memory. Moreover, the N400 effect elicited by videos with semantically inconsistent speech was larger and later than that elicited by videos with semantically inconsistent natural sound. Overall, multisensory integration of videos with natural sound or speech can be roughly divided into two stages. For videos with natural sound, the first stage might reflect the connection between the received information and information stored in memory, and the second the evaluation of inconsistent semantic information. For videos with speech, the first stage was similar to that for videos with natural sound, while the second might be related to the recognition memory process. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. Cerebral gray matter volume in patients with chronic migraine: correlations with clinical features.

    PubMed

    Coppola, Gianluca; Petolicchio, Barbara; Di Renzo, Antonio; Tinelli, Emanuele; Di Lorenzo, Cherubino; Parisi, Vincenzo; Serrao, Mariano; Calistri, Valentina; Tardioli, Stefano; Cartocci, Gaia; Ambrosini, Anna; Caramia, Francesca; Di Piero, Vittorio; Pierelli, Francesco

    2017-12-08

    To date, few MRI studies have been performed in patients affected by chronic migraine (CM), especially in those without medication overuse. Here, we performed magnetic resonance imaging (MRI) voxel-based morphometry (VBM) analyses to investigate the gray matter (GM) volume of the whole brain in patients affected by CM. Our aim was to investigate whether fluctuations in the GM volumes were related to the clinical features of CM. Twenty untreated patients with CM without a past medical history of medication overuse underwent 3-Tesla MRI scans and were compared to a group of 20 healthy controls (HCs). We used SPM12 and the CAT12 toolbox to process the MRI data and to perform VBM analyses of the structural T1-weighted MRI scans. The GM volume of patients was compared to that of HCs with various corrected and uncorrected thresholds. To check for possible correlations, patients' clinical features and GM maps were regressed. Initially, we did not find significant differences in the GM volume between patients with CM and HCs (p < 0.05 corrected for multiple comparisons). However, using more-liberal uncorrected statistical thresholds, we noted that compared to HCs, patients with CM exhibited clusters of regions with lower GM volumes including the cerebellum, left middle temporal gyrus, left temporal pole/amygdala/hippocampus/pallidum/orbitofrontal cortex, and left occipital areas (Brodmann areas 17/18). The GM volume of the cerebellar hemispheres was negatively correlated with the disease duration and positively correlated with the number of tablets taken per month. No gross morphometric changes were observed in patients with CM when compared with HCs. However, using more-liberal uncorrected statistical thresholds, we observed that CM is associated with subtle GM volume changes in several brain areas known to be involved in nociception/antinociception, multisensory integration, and analgesic dependence. We speculate that these slight morphometric impairments could lead, at least in a subgroup of patients, to the development and continuation of maladaptive acute medication usage.

  6. Multi-Sensor Systems and Data Fusion for Telecommunications, Remote Sensing and Radar (les Systemes multi-senseurs et le fusionnement des donnees pour les telecommunications, la teledetection et les radars)

    DTIC Science & Technology

    1998-04-01

    The result of the project is a demonstration of the fusion process, the sensors management and the real-time capabilities using simulated sensors...demonstrator (TAD) is a system that demonstrates the core element of a battlefield ground surveillance system by simulation in near real-time. The core...Management and Sensor/Platform simulation. The surveillance system observes the real world through a non-collocated heterogeneous multisensory system

  7. The spatial reliability of task-irrelevant sounds modulates bimodal audiovisual integration: An event-related potential study.

    PubMed

    Li, Qi; Yu, Hongtao; Wu, Yan; Gao, Ning

    2016-08-26

    The integration of multiple sensory inputs is essential for perception of the external world. The spatial factor is a fundamental property of multisensory audiovisual integration. Previous studies of the spatial constraints on bimodal audiovisual integration have mainly focused on the spatial congruity of audiovisual information. However, the effect of spatial reliability within audiovisual information on bimodal audiovisual integration remains unclear. In this study, we used event-related potentials (ERPs) to examine the effect of spatial reliability of task-irrelevant sounds on audiovisual integration. Three relevant ERP components emerged: the first at 140-200 ms over a wide central area, the second at 280-320 ms over the fronto-central area, and a third at 380-440 ms over the parieto-occipital area. Our results demonstrate that ERP amplitudes elicited by audiovisual stimuli with reliable spatial relationships are larger than those elicited by stimuli with inconsistent spatial relationships. In addition, we hypothesized that spatial reliability within an audiovisual stimulus enhances feedback projections to the primary visual cortex from multisensory integration regions. Overall, our findings suggest that the spatial linking of visual and auditory information depends on spatial reliability within an audiovisual stimulus and occurs at a relatively late stage of processing. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. An SVM-Based Solution for Fault Detection in Wind Turbines

    PubMed Central

    Santos, Pedro; Villa, Luisa F.; Reñones, Aníbal; Bustillo, Andres; Maudes, Jesús

    2015-01-01

    Research into fault diagnosis in machines with a wide range of variable loads and speeds, such as wind turbines, is of great industrial interest. Analysis of the power signals emitted by wind turbines is, on its own, insufficient for diagnosing mechanical faults in their transmission chain; a successful diagnosis requires the inclusion of accelerometers to evaluate vibrations. This work presents a multi-sensory system for fault diagnosis in wind turbines, combined with a data-mining solution for classifying the operational state of the turbine. The selected sensors are accelerometers, whose vibration signals are processed using angular resampling techniques, complemented by electrical, torque, and speed measurements. Support vector machines (SVMs) are selected for the classification task, including two traditional and two promising new kernels. This multi-sensory system has been validated on a test-bed that simulates the real conditions of wind turbines with two fault typologies: misalignment and imbalance. Comparison of SVM performance with the results of artificial neural networks (ANNs) shows that the linear-kernel SVM outperforms the other kernels and the ANNs in terms of accuracy, training, and tuning times. The suitability and superior performance of the linear SVM is also analyzed experimentally, leading to the conclusion that this data acquisition technique generates linearly separable datasets. PMID:25760051
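
    To make the classification stage concrete, here is a hedged Python/scikit-learn sketch of training a linear-kernel SVM on multi-sensor features with standardization and cross-validation. The feature matrix and labels are random placeholders standing in for the paper's test-bed data; nothing here reproduces the authors' exact preprocessing.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Hypothetical stand-ins: one row per operating snapshot, columns
        # mixing spectral features from angularly resampled accelerometer
        # signals with electrical, torque, and speed measurements.
        # Labels: 0 = healthy, 1 = misalignment, 2 = imbalance.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 24))
        y = rng.integers(0, 3, size=300)

        # Standardize features, then fit a linear-kernel SVM, the kernel
        # reported above as the most accurate and cheapest to tune.
        clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"linear-SVM accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

    On real, linearly separable vibration features (as the paper concludes these are), the linear kernel avoids the kernel-parameter search that radial or polynomial kernels require, which is why its training and tuning times are lower.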

  9. Effects of multisensory stimulation on cognition, depression and anxiety levels of mildly-affected Alzheimer's patients.

    PubMed

    Ozdemir, Leyla; Akdemir, Nuran

    2009-08-15

    The purpose of this study was to investigate and assess the effects of musical therapy, painting inanimate-animate object pictures, and orientation to time-place-person interventions on the cognitive state, depression, and anxiety levels of mildly-affected Alzheimer's patients. The study, which used a quasi-experimental design, was conducted with 27 mildly-affected Alzheimer's patients. The effects of the multisensory stimulation were evaluated with the "Mini Mental State Examination" (MMSE), the "Geriatric Depression Scale," and the "Beck Anxiety Scale." All of these were administered one day prior to beginning the study, immediately after its completion, and three weeks thereafter. A significant negative correlation was found between the MMSE and depression scores and between the MMSE and anxiety scores, whereas the correlation between the depression and anxiety scores was significantly positive. The shifts over time in the MMSE, depression, and anxiety scores were significant. The primary conclusion of the study is that the multisensory stimulation method applied to mildly-affected Alzheimer's patients had a positive effect on their cognitive state, depression, and anxiety, and that this effect continued for three weeks following completion of the study intervention, with a tendency to decline progressively.

  10. Comparison of multisensory and strength training for postural control in the elderly

    PubMed Central

    Alfieri, Fábio Marcon; Riberto, Marcelo; Gatz, Lucila Silveira; Ribeiro, Carla Paschoal Corsi; Lopes, José Augusto Fernandes; Battistella, Linamara Rizzo

    2012-01-01

    Objective: The objective of this study was to analyze the efficacy of multisensory versus muscle-strengthening exercises to improve postural control in healthy community-dwelling elderly. Participants: We performed a single-blinded study with 46 community-dwelling elderly allocated to a strength group (GS, n = 23; 70.18 ± 4.8 years; 22 women and 1 man) and a multisensory exercise group (GM, n = 23; 68.8 ± 5.9 years; 22 women and 1 man) for 12 weeks. Methods: We performed isokinetic evaluations of muscle groups in the ankle and foot, including dorsiflexors, plantar flexors, invertors, and evertors. The oscillation of the center of pressure was assessed with a force platform. Results: The GM group presented a reduction in oscillation (66.8 ± 273.4 cm2 to 11.1 ± 11.6 cm2; P = 0.02), which was not observed in the GS group. The GM group showed better results for peak torque and work than the GS group, but without statistical significance. Conclusion: Although the GM group presented better results, it is not possible to state that one exercise regimen proved more efficacious than the other in improving balance control. PMID:22654512

  11. A neuroscientific perspective on music therapy.

    PubMed

    Koelsch, Stefan

    2009-07-01

    In recent years, a number of studies have demonstrated that music listening (and even more so music production) activates a multitude of brain structures involved in cognitive, sensorimotor, and emotional processing. For example, music engages sensory processes, attention, memory-related processes, perception-action mediation ("mirror neuron system" activity), multisensory integration, activity changes in core areas of emotional processing, processing of musical syntax and musical meaning, and social cognition. It is likely that the engagement of these processes by music can have beneficial effects on the psychological and physiological health of individuals, although the mechanisms underlying such effects are currently not well understood. This article gives a brief overview of factors contributing to the effects of music-therapeutic work. Then, neuroscientific studies using music to investigate emotion, perception-action mediation ("mirror function"), and social cognition are reviewed, including illustrations of the relevance of these domains for music therapy.

  12. Multisensory Integration Strategy for Modality-Specific Loss of Inhibition Control in Older Adults

    PubMed Central

    Ryu, Hokyoung; Kim, Jae-Kwan; Jeong, Eunju

    2018-01-01

    Older adults are known to have lesser cognitive control capability and greater susceptibility to distraction than young adults. Previous studies have reported age-related problems in selective attention and inhibitory control, yielding mixed results depending on the modality and context in which stimuli and tasks were presented. The purpose of the study was to empirically demonstrate a modality-specific loss of inhibitory control in processing audio-visual information with ageing. A group of 30 young adults (mean age = 25.23, Standard Deviation (SD) = 1.86) and 22 older adults (mean age = 55.91, SD = 4.92) performed the audio-visual contour identification task (AV-CIT). We compared performance on visual/auditory identification (Uni-V, Uni-A) with that on visual/auditory identification in the presence of distraction in the counterpart modality (Multi-V, Multi-A). The findings showed a modality-specific effect on inhibitory control. Uni-V performance was significantly better than Multi-V, indicating that auditory distraction significantly hampered visual target identification. However, Multi-A performance was significantly enhanced compared to Uni-A, indicating that auditory target performance was significantly enhanced by visual distraction. Additional analysis showed an age-specific effect on the enhancement between Uni-A and Multi-A depending on the level of visual inhibition. Together, our findings indicate that the loss of visual inhibitory control was beneficial for auditory target identification in a multimodal context in older adults. A likely multisensory information-processing strategy in older adults is further discussed in relation to cognitive aging. PMID:29641462

  13. Gustatory and reward brain circuits in the control of food intake

    PubMed Central

    Oliveira-Maia, Albino J.; Roberts, Craig D.; Simon, Sidney A.; Nicolelis, Miguel A.L.

    2012-01-01

    Gustation is a multisensory process allowing for the selection of nutrients and the rejection of irritating and/or toxic compounds. Since obesity is a highly prevalent condition that is critically dependent on food intake and energy expenditure, a deeper understanding of gustatory processing is an important objective in biomedical research. Recent findings have provided evidence that central gustatory processes are distributed across several cortical and sub-cortical brain areas. Furthermore, these gustatory sensory circuits are closely related to the circuits that process reward. Here, we present an overview of the activation and connectivity between central gustatory and reward areas. Moreover, and given the limitations in number and effectiveness of treatments currently available for overweight patients, we discuss the possibility of modulating neuronal activity in these circuits as an alternative in the treatment of obesity. PMID:21197607

  14. Sound effects: Multimodal input helps infants find displaced objects.

    PubMed

    Shinskey, Jeanne L

    2017-09-01

    Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion, suggesting auditory input is more salient in the absence of visual input. This article addresses how audiovisual input affects 10-month-olds' search for displaced objects. In AB tasks, infants who previously retrieved an object at A subsequently fail to find it after it is displaced to B, especially following a delay between hiding and retrieval. Experiment 1 manipulated auditory input by keeping the hidden object audible versus silent, and visual input by presenting the delay in the light versus dark. Infants succeeded more at B with audible than silent objects and, unexpectedly, more after delays in the light than dark. Experiment 2 presented both the delay and search phases in darkness. The unexpected light-dark difference disappeared. Across experiments, the presence of auditory input helped infants find displaced objects, whereas the absence of visual input did not. Sound might help by strengthening object representation, reducing memory load, or focusing attention. This work provides new evidence on when bimodal input aids object processing, corroborates claims that audiovisual processing improves over the first year of life, and contributes to multisensory approaches to studying cognition. Statement of contribution: What is already known on this subject? Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion. This suggests they find auditory input more salient in the absence of visual input in simple search tasks. After 9 months, infants' object processing appears more sensitive to multimodal (e.g., audiovisual) input. What does this study add? This study tested how audiovisual input affects 10-month-olds' search for an object displaced in an AB task. Sound helped infants find displaced objects in both the presence and absence of visual input. Object processing becomes more sensitive to bimodal input as multisensory functions develop across the first year. © 2016 The British Psychological Society.

  15. Using joint ICA to link function and structure using MEG and DTI in schizophrenia

    PubMed Central

    Stephen, JM; Coffman, BA; Jung, RE; Bustillo, JR; Aine, CJ; Calhoun, VD

    2013-01-01

    In this study we employed joint independent component analysis (jICA) to perform a novel multivariate integration of magnetoencephalography (MEG) and diffusion tensor imaging (DTI) data to investigate the link between function and structure. This model-free approach allows one to identify covariation across modalities with different temporal and spatial scales [temporal variation in MEG and spatial variation in fractional anisotropy (FA) maps]. Healthy controls (HC) and patients with schizophrenia (SP) participated in an auditory/visual multisensory integration paradigm to probe cortical connectivity in schizophrenia. To allow direct comparisons across participants and groups, the MEG data were registered to an average head position and regional waveforms were obtained by calculating the local field power of the planar gradiometers. Diffusion tensor images obtained in the same individuals were preprocessed to provide FA maps for each participant. The MEG/FA data were then integrated using the jICA software (http://mialab.mrn.org/software/fit). We identified MEG/FA components that demonstrated significantly different (p < 0.05) covariation in MEG/FA data between diagnostic groups (SP vs. HC) and three components that captured the predominant sensory responses in the MEG data. Lower FA values in bilateral posterior parietal regions, which include anterior/posterior association tracts, were associated with reduced MEG amplitude (120-170 ms) of the visual response in occipital sensors in SP relative to HC. Additionally, increased FA in a right medial frontal region was linked with larger amplitude late MEG activity (300-400 ms) in bilateral central channels for SP relative to HC. Step-wise linear regression provided evidence that right temporal, occipital and late central components were significant predictors of reaction time and cognitive performance based on the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) cognitive assessment battery. These results point to dysfunction in a posterior visual processing network in schizophrenia, with reduced MEG amplitude, reduced FA and poorer overall performance on the MATRICS. Interestingly, the spatial location of the MEG activity and the associated FA regions are spatially consistent with white matter regions that subserve these brain areas. This novel approach provides evidence for significant pairing between function (electrophysiology) and structure (white matter integrity) and demonstrates the sensitivity of this multivariate, multimodal integration technique to group differences in function and structure. PMID:23777757
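
    A minimal sketch of the joint ICA idea (Python/scikit-learn): features from the two modalities are z-scored, concatenated side by side for each participant, and decomposed so that both modalities share one set of participant loadings. The matrices below are random placeholders; the study used the jICA software linked above, not this code.

        import numpy as np
        from sklearn.decomposition import FastICA

        n_subj = 40
        rng = np.random.default_rng(1)
        meg = rng.normal(size=(n_subj, 5000))   # e.g. local field power, flattened
        fa = rng.normal(size=(n_subj, 20000))   # e.g. voxelwise FA values

        # z-score each modality so neither dominates, then concatenate:
        # jICA assumes both modalities share a common mixing over subjects.
        joint = np.hstack([(m - m.mean(0)) / m.std(0) for m in (meg, fa)])

        ica = FastICA(n_components=8, random_state=0)
        maps = ica.fit_transform(joint.T)  # (features, 8): joint MEG/FA maps
        loadings = ica.mixing_             # (subjects, 8): per-subject weights

        # Group comparison: test patient vs. control loadings per component,
        # e.g. with a two-sample t-test on each column of `loadings`.

    Because each component couples an MEG segment and an FA segment through the same subject loadings, a group difference in loadings (as reported above for SP vs. HC) ties a functional effect and a structural effect together in a single statistic.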

  16. Cross-modal versus within-modal recall: differences in behavioral and brain responses.

    PubMed

    Butler, Andrew J; James, Karin H

    2011-10-31

    Although human experience is multisensory in nature, previous research has focused predominantly on memory for unisensory as opposed to multisensory information. In this work, we sought to investigate behavioral and neural differences between the cued recall of cross-modal audiovisual associations versus within-modal visual or auditory associations. Participants were presented with cue-target associations comprised of pairs of nonsense objects, pairs of nonsense sounds, objects paired with sounds, and sounds paired with objects. Subsequently, they were required to recall the modality of the target given the cue while behavioral accuracy, reaction time, and blood oxygenation level dependent (BOLD) activation were measured. Successful within-modal recall was associated with modality-specific reactivation in primary perceptual regions, and was more accurate than cross-modal retrieval. When auditory targets were correctly or incorrectly recalled using a cross-modal visual cue, there was re-activation in auditory association cortex, and recall of information from cross-modal associations activated the hippocampus to a greater degree than within-modal associations. Findings support theories that propose an overlap between regions active during perception and memory, and show that behavioral and neural differences exist between within- and cross-modal associations. Overall the current study highlights the importance of the role of multisensory information in memory. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Multisensory Motion Perception in 3–4 Month-Old Infants

    PubMed Central

    Nava, Elena; Grassi, Massimo; Brenna, Viola; Croci, Emanuela; Turati, Chiara

    2017-01-01

    Human infants begin very early in life to take advantage of multisensory information by extracting the invariant amodal information that is conveyed redundantly by multiple senses. Here we addressed the question as to whether infants can bind multisensory moving stimuli, and whether this occurs even if the motion produced by the stimuli is only illusory. Three- to 4-month-old infants were presented with two bimodal pairings: visuo-tactile and audio-visual. Visuo-tactile pairings consisted of apparently vertically moving bars (the Barber Pole illusion) moving in either the same or opposite direction with a concurrent tactile stimulus consisting of strokes given on the infant’s back. Audio-visual pairings consisted of the Barber Pole illusion in its visual and auditory version, the latter giving the impression of a continuously rising or descending pitch. We found that infants were able to discriminate congruently (same direction) vs. incongruently (opposite direction) moving pairs irrespective of modality (Experiment 1). Importantly, we also found that congruently moving visuo-tactile and audio-visual stimuli were preferred over incongruently moving bimodal stimuli (Experiment 2). Our findings suggest that very young infants are able to extract motion as an amodal component and use it to match stimuli that only apparently move in the same direction. PMID:29187829

  18. Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.

    PubMed

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L

    2017-05-01

    Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.

  19. Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task

    PubMed Central

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.

    2016-01-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829

  20. A measure for assessing the effects of audiovisual speech integration.

    PubMed

    Altieri, Nicholas; Townsend, James T; Wenger, Michael J

    2014-06-01

    We propose a measure of audiovisual speech integration that takes into account accuracy and response times. This measure should prove beneficial for researchers investigating multisensory speech recognition, since it relates to normal-hearing and aging populations. As an example, age-related sensory decline influences both the rate at which one processes information and the ability to utilize cues from different sensory modalities. Our function assesses integration when both auditory and visual information are available, by comparing performance on these audiovisual trials with theoretical predictions for performance under the assumptions of parallel, independent self-terminating processing of single-modality inputs. We provide example data from an audiovisual identification experiment and discuss applications for measuring audiovisual integration skills across the life span.
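
    For the reaction-time side of such a measure, the usual benchmark under parallel, independent, self-terminating processing is that a redundant-signals response occurs as soon as either channel finishes, so the predicted audiovisual survivor function is the product of the unimodal ones. The Python sketch below computes that prediction for comparison with observed audiovisual RTs; it is a generic illustration and omits the accuracy component of the proposed measure.

        import numpy as np

        def survivor(rts, t_grid):
            # Empirical survivor function S(t) = P(RT > t).
            rts = np.asarray(rts)
            return np.array([(rts > t).mean() for t in t_grid])

        def parallel_independent_prediction(rt_a, rt_v, t_grid):
            # First-terminating race of independent channels:
            # S_AV(t) = S_A(t) * S_V(t).
            return survivor(rt_a, t_grid) * survivor(rt_v, t_grid)

        # Observed S_AV(t) falling below this prediction (i.e., faster
        # audiovisual responses than independent racing allows) points
        # to an integration benefit; falling above it suggests a deficit.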
