Gohel, Bakul; Lee, Peter; Jeong, Yong
2016-08-01
Brain regions that respond to more than one sensory modality are characterized as multisensory regions. Studies on the processing of shape or object information have revealed recruitment of the lateral occipital cortex, posterior parietal cortex, and other regions regardless of the input sensory modality. However, it remains unknown whether such regions show similar (modality-invariant) or different (modality-specific) neural oscillatory dynamics, as recorded using magnetoencephalography (MEG), in response to identical shape-information processing tasks delivered through different sensory modalities. Modality-invariant or modality-specific neural oscillatory dynamics indirectly suggest modality-independent or modality-dependent participation of particular brain regions, respectively. Therefore, this study investigated the modality-specificity of neural oscillatory dynamics, in the form of spectral power modulation patterns, in response to visual and tactile sequential shape-processing tasks that were well matched in speed and content across the sensory modalities. Task-related changes in spectral power modulation, and differences in spectral power modulation between sensory modalities, were investigated at the source-space (voxel) level using a multivariate pattern classification (MVPC) approach. Additionally, the whole analysis was extended from the voxel level to the independent-component level to account for signal leakage effects caused by the inverse solution. Modality-specific spectral dynamics were found in multisensory and higher-order brain regions, such as the lateral occipital cortex, posterior parietal cortex, and inferior temporal cortex, that showed task-related modulation in response to both sensory modalities. This suggests that the participation of such brain regions in sequential shape-information processing depends on the input sensory modality. Copyright © 2016 Elsevier B.V. All rights reserved.
Modality-specific selective attention attenuates multisensory integration.
Mozolic, Jennifer L; Hugenschmidt, Christina E; Peiffer, Ann M; Laurienti, Paul J
2008-01-01
Stimuli occurring in multiple sensory modalities that are temporally synchronous or spatially coincident can be integrated together to enhance perception. Additionally, the semantic content or meaning of a stimulus can influence cross-modal interactions, improving task performance when these stimuli convey semantically congruent or matching information, but impairing performance when they contain non-matching or distracting information. Attention is one mechanism that is known to alter processing of sensory stimuli by enhancing perception of task-relevant information and suppressing perception of task-irrelevant stimuli. It is not known, however, to what extent attention to a single sensory modality can minimize the impact of stimuli in the unattended sensory modality and reduce the integration of stimuli across multiple sensory modalities. Our hypothesis was that modality-specific selective attention would limit processing of stimuli in the unattended sensory modality, resulting in a reduction of performance enhancements produced by semantically matching multisensory stimuli, and a reduction in performance decrements produced by semantically non-matching multisensory stimuli. The results from two experiments utilizing a cued discrimination task demonstrate that selective attention to a single sensory modality prevents the integration of matching multisensory stimuli that is normally observed when attention is divided between sensory modalities. Attention did not reliably alter the amount of distraction caused by non-matching multisensory stimuli on this task; however, these findings highlight a critical role for modality-specific selective attention in modulating multisensory integration.
Dolivo, Vassilissa; Taborsky, Michael
2017-05-01
The sensory modalities individuals use to obtain information from the environment differ among conspecifics. The relative contributions of genetic divergence and environmental plasticity to this variance remain unclear. Numerous studies have shown that specific sensory enrichments or impoverishments at the postnatal stage can shape neural development, with potential lifelong effects. For species capable of adjusting to novel environments, specific sensory stimulation at a later life stage could also induce specific long-lasting behavioral effects. To test this possibility, we enriched young adult Norway rats with either visual, auditory, or olfactory cues. Four to 8 months after the enrichment period, we tested each rat for its learning ability in three two-choice discrimination tasks, involving either visual, auditory, or olfactory stimulus discrimination, in a full factorial design. No sensory modality was more relevant than the others for the task per se, but rats performed better when tested in the modality for which they had been enriched. This shows that specific environmental conditions encountered during early adulthood have specific long-lasting effects on the learning abilities of rats. Furthermore, we disentangled the relative contributions of genetic and environmental causes of the response. The reaction norms of learning abilities in relation to the stimulus modality did not differ between families, so interindividual divergence was mainly driven by environmental rather than genetic factors. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Parallel pathways for cross-modal memory retrieval in Drosophila.
Zhang, Xiaonan; Ren, Qingzhong; Guo, Aike
2013-05-15
Memory-retrieval processing of cross-modal sensory preconditioning is vital for understanding the plasticity underlying the interactions between modalities. As part of the sensory preconditioning paradigm, it has been hypothesized that the conditioned response to an unreinforced cue depends on the memory of the reinforced cue via a sensory link between the two cues. To test this hypothesis, we studied cross-modal memory-retrieval processing in a genetically tractable model organism, Drosophila melanogaster. By expressing the dominant temperature-sensitive shibire(ts1) (shi(ts1)) transgene, which blocks synaptic vesicle recycling in specific neural subsets with the Gal4/UAS system at the restrictive temperature, we specifically blocked visual and olfactory memory retrieval, either alone or in combination; memory acquisition remained intact for these modalities. Blocking the memory retrieval of the reinforced olfactory cues did not impair the conditioned response to the unreinforced visual cues, or vice versa, in contrast to canonical memory-retrieval processing in sensory preconditioning. In addition, these conditioned responses could be abolished by blocking the memory retrieval of the two modalities simultaneously. In sum, our results indicate that a conditioned response to an unreinforced cue in cross-modal sensory preconditioning can be recalled through parallel pathways.
Ceponiene, R; Westerfield, M; Torki, M; Townsend, J
2008-06-18
Major accounts of aging implicate changes in the processing of external stimulus information. Little is known about differential effects of auditory and visual sensory aging, and the mechanisms of sensory aging are still poorly understood. Using event-related potentials (ERPs) elicited by unattended stimuli in younger (M=25.5 yrs) and older (M=71.3 yrs) subjects, this study examined mechanisms of sensory aging under minimized attention conditions. Auditory and visual modalities were examined to address the modality-specificity vs. generality of sensory aging. Between-modality differences were robust. The earlier-latency responses (P1, N1) were unaffected in the auditory modality but were diminished in the visual modality. The auditory N2 and early visual N2 were diminished. Two similarities between the modalities were an age-related enhancement in the late P2 range and a positive correlation between behavior and the early N2, the latter suggesting that N2 may reflect long-latency inhibition of irrelevant stimuli. Since there is no evidence for salient differences in neurobiological aging between the two sensory regions, the observed between-modality differences are best explained by the differential reliance of the auditory and visual systems on attention. Visual sensory processing relies on facilitation by visuo-spatial attention, the withdrawal of which appears to be more disadvantageous in older populations. In contrast, auditory processing is equipped with powerful inhibitory capacities. However, when the whole auditory modality is unattended, thalamo-cortical gating deficits may not manifest in the elderly. In contrast, ERP indices of longer-latency, stimulus-level inhibitory modulation appear to diminish with age.
Westman, A S; Stuve, M
2001-04-01
Three studies explored whether young adults' preference for using a sense modality, e.g., hearing, correlated with the presence or clarity of that modality's attributes in earliest memories from childhood, elementary school, or high school. In Study 1, 75 graduates or seniors in fine arts, fashion merchandising, music, conducting, or dance showed no greater frequency or clarity of any modality's sensory attributes. In Study 2, 213 beginning university students' ratings of the current importance of activities emphasizing a sense modality correlated with the sensory contents of recollections only for smell and taste. In Study 3, 102 beginning students' ratings of current enjoyment in using a sense modality correlated with the sensory contents of recollections for every modality except vision.
Seeing touch is correlated with content-specific activity in primary somatosensory cortex.
Meyer, Kaspar; Kaplan, Jonas T; Essex, Ryan; Damasio, Hanna; Damasio, Antonio
2011-09-01
There is increasing evidence to suggest that primary sensory cortices can become active in the absence of external stimulation in their respective modalities. This occurs, for example, when stimuli processed via one sensory modality imply features characteristic of a different modality; for instance, visual stimuli that imply touch have been observed to activate the primary somatosensory cortex (SI). In the present study, we addressed the question of whether such cross-modal activations are content specific. To this end, we investigated neural activity in the primary somatosensory cortex of subjects who observed human hands engaged in the haptic exploration of different everyday objects. Using multivariate pattern analysis of functional magnetic resonance imaging data, we were able to predict, based exclusively on the activity pattern in SI, which of several objects a subject saw being explored. Along with previous studies that found similar evidence for other modalities, our results suggest that primary sensory cortices represent information relevant for their modality even when this information enters the brain via a different sensory system.
Erdogan, Goker; Yildirim, Ilker; Jacobs, Robert A.
2015-01-01
People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models—that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model’s percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects’ ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception. PMID:26554704
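The third component hypothesized in this abstract, an inference algorithm that inverts sensory-specific forward models to recover a modality-independent representation, can be illustrated with a toy discrete example. Everything below is hypothetical: the three shape hypotheses, the uniform prior, and the likelihood values are illustrative stand-ins, not the paper's probabilistic grammar or its graphics and hand simulators.

```python
# Toy Bayesian inversion: infer a modality-independent shape hypothesis
# from visual and/or haptic evidence (all numbers are made up).
shapes = ["cube", "cylinder", "sphere"]
prior = {s: 1 / 3 for s in shapes}                       # uniform prior
p_vis = {"cube": 0.7, "cylinder": 0.2, "sphere": 0.1}    # p(visual signal | shape)
p_hap = {"cube": 0.6, "cylinder": 0.3, "sphere": 0.1}    # p(haptic signal | shape)

def posterior(likelihoods):
    """Multiply the prior by each modality's likelihood, then normalize."""
    unnorm = {s: prior[s] for s in shapes}
    for lik in likelihoods:
        for s in shapes:
            unnorm[s] *= lik[s]
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

print(posterior([p_vis]))          # vision alone
print(posterior([p_hap]))          # touch alone
print(posterior([p_vis, p_hap]))   # both modalities: sharper posterior on "cube"
```

Because both likelihoods favor the same hypothesis, combining them sharpens the posterior, and the same latent representation is inferred whichever modality supplies the evidence, mirroring the modality invariance the model exhibits.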
Do early sensory cortices integrate cross-modal information?
Kayser, Christoph; Logothetis, Nikos K
2007-09-01
Our different senses provide complementary evidence about the environment, and their interaction often aids behavioral performance or alters the quality of the sensory percept. A traditional view defers the merging of sensory information to higher association cortices and posits that a large part of the brain can be reduced to a collection of unisensory systems that can be studied in isolation. Recent studies, however, challenge this view and suggest that cross-modal interactions can already occur in areas hitherto regarded as unisensory. We review results from functional imaging and electrophysiology exemplifying cross-modal interactions that occur early during the evoked response and at the earliest stages of sensory cortical processing. Although anatomical studies have revealed several potential origins of these cross-modal influences, there is as yet no clear relation between particular functional observations and specific anatomical connections. In addition, our view of sensory integration at the neuronal level has been shaped by many studies on subcortical model systems of sensory integration; yet the patterns of cross-modal interaction in cortex deviate from these model systems in several ways. Consequently, future studies on cortical sensory integration must move beyond the descriptive level and incorporate cross-modal influences into models of the organization of sensory processing. Only then will we be able to determine whether early cross-modal interactions truly merit the label of sensory integration, and how they increase a sensory system's ability to scrutinize its environment and ultimately aid behavior.
Frisoli, Antonio; Solazzi, Massimiliano; Reiner, Miriam; Bergamasco, Massimo
2011-06-30
The aim of this study was to understand the integration of the cutaneous and kinesthetic sensory modalities in the haptic perception of shape orientation. A dedicated robotic apparatus was employed to simulate the exploration of virtual surfaces by active touch with two fingers, under kinesthetic-only, cutaneous-only, and combined sensory feedback. The cutaneous feedback displayed the local surface orientation at the contact point through a small plate indenting the fingerpad at contact. A psychophysical test based on signal detection theory (SDT) was conducted on 6 subjects to assess the discrimination threshold for the angle between two parallel surfaces, under the three sensory modalities and two shape sizes. Results show that the cutaneous modality is not affected by shape size, whereas kinesthetic performance decreases with smaller sizes. Cutaneous and kinesthetic sensory cues are integrated in accordance with a Bayesian model, such that the combined sensory stimulation always performs better than either modality alone. Copyright © 2010 Elsevier Inc. All rights reserved.
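The Bayesian integration result summarized above, where the combined condition always outperforms either cue alone, is the standard maximum-likelihood cue-combination rule: each cue is weighted by its reliability (inverse variance), and the fused variance falls below that of the better cue. A minimal sketch, with made-up angle estimates and variances:

```python
def combine_cues(est_a, var_a, est_b, var_b):
    """Reliability-weighted (maximum-likelihood) fusion of two sensory cues."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)  # weight of cue A
    est = w_a * est_a + (1 - w_a) * est_b        # fused estimate
    var = (var_a * var_b) / (var_a + var_b)      # fused variance (always smaller)
    return est, var

# Hypothetical cutaneous and kinesthetic estimates of a surface angle (degrees)
angle, variance = combine_cues(est_a=10.0, var_a=4.0, est_b=14.0, var_b=4.0)
print(angle, variance)  # 12.0 2.0: midway between equally reliable cues, half the variance
```

The fused-variance term is why combined stimulation cannot do worse than the single best cue under this model, which is the qualitative pattern the study reports.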
Short-term memory for event duration: modality specificity and goal dependency.
Takahashi, Kohske; Watanabe, Katsumi
2012-11-01
Time perception is involved in various cognitive functions. This study investigated the characteristics of short-term memory for event duration by examining how the length of the retention period affects inter- and intramodal duration judgment. On each trial, a sample stimulus was followed by a comparison stimulus after a variable delay period (0.5-5 s). The sample and comparison stimuli were presented in the visual or auditory modality. The participants judged whether the comparison stimulus was longer or shorter than the sample stimulus. The distortion pattern of subjective duration during the delay period depended on the sensory modality of the comparison stimulus but was not affected by that of the sample stimulus. When the comparison stimulus was presented visually, the retained duration of the sample stimulus shortened as the delay period increased. In contrast, when the comparison stimulus was presented in the auditory modality, the delay period had little to no effect on the retained duration. Furthermore, when the participants did not know the sensory modality of the comparison stimulus beforehand, the effect of the delay period disappeared. These results suggest that the memory process for event duration is specific to sensory modality and that its performance depends on the sensory modality in which the retained duration will subsequently be used.
Paulk, Angelique C.; Zhou, Yanqiong; Stratton, Peter; Liu, Li
2013-01-01
Neural networks in vertebrates exhibit endogenous oscillations that have been associated with functions ranging from sensory processing to locomotion. It remains unclear whether oscillations may play a similar role in the insect brain. We describe a novel “whole brain” readout for Drosophila melanogaster using a simple multichannel recording preparation to study electrical activity across the brain of flies exposed to different sensory stimuli. We recorded local field potential (LFP) activity from >2,000 registered recording sites across the fly brain in >200 wild-type and transgenic animals to uncover specific LFP frequency bands that correlate with: 1) brain region; 2) sensory modality (olfactory, visual, or mechanosensory); and 3) activity in specific neural circuits. We found endogenous and stimulus-specific oscillations throughout the fly brain. Central (higher-order) brain regions exhibited sensory modality-specific increases in power within narrow frequency bands. Conversely, in sensory brain regions such as the optic or antennal lobes, LFP coherence, rather than power, best defined sensory responses across modalities. By transiently activating specific circuits via expression of TrpA1, we found that several circuits in the fly brain modulate LFP power and coherence across brain regions and frequency domains. However, activation of a neuromodulatory octopaminergic circuit specifically increased neuronal coherence in the optic lobes during visual stimulation while decreasing coherence in central brain regions. Our multichannel recording and brain registration approach provides an effective way to track activity simultaneously across the fly brain in vivo, allowing investigation of functional roles for oscillations in processing sensory stimuli and modulating behavior. PMID:23864378
Cortical GABAergic Interneurons in Cross-Modal Plasticity following Early Blindness
Desgent, Sébastien; Ptito, Maurice
2012-01-01
Early loss of a given sensory input in mammals causes anatomical and functional modifications in the brain via a process called cross-modal plasticity. In the past four decades, several animal models have illuminated our understanding of the biological substrates involved in cross-modal plasticity. Studies are now progressively starting to emphasise cell-specific mechanisms that may be responsible for this intermodal sensory plasticity. Inhibitory interneurons expressing γ-aminobutyric acid (GABA) play an important role in maintaining the appropriate dynamic range of cortical excitation, in critical periods of developmental plasticity, in receptive field refinement, and in the processing of sensory information reaching the cerebral cortex. The diverse interneuron population is very sensitive to sensory experience during development. GABAergic neurons are therefore well suited to act as a gate for mediating cross-modal plasticity. This paper attempts to highlight the links between early sensory deprivation, cortical GABAergic interneuron alterations, and cross-modal plasticity, to discuss its implications, and to provide insights for future research in the field. PMID:22720175
On the dependence of response inhibition processes on sensory modality.
Bodmer, Benjamin; Beste, Christian
2017-04-01
The ability to inhibit responses is a central sensorimotor function, but only recently has the importance of sensory processes for motor inhibition mechanisms come into research focus. In particular, it is unclear whether sensory modalities differ in their capacity to trigger response inhibition processes. On functional neuroanatomical grounds, strong differences may exist, for example, between the visual and the tactile modality. In the current study we examined which neurophysiological mechanisms and functional neuroanatomical networks are modulated during response inhibition. To this end, a Go/NoGo paradigm employing a novel combination of visual, tactile, and visuotactile stimuli was used. The data show that the tactile modality is more powerful than the visual modality in triggering response inhibition processes. However, the tactile modality loses its efficacy to trigger response inhibition when combined with the visual modality. This may be due to competitive mechanisms leading to a suppression of certain sensory stimuli at the response selection level. Variations in sensory modality specifically affected conflict monitoring processes during response inhibition by modulating activity in a frontoparietal network including the right inferior frontal gyrus, anterior cingulate cortex, and the temporoparietal junction. Attentional selection processes were not modulated. The results suggest that the functional neuroanatomical networks involved in response inhibition critically depend on the nature of the sensory input. Hum Brain Mapp 38:1941-1951, 2017. © 2017 Wiley Periodicals, Inc.
A sharp image or a sharp knife: norms for the modality-exclusivity of 774 concept-property items.
van Dantzig, Saskia; Cowell, Rosemary A; Zeelenberg, René; Pecher, Diane
2011-03-01
According to recent embodied cognition theories, mental concepts are represented by modality-specific sensory-motor systems. Much of the evidence for modality-specificity in conceptual processing comes from the property-verification task. When applying this and other tasks, it is important to select items based on their modality-exclusivity. We collected modality ratings for a set of 387 properties, each of which was paired with two different concepts, yielding a total of 774 concept-property items. For each item, participants rated the degree to which the property could be experienced through five perceptual modalities (vision, audition, touch, smell, and taste). Based on these ratings, we computed a measure of modality exclusivity, the degree to which a property is perceived exclusively through one sensory modality. In this paper, we briefly sketch the theoretical background of conceptual knowledge, discuss the use of the property-verification task in cognitive research, provide our norms and statistics, and validate the norms in a memory experiment. We conclude that our norms are important for researchers studying modality-specific effects in conceptual processing.
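The exclusivity measure described above can be sketched as follows. A common operationalization in this norming literature is the range of a property's mean modality ratings divided by their sum; both the formula and the ratings below are illustrative assumptions, and the published norms should be consulted for the exact measure and values.

```python
def modality_exclusivity(ratings):
    """Range of the modality ratings divided by their sum.

    Returns 1.0 for a property experienced through one modality only
    and 0.0 for a property rated equally across all modalities.
    """
    vals = list(ratings.values())
    total = sum(vals)
    if total == 0:
        return 0.0
    return (max(vals) - min(vals)) / total

# Hypothetical mean ratings (0-5) over vision, audition, touch, smell, taste
abrasive = {"vision": 1.2, "audition": 0.4, "touch": 4.8, "smell": 0.1, "taste": 0.1}
fresh    = {"vision": 3.0, "audition": 1.0, "touch": 2.5, "smell": 3.2, "taste": 3.1}
print(modality_exclusivity(abrasive))  # high: experienced mostly through touch
print(modality_exclusivity(fresh))     # low: genuinely multimodal
```

Items with high exclusivity scores are the ones suited to property-verification studies of modality-specific effects, since low-exclusivity properties confound modalities.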
Sight and sound converge to form modality-invariant representations in temporo-parietal cortex
Man, Kingson; Kaplan, Jonas T.; Damasio, Antonio; Meyer, Kaspar
2013-01-01
People can identify objects in the environment with remarkable accuracy, irrespective of the sensory modality they use to perceive them. This suggests that information from different sensory channels converges somewhere in the brain to form modality-invariant representations, i.e., representations that reflect an object independently of the modality through which it has been apprehended. In this functional magnetic resonance imaging study of human subjects, we first identified brain areas that responded to both visual and auditory stimuli and then used crossmodal multivariate pattern analysis to evaluate the neural representations in these regions for content-specificity (i.e., do different objects evoke different representations?) and modality-invariance (i.e., do the sight and the sound of the same object evoke a similar representation?). While several areas became activated in response to both auditory and visual stimulation, only the neural patterns recorded in a region around the posterior part of the superior temporal sulcus displayed both content-specificity and modality-invariance. This region thus appears to play an important role in our ability to recognize objects in our surroundings through multiple sensory channels and to process them at a supra-modal (i.e., conceptual) level. PMID:23175818
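The crossmodal pattern-analysis logic used in this study, training a decoder on activity patterns evoked through one modality and testing it on patterns evoked through the other, can be sketched on synthetic data. The voxel patterns, object labels, and the simple nearest-class-mean decoder below are illustrative assumptions, not the study's data or classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: multivoxel patterns for two objects ("bell", "hammer"),
# each perceived visually and auditorily. In a modality-invariant region,
# a decoder trained on one modality should transfer to the other.
n_trials, n_voxels = 40, 50
templates = rng.normal(size=(2, n_voxels))   # shared, modality-invariant object codes

def simulate(obj, n):
    # trial-by-trial patterns = shared object code + independent noise
    return templates[obj] + rng.normal(scale=0.8, size=(n, n_voxels))

X_vis = np.vstack([simulate(0, n_trials), simulate(1, n_trials)])  # visual trials
X_aud = np.vstack([simulate(0, n_trials), simulate(1, n_trials)])  # auditory trials
y = np.array([0] * n_trials + [1] * n_trials)

# Minimal decoder: nearest class mean, fit on visual data, tested on auditory data
class_means = np.stack([X_vis[y == c].mean(axis=0) for c in (0, 1)])
dists = ((X_aud[:, None, :] - class_means[None, :, :]) ** 2).sum(axis=2)
pred = np.argmin(dists, axis=1)
crossmodal_acc = (pred == y).mean()
print(f"cross-modal decoding accuracy: {crossmodal_acc:.2f}")
```

Above-chance transfer accuracy is the signature of a representation that is both content-specific and modality-invariant; in a purely modality-specific region the visual and auditory patterns would share no object code, and transfer would sit at chance.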
Shapes, scents and sounds: quantifying the full multi-sensory basis of conceptual knowledge.
Hoffman, Paul; Lambon Ralph, Matthew A
2013-01-01
Contemporary neuroscience theories assume that concepts are formed through experience in multiple sensory-motor modalities. Quantifying the contribution of each modality to different object categories is critical to understanding the structure of the conceptual system and to explaining category-specific knowledge deficits. Verbal feature listing is typically used to elicit this information but has a number of drawbacks: sensory knowledge often cannot easily be translated into verbal features and many features are experienced in multiple modalities. Here, we employed a more direct approach in which subjects rated their knowledge of objects in each sensory-motor modality separately. Compared with these ratings, feature listing over-estimated the importance of visual form and functional knowledge and under-estimated the contributions of other sensory channels. An item's sensory rating proved to be a better predictor of lexical-semantic processing speed than the number of features it possessed, suggesting that ratings better capture the overall quantity of sensory information associated with a concept. Finally, the richer, multi-modal rating data not only replicated the sensory-functional distinction between animals and non-living things but also revealed novel distinctions between different types of artefact. Hierarchical cluster analyses indicated that mechanical devices (e.g., vehicles) were distinct from other non-living objects because they had strong sound and motion characteristics, making them more similar to animals in this respect. Taken together, the ratings align with neuroscience evidence in suggesting that a number of distinct sensory processing channels make important contributions to object knowledge. Multi-modal ratings for 160 objects are provided as supplementary materials. Copyright © 2012 Elsevier Ltd. All rights reserved.
Pa, Judy; Wilson, Stephen M; Pickell, Herbert; Bellugi, Ursula; Hickok, Gregory
2008-12-01
Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.
Foss-Feig, Jennifer H.; Heacock, Jessica L.; Cascio, Carissa J.
2011-01-01
Autism spectrum disorders (ASD) are often associated with aberrant responses to sensory stimuli, which are thought to contribute to the social, communication, and repetitive behavior deficits that define ASD. However, there are few studies that separate aberrant sensory responses by individual sensory modality to assess modality-specific associations between sensory features and core symptoms. Differences in response to tactile stimuli are prevalent in ASD, and tactile contact early in infancy is a foundation for the development of social and communication skills affected by ASD. We assessed the association between three aberrant patterns of tactile responsiveness (hyper-responsiveness, hypo-responsiveness, sensory seeking) and core symptoms of ASD. Both sensory and core features were measured with converging methods including both parent-report and direct observation. Our results demonstrate that for the tactile modality, sensory hypo-responsiveness correlates strongly with increased social and communication impairments, and to a lesser degree, repetitive behaviors. Sensory seeking was found to correlate strongly with social impairment, nonverbal communication impairment, and repetitive behaviors. Surprisingly, tactile hyper-responsiveness did not significantly correlate with any core features of ASD. This differential association between specific tactile processing patterns and core features provides an important step in defining the significance of sensory symptoms in ASD, and may be useful in the development of sensory-based approaches for early detection and intervention. PMID:22059092
Chi, Yukai; Yue, Zhenzhu; Liu, Yupin; Mo, Lei; Chen, Qi
2014-08-01
There are ongoing debates on whether object concepts are coded as supramodal identity-based or modality-specific representations in the human brain. In this fMRI study, we adopted a cross-modal "prime-neutral cue-target" semantic priming paradigm, in which the prime-target relationship was manipulated along both the identity and the modality dimensions. The prime and the target could refer to either the same or different semantic identities, and could be delivered via either the same or different sensory modalities. By calculating the main effects and interactions of this 2 (identity cue validity: "Identity_Cued" vs. "Identity_Uncued") × 2 (modality cue validity: "Modality_Cued" vs. "Modality_Uncued") factorial design, we aimed at dissociating three neural networks involved in creating novel identity-specific representations independent of sensory modality, in creating modality-specific representations independent of semantic identity, and in evaluating changes of an object along both the identity and the modality dimensions, respectively. Our results suggested that bilateral lateral occipital cortex was involved in creating a new supramodal semantic representation irrespective of the input modality; that left dorsal premotor cortex and left intraparietal sulcus were involved in creating a new modality-specific representation irrespective of its semantic identity; and that bilateral superior temporal sulcus was involved in creating a representation when the identity and modality properties were both cued or both uncued. In addition, right inferior frontal gyrus showed enhanced neural activity only when both the identity and the modality of the target were new, indicating its functional role in novelty detection. Copyright © 2014 Wiley Periodicals, Inc.
Which Aspects of Visual Attention Are Changed by Deafness? The Case of the Attentional Network Test
ERIC Educational Resources Information Center
Dye, Matthew W. G.; Baril, Dara E.; Bavelier, Daphne
2007-01-01
The loss of one sensory modality can lead to a reorganization of the other intact sensory modalities. In the case of individuals who are born profoundly deaf, there is growing evidence of changes in visual functions. Specifically, deaf individuals demonstrate enhanced visual processing in the periphery, and in particular enhanced peripheral visual…
Amodal processing in human prefrontal cortex.
Tamber-Rosenau, Benjamin J; Dux, Paul E; Tombu, Michael N; Asplund, Christopher L; Marois, René
2013-07-10
Information enters the cortex via modality-specific sensory regions, whereas actions are produced by modality-specific motor regions. Intervening central stages of information processing map sensation to behavior. Humans perform this central processing in a flexible, abstract manner such that sensory information in any modality can lead to response via any motor system. Cognitive theories account for such flexible behavior by positing amodal central information processing (e.g., "central executive," Baddeley and Hitch, 1974; "supervisory attentional system," Norman and Shallice, 1986; "response selection bottleneck," Pashler, 1994). However, the extent to which brain regions embodying central mechanisms of information processing are amodal remains unclear. Here we apply multivariate pattern analysis to functional magnetic resonance imaging (fMRI) data to compare response selection, a cognitive process widely believed to recruit an amodal central resource across sensory and motor modalities. We show that most frontal and parietal cortical areas known to activate across a wide variety of tasks code modality, casting doubt on the notion that these regions embody a central processor devoid of modality representation. Importantly, regions of anterior insula and dorsolateral prefrontal cortex consistently failed to code modality across four experiments. However, these areas code at least one other task dimension, process (instantiated as response selection vs response execution), ensuring that failure to find coding of modality is not driven by insensitivity of multivariate pattern analysis in these regions. We conclude that abstract encoding of information modality is primarily a property of subregions of the prefrontal cortex.
Sensory and Repetitive Behaviors among Children with Autism Spectrum Disorder at Home
Kirby, Anne V.; Boyd, Brian A.; Williams, Kathryn; Faldowski, Richard A.; Baranek, Grace T.
2017-01-01
Atypical sensory and repetitive behaviors are defining features of autism spectrum disorder (ASD) and are thought to be influenced by environmental factors; however, there is a lack of naturalistic research exploring contexts surrounding these behaviors. The current study involved video recording observations of 32 children with ASD (2–12 years of age) engaging in sensory and repetitive behaviors during home activities. Behavioral coding was used to determine what activity contexts, sensory modalities, and stimulus characteristics were associated with specific behavior types: hyperresponsive, hyporesponsive, sensory seeking, and repetitive/stereotypic. Results indicated that hyperresponsive behaviors were most associated with activities of daily living and family-initiated stimuli, whereas sensory seeking behaviors were associated with free play activities and child-initiated stimuli. Behaviors associated with multiple sensory modalities simultaneously were common, emphasizing the multi-sensory nature of children’s behaviors in natural contexts. Implications for future research more explicitly considering context are discussed. PMID:27091950
Common Sense in Choice: The Effect of Sensory Modality on Neural Value Representations.
Shuster, Anastasia; Levy, Dino J
2018-01-01
Although it is well established that the ventromedial prefrontal cortex (vmPFC) represents value using a common currency across categories of rewards, it is unknown whether the vmPFC represents value irrespective of the sensory modality in which alternatives are presented. In the current study, male and female human subjects completed a decision-making task while their neural activity was recorded using functional magnetic resonance imaging. On each trial, subjects chose between a safe alternative and a lottery, which was presented visually or aurally. A univariate conjunction analysis revealed that the anterior portion of the vmPFC tracks subjective value (SV) irrespective of the sensory modality. Using a novel cross-modality multivariate classifier, we were able to decode auditory value based on visual trials and vice versa. In addition, we found that the visual and auditory sensory cortices, which were identified using functional localizers, are also sensitive to the value of stimuli, albeit in a modality-specific manner. Whereas both primary and higher-order auditory cortices represented auditory SV (aSV), only a higher-order visual area represented visual SV (vSV). These findings expand our understanding of the common currency network of the brain and shed a new light on the interplay between sensory and value information processing.
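The cross-modality decoding logic described above can be sketched with a toy nearest-centroid classifier trained on patterns from one modality and tested on the other (the two-feature patterns, class prototypes, and noise level below are all invented; the study used fMRI voxel patterns and, presumably, a more elaborate classifier):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-voxel response patterns. Assume the value code is shared
# across modalities, so high- and low-value trials cluster around the same
# prototypes whether the alternative was presented visually or aurally.
high_proto, low_proto = np.array([2.0, 0.0]), np.array([0.0, 2.0])

def make_trials(n, proto):
    """Draw n noisy trials around a prototype pattern."""
    return proto + 0.3 * rng.standard_normal((n, 2))

# Train on visual trials: estimate one centroid per value class.
vis_high, vis_low = make_trials(20, high_proto), make_trials(20, low_proto)
centroids = np.stack([vis_high.mean(0), vis_low.mean(0)])

# Test on auditory trials (cross-modal decoding).
aud = np.vstack([make_trials(20, high_proto), make_trials(20, low_proto)])
labels = np.array([0] * 20 + [1] * 20)  # 0 = high value, 1 = low value

# Assign each auditory trial to the nearest visual-trained centroid.
dists = np.linalg.norm(aud[:, None, :] - centroids[None, :, :], axis=2)
accuracy = (dists.argmin(axis=1) == labels).mean()
print(accuracy)
```

Above-chance accuracy in this train-on-one-modality, test-on-the-other scheme is what licenses the inference that the region carries a modality-independent value code.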
Biasing the brain's attentional set: I. cue driven deployments of intersensory selective attention.
Foxe, John J; Simpson, Gregory V; Ahlfors, Seppo P; Saron, Clifford D
2005-10-01
Brain activity associated with directing attention to one of two possible sensory modalities was examined using high-density mapping of human event-related potentials. The deployment of selective attention was based on visually presented symbolic cue-words instructing subjects, on a trial-by-trial basis, which sensory modality to attend. We measured the spatio-temporal pattern of activation in the approximately 1 second period between the cue-instruction and a subsequent compound auditory-visual imperative stimulus. This allowed us to assess the flow of processing across brain regions involved in deploying and sustaining inter-sensory selective attention, prior to the actual selective processing of the compound audio-visual target stimulus. Activity over frontal and parietal areas showed sensory-specific increases in activation during the early part of the anticipatory period (~230 ms), probably representing the activation of fronto-parietal attentional deployment systems for top-down control of attention. In the later period preceding the arrival of the "to-be-attended" stimulus, sustained differential activity was seen over fronto-central regions and parieto-occipital regions, suggesting the maintenance of sensory-specific biased attentional states that would allow for subsequent selective processing. Although there was clear sensory biasing in this late sustained period, it was also clear that both sensory systems were being prepared during the cue-target period. These late sensory-specific biasing effects were also accompanied by sustained activations over frontal cortices that showed both common and sensory-specific activation patterns, suggesting that maintenance of the biased state includes top-down inputs from generators in frontal cortices, some of which are sensory-specific regions. These data support extensive interactions between sensory, parietal and frontal regions during processing of cue information, deployment of attention, and maintenance of the focus of attention in anticipation of impending attentionally relevant input.
Striem-Amit, Ella; Cohen, Laurent; Dehaene, Stanislas; Amedi, Amir
2012-11-08
Using a visual-to-auditory sensory-substitution algorithm, congenitally fully blind adults were taught to read and recognize complex images using "soundscapes"--sounds topographically representing images. fMRI was used to examine key questions regarding the visual word form area (VWFA): its selectivity for letters over other visual categories without visual experience, its feature tolerance for reading in a novel sensory modality, and its plasticity for scripts learned in adulthood. The blind activated the VWFA specifically and selectively during the processing of letter soundscapes relative to both textures and visually complex object categories and relative to mental imagery and semantic-content controls. Further, VWFA recruitment for reading soundscapes emerged after 2 hr of training in a blind adult on a novel script. Therefore, the VWFA shows category selectivity regardless of input sensory modality, visual experience, and long-term familiarity or expertise with the script. The VWFA may perform a flexible task-specific rather than sensory-specific computation, possibly linking letter shapes to phonology. Copyright © 2012 Elsevier Inc. All rights reserved.
Brain correlates of automatic visual change detection.
Cléry, H; Andersson, F; Fonlupt, P; Gomot, M
2013-07-15
A number of studies support the presence of visual automatic detection of change, but little is known about the brain generators involved in such processing and about the modulation of brain activity according to the salience of the stimulus. The study presented here was designed to locate the brain activity elicited by unattended visual deviant and novel stimuli using fMRI. Seventeen adult participants were presented with a passive visual oddball sequence while performing a concurrent visual task. Variations in BOLD signal were observed in the modality-specific sensory cortex, but also in non-specific areas involved in preattentional processing of changing events. A degree-of-deviance effect was observed, since novel stimuli elicited more activity in the sensory occipital regions and at the medial frontal site than small changes. These findings could be compared to those obtained in the auditory modality and might suggest a "general" change detection process operating in several sensory modalities. Copyright © 2013 Elsevier Inc. All rights reserved.
Is Attentional Resource Allocation Across Sensory Modalities Task-Dependent?
Wahn, Basil; König, Peter
2017-01-01
Human information processing is limited by attentional resources. That is, via attentional mechanisms, humans select a limited amount of sensory input to process while other sensory input is neglected. In multisensory research, a matter of ongoing debate is whether there are distinct pools of attentional resources for each sensory modality or whether attentional resources are shared across sensory modalities. Recent studies have suggested that attentional resource allocation across sensory modalities is in part task-dependent. That is, the recruitment of attentional resources across the sensory modalities depends on whether processing involves object-based attention (e.g., the discrimination of stimulus attributes) or spatial attention (e.g., the localization of stimuli). In the present paper, we review findings in multisensory research related to this view. For the visual and auditory sensory modalities, findings suggest that distinct resources are recruited when humans perform object-based attention tasks, whereas for the visual and tactile sensory modalities, partially shared resources are recruited. If object-based attention tasks are time-critical, shared resources are recruited across the sensory modalities. When humans perform an object-based attention task in combination with a spatial attention task, partly shared resources are recruited across the sensory modalities as well. Conversely, for spatial attention tasks, attentional processing consistently involves shared attentional resources across the sensory modalities. Generally, findings suggest that the attentional system flexibly allocates attentional resources depending on task demands. We propose that such flexibility reflects a large-scale optimization strategy that minimizes the brain's costly resource expenditures and simultaneously maximizes capability to process currently relevant information.
Dissociating the Representation of Action- and Sound-Related Concepts in Middle Temporal Cortex
ERIC Educational Resources Information Center
Kiefer, Markus; Trumpp, Natalie; Herrnberger, Barbel; Sim, Eun-Jin; Hoenig, Klaus; Pulvermuller, Friedemann
2012-01-01
Modality-specific models of conceptual memory propose close links between concepts and the sensory-motor systems. Neuroimaging studies found, in different subject groups, that action-related and sound-related concepts activated different parts of posterior middle temporal gyrus (pMTG), suggesting a modality-specific representation of conceptual…
Jacquin-Courtois, S; Rode, G; Pavani, F; O'Shea, J; Giard, M H; Boisson, D; Rossetti, Y
2010-03-01
Unilateral neglect is a disabling syndrome frequently observed following right hemisphere brain damage. Symptoms range from visuo-motor impairments through to deficient visuo-spatial imagery, but impairment can also affect the auditory modality. A short period of adaptation to a rightward prismatic shift of the visual field is known to improve a wide range of hemispatial neglect symptoms, including visuo-manual tasks, mental imagery, postural imbalance, visuo-verbal measures and number bisection. The aim of the present study was to assess whether the beneficial effects of prism adaptation may generalize to auditory manifestations of neglect. Auditory extinction, whose clinical manifestations are independent of the sensory modalities engaged in visuo-manual adaptation, was examined in neglect patients before and after prism adaptation. Two separate groups of neglect patients (all of whom exhibited left auditory extinction) underwent prism adaptation: one group (n = 6) received a classical prism treatment ('Prism' group), the other group (n = 6) was submitted to the same procedure, but wore neutral glasses creating no optical shift (placebo 'Control' group). Auditory extinction was assessed by means of a dichotic listening task performed three times: prior to prism exposure (pre-test), upon prism removal (0 h post-test) and 2 h later (2 h post-test). The total number of correct responses, the lateralization index (detection asymmetry between the two ears) and the number of left-right fusion errors were analysed. Our results demonstrate that prism adaptation can improve left auditory extinction, thus revealing transfer of benefit to a sensory modality that is orthogonal to the visual, proprioceptive and motor modalities directly implicated in the visuo-motor adaptive process. The observed benefit was specific to the detection asymmetry between the two ears and did not affect the total number of responses. This indicates a specific effect of prism adaptation on lateralized processes rather than on general arousal. Our results suggest that the effects of prism adaptation can extend to unexposed sensory systems. The bottom-up approach of visuo-motor adaptation appears to interact with higher order brain functions related to multisensory integration and can have beneficial effects on sensory processing in different modalities. These findings should stimulate the development of therapeutic approaches aimed at bypassing the affected sensory processing modality by adapting other sensory modalities.
Oxytocin mediates early experience-dependent cross-modal plasticity in the sensory cortices.
Zheng, Jing-Jing; Li, Shu-Jing; Zhang, Xiao-Di; Miao, Wan-Ying; Zhang, Dinghong; Yao, Haishan; Yu, Xiang
2014-03-01
Sensory experience is critical to development and plasticity of neural circuits. Here we report a new form of plasticity in neonatal mice, where early sensory experience cross-modally regulates development of all sensory cortices via oxytocin signaling. Unimodal sensory deprivation from birth through whisker deprivation or dark rearing reduced excitatory synaptic transmission in the correspondent sensory cortex and cross-modally in other sensory cortices. Sensory experience regulated synthesis and secretion of the neuropeptide oxytocin as well as its level in the cortex. Both in vivo oxytocin injection and increased sensory experience elevated excitatory synaptic transmission in multiple sensory cortices and significantly rescued the effects of sensory deprivation. Together, these results identify a new function for oxytocin in promoting cross-modal, experience-dependent cortical development. This link between sensory experience and oxytocin is particularly relevant to autism, where hypersensitivity or hyposensitivity to sensory inputs is prevalent and oxytocin is a hotly debated potential therapy.
Cappe, Céline; Morel, Anne; Barone, Pascal
2009-01-01
Multisensory and sensorimotor integrations are usually considered to occur in superior colliculus and cerebral cortex, but few studies have proposed the thalamus as being involved in these integrative processes. We investigated whether the organization of the thalamocortical (TC) systems for different modalities partly overlap, representing an anatomical substrate for multisensory and sensorimotor interplay in the thalamus. In 2 macaque monkeys, 6 neuroanatomical tracers were injected in the rostral and caudal auditory cortex, posterior parietal cortex (PE/PEa in area 5), and dorsal and ventral premotor cortical areas (PMd, PMv), demonstrating the existence of overlapping territories of thalamic projections to areas of different modalities (sensory and motor). TC projections, distinct from the ones arising from specific unimodal sensory nuclei, were observed from motor thalamus to PE/PEa or auditory cortex and from sensory thalamus to PMd/PMv. The central lateral nucleus and the mediodorsal nucleus project to all injected areas, but the most significant overlap across modalities was found in the medial pulvinar nucleus. The present results demonstrate the presence of thalamic territories integrating different sensory modalities with motor attributes. Based on the divergent/convergent pattern of TC and corticothalamic projections, 4 distinct mechanisms of multisensory and sensorimotor interplay are proposed. PMID:19150924
Ronald, Kelly L; Sesterhenn, Timothy M; Fernandez-Juricic, Esteban; Lucas, Jeffrey R
2017-11-01
Many animals communicate with multimodal signals. While we have an understanding of multimodal signal production, we know relatively less about receiver filtering of multimodal signals and whether filtering capacity in one modality influences filtering in a second modality. Most multimodal signals contain a temporal element, such as change in frequency over time or a dynamic visual display. We examined the relationship in temporal resolution across two modalities to test whether females are (1) sensory 'specialists', where a trade-off exists between the sensory modalities, (2) sensory 'generalists', where a positive relationship exists between the modalities, or (3) whether no relationship exists between modalities. We used female brown-headed cowbirds (Molothrus ater) to investigate this question as males court females with an audiovisual display. We found a significant positive relationship between female visual and auditory temporal resolution, suggesting that females are sensory 'generalists'. Females appear to resolve information well across multiple modalities, which may select for males that signal their quality similarly across modalities.
Measurement and Research Tools.
ERIC Educational Resources Information Center
1997
This document contains four papers from a symposium on measurement and research tools for human resource development (HRD). "The 'Best Fit' Training: Measure Employee Learning Style Strengths" (Daniel L. Parry) discusses a study of the physiological aspect of sensory intake known as modality, more specifically, modality as measured by…
Fujisaki, Waka; Nishida, Shin'ya
2010-08-07
The human brain processes different aspects of the surrounding environment through multiple sensory modalities, and each modality can be subdivided into multiple attribute-specific channels. When the brain rebinds sensory content information ('what') across different channels, temporal coincidence ('when') along with spatial coincidence ('where') provides a critical clue. It however remains unknown whether neural mechanisms for binding synchronous attributes are specific to each attribute combination, or universal and central. In human psychophysical experiments, we examined how combinations of visual, auditory and tactile attributes affect the temporal frequency limit of synchrony-based binding. The results indicated that the upper limits of cross-attribute binding were lower than those of within-attribute binding, and surprisingly similar for any combination of visual, auditory and tactile attributes (2-3 Hz). They are unlikely to be the limits for judging synchrony, since the temporal limit of a cross-attribute synchrony judgement was higher and varied with the modality combination (4-9 Hz). These findings suggest that cross-attribute temporal binding is mediated by a slow central process that combines separately processed 'what' and 'when' properties of a single event. While the synchrony performance reflects temporal bottlenecks existing in 'when' processing, the binding performance reflects the central temporal limit of integrating 'when' and 'what' properties.
A cross-modal investigation of the neural substrates for ongoing cognition
Wang, Megan; He, Biyu J.
2014-01-01
What neural mechanisms underlie the seamless flow of our waking consciousness? A necessary albeit insufficient condition for such neural mechanisms is that they should be consistently modulated across time were a segment of the conscious stream to be repeated twice. In this study, we experimentally manipulated the content of a story followed by subjects during functional magnetic resonance imaging (fMRI) independently of the modality of sensory input (as visual text or auditory speech) as well as attentional focus. We then extracted brain activity patterns consistently modulated across subjects by the evolving content of the story regardless of whether it was presented visually or auditorily. Specifically, in one experiment we presented the same story to different subjects via either auditory or visual modality. In a second experiment, we presented two different stories simultaneously, one auditorily, one visually, and manipulated the subjects' attentional focus. This experimental design allowed us to dissociate brain activities underlying modality-specific sensory processing from modality-independent story processing. We uncovered a network of brain regions consistently modulated by the evolving content of a story regardless of the sensory modality used for stimulus input, including the superior temporal sulcus/gyrus (STS/STG), the inferior frontal gyrus (IFG), the posterior cingulate cortex (PCC), the medial frontal cortex (MFC), the temporal pole (TP), and the temporoparietal junction (TPJ). Many of these regions have previously been implicated in semantic processing. Interestingly, different stories elicited similar brain activity patterns, but with subtle differences potentially attributable to varying degrees of emotional valence and self-relevance. PMID:25206347
Aytemür, Ali; Almeida, Nathalia; Lee, Kwang-Hyuk
2017-02-01
Adaptation to delayed sensory feedback following an action produces a subjective time compression between the action and the feedback (temporal recalibration effect, TRE). TRE is important for sensory delay compensation to maintain a relationship between causally related events. It is unclear whether TRE is a sensory modality-specific phenomenon. In 3 experiments employing a sensorimotor synchronization task, we investigated this question using cathodal transcranial direct-current stimulation (tDCS). We found that cathodal tDCS over the visual cortex, and to a lesser extent over the auditory cortex, produced decreased visual TRE. However, both auditory and visual cortex tDCS did not produce any measurable effects on auditory TRE. Our study revealed the differing nature of TRE in the auditory and visual domains. Visual-motor TRE, which is more variable than auditory TRE, is a sensory modality-specific phenomenon, modulated by the auditory cortex. The robustness of auditory-motor TRE, unaffected by tDCS, suggests the dominance of the auditory system in temporal processing, by providing a frame of reference in the realignment of sensorimotor timing signals. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sensory modality of smoking cues modulates neural cue reactivity.
Yalachkov, Yavor; Kaiser, Jochen; Görres, Andreas; Seehaus, Arne; Naumer, Marcus J
2013-01-01
Behavioral experiments have demonstrated that the sensory modality of presentation modulates drug cue reactivity. The present study on nicotine addiction tested whether neural responses to smoking cues are modulated by the sensory modality of stimulus presentation. We measured brain activation using functional magnetic resonance imaging (fMRI) in 15 smokers and 15 nonsmokers while they viewed images of smoking paraphernalia and control objects and while they touched the same objects without seeing them. Haptically presented, smoking-related stimuli induced more pronounced neural cue reactivity than visual cues in the left dorsal striatum in smokers compared to nonsmokers. The severity of nicotine dependence correlated positively with the preference for haptically explored smoking cues in the left inferior parietal lobule/somatosensory cortex, right fusiform gyrus/inferior temporal cortex/cerebellum, hippocampus/parahippocampal gyrus, posterior cingulate cortex, and supplementary motor area. These observations are in line with the hypothesized role of the dorsal striatum for the expression of drug habits and the well-established concept of drug-related automatized schemata, since haptic perception is more closely linked to the corresponding object-specific action pattern than visual perception. Moreover, our findings demonstrate that with the growing severity of nicotine dependence, brain regions involved in object perception, memory, self-processing, and motor control exhibit an increasing preference for haptic over visual smoking cues. This difference was not found for control stimuli. Considering the sensory modality of the presented cues could serve to develop more reliable fMRI-specific biomarkers, more ecologically valid experimental designs, and more effective cue-exposure therapies of addiction.
Cortico-Cortical Connections of Primary Sensory Areas and Associated Symptoms in Migraine.
Hodkinson, Duncan J; Veggeberg, Rosanna; Kucyi, Aaron; van Dijk, Koene R A; Wilcox, Sophie L; Scrivani, Steven J; Burstein, Rami; Becerra, Lino; Borsook, David
2016-01-01
Migraine is a recurring, episodic neurological disorder characterized by headache, nausea, vomiting, and sensory disturbances. These events are thought to arise from the activation and sensitization of neurons along the trigemino-vascular pathway. From animal studies, it is known that thalamocortical projections play an important role in the transmission of nociceptive signals from the meninges to the cortex. However, little is currently known about the potential involvement of cortico-cortical feedback projections from higher-order multisensory areas and/or feedforward projections from principal primary sensory areas or subcortical structures. In a large cohort of human migraine patients (N = 40) and matched healthy control subjects (N = 40), we used resting-state intrinsic functional connectivity to examine the cortical networks associated with the three main sensory perceptual modalities of vision, audition, and somatosensation. Specifically, we sought to explore the complexity of the sensory networks as they converge and become functionally coupled in multimodal systems. We also compared self-reported retrospective migraine symptoms in the same patients, examining the prevalence of sensory symptoms across the different phases of the migraine cycle. Our results show widespread and persistent disturbances in the perceptions of multiple sensory modalities. Consistent with this observation, we discovered that primary sensory areas maintain local functional connectivity but express impaired long-range connections to higher-order association areas (including regions of the default mode and salience network). We speculate that cortico-cortical interactions are necessary for the integration of information within and across the sensory modalities and, thus, could play an important role in the initiation of migraine and/or the development of its associated symptoms.
Aging and response interference across sensory modalities.
Guerreiro, Maria J S; Adam, Jos J; Van Gerven, Pascal W M
2014-06-01
Advancing age is associated with decrements in selective attention. It was recently hypothesized that age-related differences in selective attention depend on sensory modality. The goal of the present study was to investigate the role of sensory modality in age-related vulnerability to distraction, using a response interference task. To this end, 16 younger (mean age = 23.1 years) and 24 older (mean age = 65.3 years) adults performed four response interference tasks, involving all combinations of visual and auditory targets and distractors. The results showed that response interference effects differ across sensory modalities, but not across age groups. These results indicate that sensory modality plays an important role in vulnerability to distraction, but not in age-related distractibility by irrelevant spatial information.
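The interference measure such a task yields is the reaction-time cost of an incongruent distractor relative to a congruent one, computed separately for each target/distractor modality pairing. A minimal sketch (all reaction times below are invented for illustration, not the study's data):

```python
# Hypothetical mean reaction times (ms) for congruent vs. incongruent
# distractor trials in each target/distractor modality combination.
rt = {
    ("visual", "visual"):     {"congruent": 520, "incongruent": 565},
    ("visual", "auditory"):   {"congruent": 525, "incongruent": 540},
    ("auditory", "visual"):   {"congruent": 610, "incongruent": 625},
    ("auditory", "auditory"): {"congruent": 605, "incongruent": 650},
}

# Response interference effect = incongruent RT - congruent RT.
interference = {combo: v["incongruent"] - v["congruent"] for combo, v in rt.items()}
for (target, distractor), effect in interference.items():
    print(f"target={target}, distractor={distractor}: {effect} ms")
```

Comparing these per-combination effects between age groups is what separates a modality-specific interference pattern from an age-related one.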
Properties of intermodal transfer after dual visuo- and auditory-motor adaptation.
Schmitz, Gerd; Bock, Otmar L
2017-10-01
Previous work documented that sensorimotor adaptation transfers between sensory modalities: When subjects adapt with one arm to a visuomotor distortion while responding to visual targets, they also appear to be adapted when they are subsequently tested with auditory targets. Vice versa, when they adapt to an auditory-motor distortion while pointing to auditory targets, they appear to be adapted when they are subsequently tested with visual targets. Therefore, it was concluded that visuomotor as well as auditory-motor adaptation use the same adaptation mechanism. Furthermore, it has been proposed that sensory information from the trained modality is weighted more heavily than sensory information from an untrained one, because transfer between sensory modalities is incomplete. The present study tested these hypotheses for dual arm adaptation. One arm adapted to an auditory-motor distortion and the other either to an oppositely directed auditory-motor or a visuomotor distortion. We found that both arms adapted significantly. However, compared to reference data on single arm adaptation, adaptation in the dominant arm was reduced, indicating interference from the non-dominant to the dominant arm. We further found that arm-specific aftereffects of adaptation, which reflect recalibration of sensorimotor transformation rules, were stronger or equally strong when targets were presented in the previously adapted compared to the non-adapted sensory modality, even when one arm adapted visually and the other auditorily. The findings are discussed with respect to a recently published schematic model on sensorimotor adaptation. Copyright © 2017 Elsevier B.V. All rights reserved.
Fujisaki, Waka; Nishida, Shin'ya
2010-01-01
The human brain processes different aspects of the surrounding environment through multiple sensory modalities, and each modality can be subdivided into multiple attribute-specific channels. When the brain rebinds sensory content information (‘what’) across different channels, temporal coincidence (‘when’) along with spatial coincidence (‘where’) provides a critical clue. It however remains unknown whether neural mechanisms for binding synchronous attributes are specific to each attribute combination, or universal and central. In human psychophysical experiments, we examined how combinations of visual, auditory and tactile attributes affect the temporal frequency limit of synchrony-based binding. The results indicated that the upper limits of cross-attribute binding were lower than those of within-attribute binding, and surprisingly similar for any combination of visual, auditory and tactile attributes (2–3 Hz). They are unlikely to be the limits for judging synchrony, since the temporal limit of a cross-attribute synchrony judgement was higher and varied with the modality combination (4–9 Hz). These findings suggest that cross-attribute temporal binding is mediated by a slow central process that combines separately processed ‘what’ and ‘when’ properties of a single event. While the synchrony performance reflects temporal bottlenecks existing in ‘when’ processing, the binding performance reflects the central temporal limit of integrating ‘when’ and ‘what’ properties. PMID:20335212
Mental Imagery Induces Cross-Modal Sensory Plasticity and Changes Future Auditory Perception.
Berger, Christopher C; Ehrsson, H Henrik
2018-04-01
Can what we imagine in our minds change how we perceive the world in the future? A continuous process of multisensory integration and recalibration is responsible for maintaining a correspondence between the senses (e.g., vision, touch, audition) and, ultimately, a stable and coherent perception of our environment. This process depends on the plasticity of our sensory systems. The so-called ventriloquism aftereffect, a shift in the perceived localization of sounds presented alone after repeated exposure to spatially mismatched auditory and visual stimuli, is a clear example of this type of plasticity in the audiovisual domain. In a series of six studies with 24 participants each, we investigated an imagery-induced ventriloquism aftereffect in which imagining a visual stimulus elicits the same frequency-specific auditory aftereffect as actually seeing one. These results demonstrate that mental imagery can recalibrate the senses and induce the same cross-modal sensory plasticity as real sensory stimuli.
Response format, magnitude of laterality effects, and sex differences in laterality.
Voyer, Daniel; Doyle, Randi A
2012-01-01
The present study examined the evidence for the claim that response format might affect the magnitude of laterality effects by means of a meta-analysis. The analysis included the 396 effect sizes drawn from 266 studies retrieved by Voyer (1996) and relevant to the main effect of laterality and sex differences in laterality for verbal and non-verbal tasks in the auditory, tactile, and visual sensory modalities. The response format used in specific studies was the only moderator variable of interest in the present analysis, resulting in four broad response categories (oral, written, computer, and pointing). A meta-analysis analogue to ANOVA showed no significant influence of response format on either the main effect of laterality or sex differences in laterality when all sensory modalities were combined. However, when modalities were considered separately, response format affected the main effect of laterality in the visual modality, with a clear advantage for written responses. Further pointed analyses revealed some specific differences among response formats. Results are discussed in terms of their implications for the measurement of laterality.
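The "meta-analysis analogue to ANOVA" mentioned above partitions heterogeneity in effect sizes across levels of a moderator such as response format. A minimal sketch of the between-groups Q statistic under a fixed-effect model follows; the effect sizes, variances, and group labels are invented for illustration, not taken from the study:

```python
import numpy as np

def q_between(effect_sizes, variances, groups):
    """Between-groups Q statistic (fixed-effect model).

    Large values suggest the moderator (here, response format)
    explains variation in the effect sizes.
    """
    es = np.asarray(effect_sizes, float)
    w = 1.0 / np.asarray(variances, float)    # inverse-variance weights
    grand_mean = np.sum(w * es) / np.sum(w)   # pooled estimate over all studies
    qb = 0.0
    for g in set(groups):
        m = np.array([x == g for x in groups])
        gmean = np.sum(w[m] * es[m]) / np.sum(w[m])
        qb += np.sum(w[m]) * (gmean - grand_mean) ** 2
    return qb

# Hypothetical laterality effect sizes (d) under two response formats
d = [0.45, 0.50, 0.40, 0.20, 0.15, 0.25]
v = [0.02, 0.03, 0.02, 0.02, 0.03, 0.02]
fmt = ["written"] * 3 + ["oral"] * 3
print(round(q_between(d, v, fmt), 2))  # → 3.76
```

Under the null hypothesis that response format is irrelevant, Q_between is approximately chi-squared with (number of groups - 1) degrees of freedom.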
Gender differences in emotion recognition: Impact of sensory modality and emotional category.
Lambrecht, Lena; Kreifelts, Benjamin; Wildgruber, Dirk
2014-04-01
Results from studies on gender differences in emotion recognition vary, depending on the types of emotion and the sensory modalities used for stimulus presentation. This makes comparability between different studies problematic. This study investigated emotion recognition of healthy participants (N = 84; 40 males; ages 20 to 70 years), using dynamic stimuli, displayed by two genders in three different sensory modalities (auditory, visual, audio-visual) and five emotional categories. The participants were asked to categorise the stimuli on the basis of their nonverbal emotional content (happy, alluring, neutral, angry, and disgusted). Hit rates and category selection biases were analysed. Women were found to be more accurate in recognition of emotional prosody. This effect was partially mediated by hearing loss for the frequency of 8,000 Hz. Moreover, there was a gender-specific selection bias for alluring stimuli: Men, as compared to women, chose "alluring" more often when a stimulus was presented by a woman as compared to a man.
Herrera, Esperanza; Sandoval, Maria Cristina; Camargo, Diana M; Salvini, Tania F
2011-01-01
Different cryotherapy modalities have distinct effects on sensory and motor nerve conduction parameters. However, it is unclear how these parameters change during the post-cooling period and how the exercise carried out in this period would influence the recovery of nerve conduction velocity (NCV). To compare the effects of three cryotherapy modalities on post-cooling NCV and to analyze the effect of walking on the recovery of sensory and motor NCV. Thirty-six healthy young subjects were randomly allocated into three groups: ice massage (n=12), ice pack (n=12) and cold water immersion (n=12). The modalities were applied to the right leg. The subjects of each modality group were again randomized to perform a post-cooling activity: a) 30 min rest, b) walking 15 min followed by 15 min rest. The NCV of sural (sensory) and posterior tibial (motor) nerves was evaluated. Initial (pre-cooling) and final (30 min post-cooling) NCV were compared using a paired t-test. The effects of the modalities and the post-cooling activities on NCV were evaluated by an analysis of covariance. The significance level was α=0.05. There was a significant difference between immersion and ice massage on final sensory NCV (p=0.009). Ice pack and ice massage showed similar effects (p>0.05). Walking accelerated the recovery of sensory and motor NCV, regardless of the modality previously applied (p<0.0001). Cold water immersion was the most effective modality for maintaining reduced sensory nerve conduction after cooling. Walking after cooling, with any of the three modalities, enhances the recovery of sensory and motor NCV.
Manassa, R P; McCormick, M I; Chivers, D P; Ferrari, M C O
2013-08-22
The ability of prey to observe and learn to recognize potential predators from the behaviour of nearby individuals can dramatically increase survival and, not surprisingly, is widespread across animal taxa. A range of sensory modalities are available for this learning, with visual and chemical cues being well-established modes of transmission in aquatic systems. The use of other sensory cues in mediating social learning in fishes, including mechano-sensory cues, remains unexplored. Here, we examine the role of different sensory cues in social learning of predator recognition, using juvenile damselfish (Amphiprion percula). Specifically, we show that a predator-naive observer can socially learn to recognize a novel predator when paired with a predator-experienced conspecific in total darkness. Furthermore, this study demonstrates that when threatened, individuals release chemical cues (known as disturbance cues) into the water. These cues induce an anti-predator response in nearby individuals; however, they do not facilitate learnt recognition of the predator. As such, another sensory modality, probably mechano-sensory in origin, is responsible for information transfer in the dark. This study highlights the diversity of sensory cues used by coral reef fishes in a social learning context.
Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.
Morrill, Ryan J; Hasenstaub, Andrea R
2018-03-14
The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently in areas with secondary-like auditory responses than in those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information are integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors.
Ezak, Meredith J.; Hong, Elizabeth; Chaparro-Garcia, Angela; Ferkey, Denise M.
2010-01-01
Olfaction and some forms of taste (including bitter) are mediated by G protein-coupled signal transduction pathways. Olfactory and gustatory ligands bind to chemosensory G protein-coupled receptors (GPCRs) in specialized sensory cells to activate intracellular signal transduction cascades. G protein-coupled receptor kinases (GRKs) are negative regulators of signaling that specifically phosphorylate activated GPCRs to terminate signaling. Although loss of GRK function usually results in enhanced cellular signaling, Caenorhabditis elegans lacking GRK-2 function are not hypersensitive to chemosensory stimuli. Instead, grk-2 mutant animals do not chemotax toward attractive olfactory stimuli or avoid aversive tastes and smells. We show here that loss-of-function mutations in the transient receptor potential vanilloid (TRPV) channels OSM-9 and OCR-2 selectively restore grk-2 behavioral avoidance of bitter tastants, revealing modality-specific mechanisms for TRPV channel function in the regulation of C. elegans chemosensation. Additionally, a single amino acid point mutation in OCR-2 that disrupts TRPV channel-mediated gene expression, but does not decrease channel function in chemosensory primary signal transduction, also restores grk-2 bitter taste avoidance. Thus, loss of GRK-2 function may lead to changes in gene expression, via OSM-9/OCR-2, to selectively alter the levels of signaling components that transduce or regulate bitter taste responses. Our results suggest a novel mechanism and multiple modality-specific pathways that sensory cells employ in response to aberrant signal transduction. PMID:20176974
Associative learning changes cross-modal representations in the gustatory cortex
Vincis, Roberto; Fontanini, Alfredo
2016-01-01
A growing body of literature has demonstrated that primary sensory cortices are not exclusively unimodal, but can respond to stimuli of different sensory modalities. However, several questions concerning the neural representation of cross-modal stimuli remain open. Indeed, it is poorly understood if cross-modal stimuli evoke unique or overlapping representations in a primary sensory cortex and whether learning can modulate these representations. Here we recorded single unit responses to auditory, visual, somatosensory, and olfactory stimuli in the gustatory cortex (GC) of alert rats before and after associative learning. We found that, in untrained rats, the majority of GC neurons were modulated by a single modality. Upon learning, both prevalence of cross-modal responsive neurons and their breadth of tuning increased, leading to a greater overlap of representations. Altogether, our results show that the gustatory cortex represents cross-modal stimuli according to their sensory identity, and that learning changes the overlap of cross-modal representations. DOI: http://dx.doi.org/10.7554/eLife.16420.001 PMID:27572258
Llorca, P M; Pereira, B; Jardri, R; Chereau-Boudet, I; Brousse, G; Misdrahi, D; Fénelon, G; Tronche, A-M; Schwan, R; Lançon, C; Marques, A; Ulla, M; Derost, P; Debilly, B; Durif, F; de Chazeron, I
2016-12-01
Hallucinations have been described in various clinical populations, but they are neither disorder nor disease specific. In schizophrenia patients, hallucinations are hallmark symptoms and auditory ones are described as the more frequent. In Parkinson's disease, the descriptions of hallucination modalities are sparse, but the hallucinations do tend to have less negative consequences. Our study aims to explore the phenomenology of hallucinations in both hallucinating schizophrenia patients and Parkinson's disease patients using the Psycho-Sensory hAllucinations Scale (PSAS). The main objective is to describe the phenomena of these clinical symptoms in those two specific populations. Each hallucinatory sensory modality significantly differed between Parkinson's disease and schizophrenia patients. Auditory, olfactory/gustatory and cœnesthetic hallucinations were more frequent in schizophrenia than visual hallucinations. The guardian angel item, usually not explored in schizophrenia, was described by 46% of these patients. The combination of auditory and visual hallucinations was the most frequent for both Parkinson's disease and schizophrenia. The repercussion index summing characteristics of each hallucination (frequency, duration, negative aspects, conviction, impact, control and sound intensity) was always higher for schizophrenia. A broader view including widespread characteristics and interdisciplinary works must be encouraged to better understand the complexity of the process involved in hallucinations.
Modality-independent coding of spatial layout in the human brain
Wolbers, Thomas; Klatzky, Roberta L.; Loomis, Jack M.; Wutte, Magdalena G.; Giudice, Nicholas A.
2011-01-01
Summary In many non-human species, neural computations of navigational information such as position and orientation are not tied to a specific sensory modality [1, 2]. Rather, spatial signals are integrated from multiple input sources, likely leading to abstract representations of space. In contrast, the potential for abstract spatial representations in humans is not known, as most neuroscientific experiments on human navigation have focused exclusively on visual cues. Here, we tested the modality independence hypothesis with two fMRI experiments that characterized computations in regions implicated in processing spatial layout [3]. According to the hypothesis, such regions should be recruited for spatial computation of 3-D geometric configuration, independent of a specific sensory modality. In support of this view, sighted participants showed strong activation of the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) for visual and haptic exploration of information-matched scenes but not objects. Functional connectivity analyses suggested that these effects were not related to visual recoding, which was further supported by a similar preference for haptic scenes found with blind participants. Taken together, these findings establish the PPA/RSC network as critical in modality-independent spatial computations and provide important evidence for a theory of high-level abstract spatial information processing in the human brain. PMID:21620708
Sensory over responsivity and obsessive compulsive symptoms: A cluster analysis.
Ben-Sasson, Ayelet; Podoly, Tamar Yonit
2017-02-01
Several studies have examined the sensory component in Obsessive Compulsive Disorder (OCD) and described an OCD subtype with a unique profile, in which Sensory Phenomena (SP) are a significant component. SP shares some commonalities with Sensory Over Responsivity (SOR) and might in part characterize this subtype. Although some studies have examined SOR and its relation to Obsessive Compulsive Symptoms (OCS), the literature lacks sufficient data on this interplay. Our first goal was to further examine the correlations between OCS and SOR, and to explore the correlations between SOR modalities (i.e. smell, touch, etc.) and OCS subscales (i.e. washing, ordering, etc.). Our second goal was to investigate the cluster analysis of SOR and OCS dimensions in adults, that is, to classify the sample using the sensory scores to determine whether a sensory OCD subtype can be specified. Our third goal was to explore the psychometric features of a new sensory questionnaire: the Sensory Perception Quotient (SPQ). A sample of non-clinical adults (n=350) was recruited via e-mail, social media and social networks. Participants completed questionnaires measuring SOR, OCS, and anxiety. SOR and OCI-F scores were moderately and significantly correlated (n=274); significant correlations were found between all SOR modalities and OCS subscales, with no specifically higher correlation between any one modality and any one OCS subscale. Cluster analysis revealed four distinct clusters: (1) no OC and SOR symptoms (NONE; n=100), (2) high OC and SOR symptoms (BOTH; n=28), (3) moderate OC symptoms (OCS; n=63), (4) moderate SOR symptoms (SOR; n=83). The BOTH cluster had significantly higher anxiety levels than the other clusters, and shared OC subscale scores with the OCS cluster. The BOTH cluster also reported higher SOR scores across the tactile, vision, taste and olfactory modalities. The SPQ was found reliable and suitable for detecting SOR; the sample's SPQ scores were normally distributed (n=350). SOR is a dimensional feature that can influence the severity of OCS and may characterize a unique sensory OCD subtype. Copyright © 2016 Elsevier Inc. All rights reserved.
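As an illustration of the kind of cluster analysis described above: the abstract does not state which algorithm or settings were used, so the k-means sketch below, and all data values in it, are assumptions. Synthetic two-dimensional [SOR, OCS] profiles are generated to mimic the four reported clusters:

```python
import numpy as np

def farthest_point_init(X, k):
    # deterministic seeding: start at the first point, then repeatedly
    # take the point farthest from all centers chosen so far
    centers = [X[0]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[dists.argmax()])
    return np.array(centers)

def kmeans(X, k, n_iter=50):
    """Plain k-means (illustrative only; not the study's exact method)."""
    centers = farthest_point_init(X, k)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                  # nearest-center assignment
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Invented z-scored [SOR, OCS] profiles mimicking the four reported
# clusters: none / both high / OCS only / SOR only
rng = np.random.default_rng(1)
means = [(-1, -1), (1.5, 1.5), (-1, 1.5), (1.5, -1)]
X = np.vstack([rng.normal(mu, 0.2, size=(40, 2)) for mu in means])
labels = kmeans(X, k=4)
print(sorted(np.bincount(labels).tolist()))  # four recovered clusters of 40
```

On such well-separated synthetic data the four groups are recovered exactly; with real questionnaire scores, cluster number and stability would need to be validated separately.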
Modality distribution of sensory neurons in the feline caudate nucleus and the substantia nigra.
Márkus, Zita; Eördegh, Gabriella; Paróczy, Zsuzsanna; Benedek, G; Nagy, A
2008-09-01
Despite extensive analysis of the motor functions of the basal ganglia and the fact that multisensory information processing appears critical for the execution of their behavioral action, little is known concerning the sensory functions of the caudate nucleus (CN) and the substantia nigra (SN). In the present study, we set out to describe the sensory modality distribution and to determine the proportions of multisensory units within the CN and the SN. The separate single sensory modality tests demonstrated that a majority of the neurons responded to only one modality, so that they seemed to be unimodal. In contrast with these findings, a large proportion of these neurons exhibited significant multisensory cross-modal interactions. Thus, these neurons should also be classified as multisensory. Our results suggest that a surprisingly high proportion of sensory neurons in the basal ganglia are multisensory, and demonstrate that an analysis without a consideration of multisensory cross-modal interactions may strongly underrepresent the number of multisensory units. We conclude that a majority of the sensory neurons in the CN and SN process multisensory information and only a minority of these units are clearly unimodal.
Jiang, Haiteng; van Gerven, Marcel A J; Jensen, Ole
2015-03-01
It has been proposed that long-term memory encoding is not only dependent on engaging task-relevant regions but also on disengaging task-irrelevant regions. In particular, oscillatory alpha activity has been shown to be involved in shaping the functional architecture of the working brain because it reflects the functional disengagement of specific regions in attention and memory tasks. We here ask if such allocation of resources by alpha oscillations generalizes to long-term memory encoding in a cross-modal setting in which we acquired the ongoing brain activity using magnetoencephalography. Participants were asked to encode pictures while ignoring simultaneously presented words and vice versa. We quantified the brain activity during rehearsal reflecting subsequent memory in the different attention conditions. The key finding was that successful long-term memory encoding is reflected by alpha power decreases in the sensory region of the to-be-attended modality and increases in the sensory region of the to-be-ignored modality to suppress distraction during rehearsal period. Our results corroborate related findings from attention studies by demonstrating that alpha activity is also important for the allocation of resources during long-term memory encoding in the presence of distracters.
Explicit processing demands reveal language modality-specific organization of working memory.
Rudner, Mary; Rönnberg, Jerker
2008-01-01
The working memory model for Ease of Language Understanding (ELU) predicts that processing differences between language modalities emerge when cognitive demands are explicit. This prediction was tested in three working memory experiments with participants who were Deaf Signers (DS), Hearing Signers (HS), or Hearing Nonsigners (HN). Easily nameable pictures were used as stimuli to avoid confounds relating to sensory modality. Performance was largely similar for DS, HS, and HN, suggesting that previously identified intermodal differences may be due to differences in retention of sensory information. When explicit processing demands were high, differences emerged between DS and HN, suggesting that although working memory storage in both groups is sensitive to temporal organization, retrieval is not sensitive to temporal organization in DS. A general effect of semantic similarity was also found. These findings are discussed in relation to the ELU model.
Harvey, Joshua Paul
2013-06-01
Synesthesia, the conscious, idiosyncratic, repeatable, and involuntary sensation of one sensory modality in response to another, is a condition that has puzzled both researchers and philosophers for centuries. Much time has been spent proving the condition's existence as well as investigating its etiology, but what can be learned from synesthesia remains a poorly discussed topic. Here, synesthesia is presented as a possible answer to, rather than another question about, the current gaps in our understanding of sensory perception. By first appreciating the similarities between normal sensory perception and synesthesia, one can use what is known about synesthesia, from behavioral and imaging studies, to inform our understanding of "normal" sensory perception. In particular, in considering synesthesia, one can better understand how and where the different sensory modalities interact in the brain, how different sensory modalities can interact without confusion - the binding problem - as well as how sensory perception develops.
Daee, Pedram; Mirian, Maryam S; Ahmadabadi, Majid Nili
2014-01-01
In a multisensory task, human adults integrate information from different sensory modalities--behaviorally in an optimal Bayesian fashion--while children mostly rely on a single sensory modality for decision making. The reason behind this change of behavior over age and the process behind learning the required statistics for optimal integration are still unclear and have not been justified by conventional Bayesian modeling. We propose an interactive multisensory learning framework without making any prior assumptions about the sensory models. In this framework, learning in every modality and in their joint space is done in parallel using a single-step reinforcement learning method. A simple statistical test on confidence intervals on the mean of reward distributions is used to select the most informative source of information among the individual modalities and the joint space. Analyses of the method and the simulation results on a multimodal localization task show that the learning system autonomously starts with sensory selection and gradually switches to sensory integration. This is because relying more on individual modalities--i.e. selection--at early learning steps (childhood) is more rewarding than favoring decisions learned in the joint space, since the smaller state-space of each modality results in faster learning. In contrast, after gaining sufficient experience (adulthood), the quality of learning in the joint space matures, while learning in the individual modalities suffers from insufficient accuracy due to perceptual aliasing. This results in a tighter confidence interval for the joint space and consequently causes a smooth shift from selection to integration. It suggests that sensory selection and integration are emergent behaviors and both are outputs of a single reward-maximization process; i.e., the transition is not a preprogrammed phenomenon.
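The selection rule can be sketched as a bandit-style comparison: each information source (individual modality or joint space) is scored by a lower confidence bound on its estimated mean reward, and the joint space's larger state space dilutes its per-state sample count early on. This is a hedged reading of the confidence-interval test described above; the state-space sizes, reward means, and the simplification that estimated means equal true means are all illustrative assumptions, not values from the paper:

```python
import math

def ci_halfwidth(n, sd=1.0, z=1.96):
    # half-width of a normal-approximation CI on a mean-reward estimate
    return z * sd / math.sqrt(n)

def preferred_source(experience):
    """Pick the source with the highest lower confidence bound on reward.

    `experience` maps source name -> (estimated_mean_reward, n_samples).
    """
    return max(experience,
               key=lambda s: experience[s][0] - ci_halfwidth(experience[s][1]))

STATES = {"vision": 10, "audition": 10, "joint": 100}   # joint space is larger
TRUE_MEAN = {"vision": 0.7, "audition": 0.65, "joint": 0.9}

# For simplicity, the estimated mean equals the true mean; samples per
# source scale inversely with its state-space size.
for trials in (50, 5000):   # "childhood" vs "adulthood"
    exp = {s: (TRUE_MEAN[s], max(1, trials // STATES[s])) for s in STATES}
    print(trials, "->", preferred_source(exp))  # 50 -> vision, 5000 -> joint
```

With few trials the joint space's confidence interval is too wide to trust (selection of the best single modality); with many trials its higher asymptotic reward dominates (integration), mirroring the developmental shift the paper reports.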
Neuropeptidergic Signaling Partitions Arousal Behaviors in Zebrafish
Schoppik, David; Shi, Veronica J.; Zimmerman, Steven; Coleman, Haley A.; Greenwood, Joel; Soucy, Edward R.
2014-01-01
Animals modulate their arousal state to ensure that their sensory responsiveness and locomotor activity match environmental demands. Neuropeptides can regulate arousal, but studies of their roles in vertebrates have been constrained by the vast array of neuropeptides and their pleiotropic effects. To overcome these limitations, we systematically dissected the neuropeptidergic modulation of arousal in larval zebrafish. We quantified spontaneous locomotor activity and responsiveness to sensory stimuli after genetically induced expression of seven evolutionarily conserved neuropeptides, including adenylate cyclase activating polypeptide 1b (adcyap1b), cocaine-related and amphetamine-related transcript (cart), cholecystokinin (cck), calcitonin gene-related peptide (cgrp), galanin, hypocretin, and nociceptin. Our study reveals that arousal behaviors are dissociable: neuropeptide expression uncoupled spontaneous activity from sensory responsiveness, and uncovered modality-specific effects upon sensory responsiveness. Principal components analysis and phenotypic clustering revealed both shared and divergent features of neuropeptidergic functions: hypocretin and cgrp stimulated spontaneous locomotor activity, whereas galanin and nociceptin attenuated these behaviors. In contrast, cart and adcyap1b enhanced sensory responsiveness yet had minimal impacts on spontaneous activity, and cck expression induced the opposite effects. Furthermore, hypocretin and nociceptin induced modality-specific differences in responsiveness to changes in illumination. Our study provides the first systematic and high-throughput analysis of neuropeptidergic modulation of arousal, demonstrates that arousal can be partitioned into independent behavioral components, and reveals novel and conserved functions of neuropeptides in regulating arousal. PMID:24573274
Cross-modal decoupling in temporal attention.
Mühlberg, Stefanie; Oriolo, Giovanni; Soto-Faraco, Salvador
2014-06-01
Prior studies have repeatedly reported behavioural benefits to events occurring at attended, compared to unattended, points in time. It has been suggested that, as for spatial orienting, temporal orienting of attention spreads across sensory modalities in a synergistic fashion. However, the consequences of cross-modal temporal orienting of attention remain poorly understood. One challenge is that the passage of time leads to an increase in event predictability throughout a trial, thus making it difficult to interpret possible effects (or lack thereof). Here we used a design that avoids complete temporal predictability to investigate whether attending to a sensory modality (vision or touch) at a point in time confers beneficial access to events in the other, non-attended, sensory modality (touch or vision, respectively). In contrast to previous studies and to what happens with spatial attention, we found that events in one (unattended) modality do not automatically benefit from happening at the time point when another modality is expected. Instead, it seems that attention can be deployed in time with relative independence for different sensory modalities. Based on these findings, we argue that temporal orienting of attention can be cross-modally decoupled in order to flexibly react according to the environmental demands, and that the efficiency of this selective decoupling unfolds in time. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Aging and the interaction of sensory cortical function and structure.
Peiffer, Ann M; Hugenschmidt, Christina E; Maldjian, Joseph A; Casanova, Ramon; Srikanth, Ryali; Hayasaka, Satoru; Burdette, Jonathan H; Kraft, Robert A; Laurienti, Paul J
2009-01-01
Even the healthiest older adults experience changes in cognitive and sensory function. Studies show that older adults have reduced neural responses to sensory information. However, it is well known that sensory systems do not act in isolation but function cooperatively to either enhance or suppress neural responses to individual environmental stimuli. Very little research has been dedicated to understanding how aging affects the interactions between sensory systems, especially cross-modal deactivations or the ability of one sensory system (e.g., audition) to suppress the neural responses in another sensory system cortex (e.g., vision). Such cross-modal interactions have been implicated in attentional shifts between sensory modalities and could account for increased distractibility in older adults. To assess age-related changes in cross-modal deactivations, functional MRI studies were performed in 61 adults between 18 and 80 years old during simple auditory and visual discrimination tasks. Results within visual cortex confirmed previous findings of decreased responses to visual stimuli for older adults. Age-related changes in the visual cortical response to auditory stimuli were, however, much more complex and suggested an alteration with age in the functional interactions between the senses. Ventral visual cortical regions exhibited cross-modal deactivations in younger but not older adults, whereas more dorsal aspects of visual cortex were suppressed in older but not younger adults. These differences in deactivation also remained after adjusting for age-related reductions in brain volume of sensory cortex. Thus, functional differences in cortical activity between older and younger adults cannot solely be accounted for by differences in gray matter volume. (c) 2007 Wiley-Liss, Inc.
Heath, Matthew; Gillen, Caitlin; Samani, Ashna
2016-03-01
Antisaccades are a nonstandard task requiring a response mirror-symmetrical to the location of a target. The completion of an antisaccade has been shown to delay the reaction time (RT) of a subsequent prosaccade, whereas the converse switch elicits a null RT cost (i.e., the unidirectional prosaccade switch-cost). The present study sought to determine whether the prosaccade switch-cost arises from low-level interference specific to the sensory features of a target (i.e., modality-dependent) or manifests via the high-level demands of dissociating the spatial relations between stimulus and response (i.e., modality-independent). Participants alternated between pro- and antisaccades wherein the target associated with the response alternated between visual and auditory modalities. Thus, the present design involved task-switch (i.e., switching from a pro- to antisaccade and vice versa) and modality-switch (i.e., switching from a visual to auditory target and vice versa) trials as well as their task- and modality-repetition counterparts. RTs were longer for modality-switch than modality-repetition trials. Notably, however, modality-switch trials did not nullify or lessen the unidirectional prosaccade switch-cost; that is, the magnitude of the RT cost for task-switch prosaccades was equivalent across modality-switch and modality-repetition trials. Thus, competitive interference within a sensory modality does not contribute to the unidirectional prosaccade switch-cost. Instead, the modality-independent findings evince that dissociating the spatial relations between stimulus and response instantiates a high-level and inertially persistent nonstandard task-set that impedes the planning of a subsequent prosaccade.
Cross-Modal Multivariate Pattern Analysis
Meyer, Kaspar; Kaplan, Jonas T.
2011-01-01
Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data [1-4]. Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices [5] or, analogously, the content of speech from activity in early auditory cortices [6]. Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are not predicted within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies [7,8], we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio [9,10], according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the re-construction of content-specific neural activity patterns in early sensory cortices. PMID:22105246
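The decoding logic behind this kind of MVPA can be illustrated with a toy simulation: a leave-one-run-out nearest-centroid classifier distinguishes two synthetic "auditory cortex" activity patterns, one per video category. All data below are simulated, and the run, voxel, and noise values are arbitrary assumptions for illustration, not the paradigm's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 20 runs, each containing one pattern per category
# (e.g., shattering-vase vs. howling-dog clip); 50 simulated voxels.
n_runs, n_vox = 20, 50
centroids = rng.normal(0, 1, (2, n_vox))           # category-specific templates
X = np.repeat(centroids[None], n_runs, axis=0)     # shape: (runs, 2, voxels)
X = X + rng.normal(0, 1.5, X.shape)                # add trial-by-trial noise

# Leave-one-run-out nearest-centroid classification
correct = 0
for test in range(n_runs):
    train = np.delete(X, test, axis=0)             # all runs except the test run
    proto = train.mean(axis=0)                     # class prototypes, (2, voxels)
    for cls in (0, 1):
        # distance of the held-out pattern to each class prototype
        d = np.linalg.norm(proto - X[test, cls], axis=1)
        correct += int(np.argmin(d) == cls)

accuracy = correct / (2 * n_runs)
print(round(accuracy, 2))
```

With well-separated templates, cross-validated accuracy is far above the 0.5 chance level, which is the kind of evidence used to claim content-specific patterns in a region.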
Dixon, Eric A.; Benham, Grant; Sturgeon, John A.; Mackey, Sean; Johnson, Kevin A.; Younger, Jarred
2016-01-01
Sensory hypersensitivity is one manifestation of the central sensitization that may underlie conditions such as fibromyalgia and chronic fatigue syndrome. We conducted five studies designed to develop and validate the Sensory Hypersensitive Scale (SHS); a 25-item self-report measure of sensory hypersensitivity. The SHS assesses both general sensitivity and modality-specific sensitivity (e.g. touch, taste, and hearing). 1202 participants (157 individuals with chronic pain) completed the SHS, which demonstrated an adequate overall internal reliability (Cronbach’s alpha) of 0.81, suggesting the tool can be used as a cross-modality assessment of sensitivity. SHS scores demonstrated only modest correlations (Pearson’s r) with depressive symptoms (0.19) and anxiety (0.28), suggesting a low level of overlap with psychiatric complaints. Overall SHS scores showed significant but relatively modest correlations (Pearson’s r) with three measures of sensory testing: cold pain tolerance (−0.34); heat pain tolerance (−0.285); heat pain threshold (−0.271). Women reported significantly higher scores on the SHS than did men, although gender-based differences were small. In a chronic pain sample, individuals with fibromyalgia syndrome demonstrated significantly higher SHS scores than did individuals with osteoarthritis or back pain. The SHS appears suitable as a screening measure for sensory hypersensitivity, though additional research is warranted to determine its suitability as a proxy for central sensitization. PMID:26873609
Domain general mechanisms of perceptual decision making in human cortex
Ho, Tiffany C.; Brown, Scott; Serences, John T.
2009-01-01
To successfully interact with objects in the environment, sensory evidence must be continuously acquired, interpreted, and used to guide appropriate motor responses. For example, when driving, a red light should motivate a motor command to depress the brake pedal. Single-unit recording studies have established that simple sensorimotor transformations are mediated by the same neurons that ultimately guide the behavioral response. However, it is also possible that these sensorimotor regions are the recipients of a modality independent decision signal that is computed elsewhere. Here, we used fMRI and human observers to show that the timecourse of activation in a subregion of the right insula is consistent with a role in accumulating sensory evidence independently from the required motor response modality (saccade vs. manual). Furthermore, a combination of computational modeling and simulations of the BOLD response suggests that this region is not simply recruited by general arousal or by the tonic maintenance of attention during the decision process. Our data thus raise the possibility that a modality-independent representation of sensory evidence may guide activity in effector-specific cortical areas prior to the initiation of a behavioral response. PMID:19587274
Hearing and music in unilateral spatial neglect neuro-rehabilitation.
Guilbert, Alma; Sylvain Clément; Moroni, Christine
2014-01-01
Unilateral spatial neglect (USN) is an attention deficit in the contralesional side of space which occurs after a cerebral stroke, mainly located in the right hemisphere. USN patients are disabled in all daily activities. USN is an important negative prognostic factor of functional recovery and of socio-professional reinsertion. Thus, patient rehabilitation is a major challenge. As this deficit has been described in many sensory modalities (including hearing), many sensory and poly-sensory rehabilitation methods have been proposed to USN patients. They are mainly based on the visual and tactile modalities and on motor abilities. However, these methods appear to be quite task-specific and difficult to transfer to functional activities. Very few studies have focused on the hearing modality, and even fewer have been conducted on music as a way of improving spatial attention. Therefore, more research on such retraining is needed in order to draw reliable conclusions on its efficiency in long-term rehabilitation. Nevertheless, some evidence suggests that music could be a promising tool to enhance spatial attention and to rehabilitate USN patients. In fact, music is a material closely linked to space, involving common anatomical and functional networks. The present paper aims firstly at briefly reviewing the different procedures of sensory retraining proposed in USN, including auditory retraining, and their limits. Secondly, it aims to present the recent scientific evidence that makes music a good candidate for USN patients' neuro-rehabilitation.
ERIC Educational Resources Information Center
Foss-Feig, Jennifer H.; Heacock, Jessica L.; Cascio, Carissa J.
2012-01-01
Autism spectrum disorders (ASD) are often associated with aberrant responses to sensory stimuli, which are thought to contribute to the social, communication, and repetitive behavior deficits that define ASD. However, there are few studies that separate aberrant sensory responses by individual sensory modality to assess modality-specific…
Thomas, Theresa Currier; Stockhausen, Ellen Magee; Law, L Matthew; Khodadad, Aida; Lifshitz, Jonathan
2017-01-01
As rehabilitation strategies advance as therapeutic interventions, the modality and onset of rehabilitation after traumatic brain injury (TBI) are critical to optimize treatment. Our laboratory has detected and characterized a late-onset, long-lasting sensory hypersensitivity to whisker stimulation in diffuse brain-injured rats; a deficit that is comparable to visual or auditory sensory hypersensitivity in humans with an acquired brain injury. We hypothesize that the modality and onset of rehabilitation therapies will differentially influence sensory hypersensitivity in response to the Whisker Nuisance Task (WNT) as well as WNT-induced corticosterone (CORT) stress response in diffuse brain-injured rats and shams. After midline fluid percussion brain injury (FPI) or sham surgery, rats were assigned to one of four rehabilitative interventions: (1) whisker sensory deprivation during week one or (2) week two or (3) whisker stimulation during week one or (4) week two. At 28 days following FPI and sham procedures, sensory hypersensitivity was assessed using the WNT. Plasma CORT was evaluated immediately following the WNT (aggravated levels) and prior to the pre-determined endpoint 24 hours later (non-aggravated levels). Deprivation therapy during week two elicited significantly greater sensory hypersensitivity to the WNT compared to week one (p < 0.05), and aggravated CORT levels in FPI rats were significantly lower than sham levels. Stimulation therapy during week one resulted in low levels of sensory hypersensitivity to the WNT, similar to deprivation therapy and naïve controls; however, non-aggravated CORT levels in FPI rats were significantly higher than sham. These data indicate that modality and onset of sensory rehabilitation can differentially influence FPI and sham rats, having a lasting impact on behavioral and stress responses to the WNT, emphasizing the necessity for continued evaluation of modality and onset of rehabilitation after TBI.
Development of sensory systems in zebrafish (Danio rerio)
NASA Technical Reports Server (NTRS)
Moorman, S. J.
2001-01-01
Zebrafish possess all of the classic sensory modalities: taste, tactile, smell, balance, vision, and hearing. For each sensory system, this article provides a brief overview of the system in the adult zebrafish followed by a more detailed overview of the development of the system. By far the majority of studies performed in each of the sensory systems of the zebrafish have involved some aspect of molecular biology or genetics. Although molecular biology and genetics are not major foci of the paper, brief discussions of some of the mutant strains of zebrafish that have developmental defects in each specific sensory system are included. The development of the sensory systems is only a small sampling of the work being done using zebrafish and provides a mere glimpse of the potential of this model for the study of vertebrate development, physiology, and human disease.
Auditory and visual connectivity gradients in frontoparietal cortex
Hellyer, Peter J.; Wise, Richard J. S.; Leech, Robert
2016-01-01
Abstract A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence that there is a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal–ventral gradient of function was observed, where connectivity with visual cortex dominates dorsal frontal and parietal connections, while connectivity with auditory cortex dominates ventral frontal and parietal regions. A gradient was also observed along the posterior–anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as the cognitive demands of a task. It was concluded that stimulus modality was spatially encoded throughout frontal and parietal cortices, and was speculated that such an arrangement allows for top–down modulation of modality‐specific information to occur within higher‐order cortex. This could provide a potentially faster and more efficient pathway by which top–down selection between sensory modalities could occur, by constraining modulations to within frontal and parietal regions, rather than long‐range connections to sensory cortices. Hum Brain Mapp 38:255–270, 2017. © 2016 Wiley Periodicals, Inc. PMID:27571304
Lebib, Riadh; Papo, David; de Bode, Stella; Baudonnière, Pierre Marie
2003-05-08
We investigated the existence of a cross-modal sensory gating reflected by the modulation of an early electrophysiological index, the P50 component. We analyzed event-related brain potentials elicited by audiovisual speech stimuli manipulated along two dimensions: congruency and discriminability. The results showed that the P50 was attenuated when visual and auditory speech information were redundant (i.e. congruent), in comparison with this same event-related potential component elicited with discrepant audiovisual dubbing. When hard to discriminate, however, bimodal incongruent speech stimuli elicited a similar pattern of P50 attenuation. We concluded that a visual-to-auditory cross-modal sensory gating phenomenon exists. These results corroborate previous findings revealing a very early audiovisual interaction during speech perception. Finally, we postulated that the sensory gating system includes a cross-modal dimension.
Sensory impairments of the lower limb after stroke: a pooled analysis of individual patient data.
Tyson, Sarah F; Crow, J Lesley; Connell, Louise; Winward, Charlotte; Hillier, Susan
2013-01-01
To obtain more generalizable information on the frequency and factors influencing sensory impairment after stroke and their relationship to mobility and function. A pooled analysis of individual data of stroke survivors (N = 459); mean (SD) age = 67.2 (14.8) years, 54% male, mean (SD) time since stroke = 22.33 (63.1) days, 50% left-sided weakness. Where different measurement tools were used, data were recoded. Descriptive statistics described frequency of sensory impairments, kappa coefficients investigated relationships between sensory modalities, binary logistic regression explored the factors influencing sensory impairments, and linear regression assessed the impact of sensory impairments on activity limitations. Most patients' sensation was intact (55%), and individual sensory modalities were highly associated (κ = 0.60, P < .001). Weakness and neglect influenced sensory impairment (P < .001), but demographics, stroke pathology, and spasticity did not. Sensation influenced independence in activities of daily living, mobility, and balance but less strongly than weakness. Pooled individual data analysis showed sensation of the lower limb is grossly preserved in most stroke survivors but, when present, it affects function. Sensory modalities are highly interrelated; interventions that treat the motor system during functional tasks may be as effective at treating the sensory system as sensory retraining alone.
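The kappa statistic used above to relate sensory modalities has a simple closed form for two binary ratings; a minimal sketch in plain Python (the ratings below are made-up illustrations, not the study's data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length 0/1 rating lists
    (e.g. impaired vs. intact on two sensory modalities)."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    p_a1 = sum(a) / n                                # rate of 1s in rating a
    p_b1 = sum(b) / n                                # rate of 1s in rating b
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)      # chance agreement
    return (p_o - p_e) / (1 - p_e)                   # chance-corrected agreement

# Hypothetical impairment ratings for eight patients on two modalities
touch   = [1, 1, 0, 0, 1, 0, 1, 1]
proprio = [1, 1, 0, 1, 1, 0, 1, 0]
print(round(cohens_kappa(touch, proprio), 2))  # → 0.47
```

Kappa corrects raw agreement for the agreement expected by chance, which is why it is preferred over simple percent agreement when impairment rates are unbalanced.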
Optimality in mono- and multisensory map formation.
Bürck, Moritz; Friedel, Paul; Sichert, Andreas B; Vossen, Christine; van Hemmen, J Leo
2010-07-01
In the struggle for survival in a complex and dynamic environment, nature has developed a multitude of sophisticated sensory systems. In order to exploit the information provided by these sensory systems, higher vertebrates reconstruct the spatio-temporal environment from each of the sensory systems they have at their disposal. That is, for each modality the animal computes a neuronal representation of the outside world, a monosensory neuronal map. Here we present a universal framework that allows to calculate the specific layout of the involved neuronal network by means of a general mathematical principle, viz., stochastic optimality. In order to illustrate the use of this theoretical framework, we provide a step-by-step tutorial of how to apply our model. In so doing, we present a spatial and a temporal example of optimal stimulus reconstruction which underline the advantages of our approach. That is, given a known physical signal transmission and rudimental knowledge of the detection process, our approach allows to estimate the possible performance and to predict neuronal properties of biological sensory systems. Finally, information from different sensory modalities has to be integrated so as to gain a unified perception of reality for further processing, e.g., for distinct motor commands. We briefly discuss concepts of multimodal interaction and how a multimodal space can evolve by alignment of monosensory maps.
Ehrenfeld, Stephan; Butz, Martin V
2013-02-01
Humans show admirable capabilities in movement planning and execution. They can perform complex tasks in various contexts, using the available sensory information very effectively. Body models and continuous body state estimations appear necessary to realize such capabilities. We introduce the Modular Modality Frame (MMF) model, which maintains a highly distributed, modularized body model, continuously updating modularized probabilistic body state estimations over time. Modularization is realized with respect to modality frames, that is, sensory modalities in particular frames of reference and with respect to particular body parts. We evaluate MMF performance on a simulated, nine degree of freedom arm in 3D space. The results show that MMF is able to maintain accurate body state estimations despite high sensor and motor noise. Moreover, by comparing the sensory information available in different modality frames, MMF can identify faulty sensory measurements on the fly. In the near future, applications to lightweight robot control should be pursued. Moreover, MMF may be enhanced with neural encodings by introducing neural population codes and learning techniques. Finally, more dexterous goal-directed behavior should be realized by exploiting the available redundant state representations.
The effects of selective and divided attention on sensory precision and integration.
Odegaard, Brian; Wozny, David R; Shams, Ladan
2016-02-12
In our daily lives, our capacity to selectively attend to stimuli within or across sensory modalities enables enhanced perception of the surrounding world. While previous research on selective attention has studied this phenomenon extensively, two important questions still remain unanswered: (1) how selective attention to a single modality impacts sensory integration processes, and (2) by what mechanism selective attention improves perception. We explored how selective attention impacts performance in both a spatial task and a temporal numerosity judgment task, and employed a Bayesian Causal Inference model to investigate the computational mechanism(s) impacted by selective attention. We report three findings: (1) in the spatial domain, selective attention improves precision of the visual sensory representations (which were relatively precise), but not the auditory sensory representations (which were fairly noisy); (2) in the temporal domain, selective attention improves the sensory precision in both modalities (both of which were fairly reliable to begin with); (3) in both tasks, selective attention did not exert a significant influence over the tendency to integrate sensory stimuli. Therefore, it may be postulated that a sensory modality must possess a certain inherent degree of encoding precision in order to benefit from selective attention. It also appears that in certain basic perceptual tasks, the tendency to integrate crossmodal signals does not depend significantly on selective attention. We conclude with a discussion of how these results relate to recent theoretical considerations of selective attention. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
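The Bayesian Causal Inference framework referenced above decides, from the discrepancy between two cues, how likely they are to share a single cause. A minimal sketch with Gaussian likelihoods follows; all parameter values (sensory noise, spatial prior, common-cause prior) are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def p_common(x_v, x_a, sig_v=1.0, sig_a=2.0, sig_p=10.0, mu_p=0.0, prior=0.5):
    """Posterior probability that a visual and an auditory measurement
    arose from one common cause (toy Bayesian causal inference)."""
    # C = 1: one source s ~ N(mu_p, sig_p^2) measured twice, so the two
    # measurements are jointly Gaussian with covariance sig_p^2.
    cov1 = np.array([[sig_v**2 + sig_p**2, sig_p**2],
                     [sig_p**2, sig_a**2 + sig_p**2]])
    # C = 2: two independent sources, hence zero covariance.
    cov2 = np.array([[sig_v**2 + sig_p**2, 0.0],
                     [0.0, sig_a**2 + sig_p**2]])
    d = np.array([x_v - mu_p, x_a - mu_p])

    def gauss2(dev, cov):
        # bivariate Gaussian density at deviation `dev`
        det = np.linalg.det(cov)
        return np.exp(-0.5 * dev @ np.linalg.solve(cov, dev)) / (2 * np.pi * np.sqrt(det))

    l1, l2 = gauss2(d, cov1), gauss2(d, cov2)
    return prior * l1 / (prior * l1 + (1 - prior) * l2)

# Nearby cues favour a common cause; distant cues favour separate causes.
print(p_common(0.0, 1.0) > 0.5, p_common(0.0, 15.0) < 0.5)  # → True True
```

The model thus integrates cues only when a common cause is probable, which is one way the "tendency to integrate" studied above can be quantified.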
The Functional Role of Neural Oscillations in Non-Verbal Emotional Communication
Symons, Ashley E.; El-Deredy, Wael; Schwartze, Michael; Kotz, Sonja A.
2016-01-01
Effective interpersonal communication depends on the ability to perceive and interpret nonverbal emotional expressions from multiple sensory modalities. Current theoretical models propose that visual and auditory emotion perception involves a network of brain regions including the primary sensory cortices, the superior temporal sulcus (STS), and orbitofrontal cortex (OFC). However, relatively little is known about how the dynamic interplay between these regions gives rise to the perception of emotions. In recent years, there has been increasing recognition of the importance of neural oscillations in mediating neural communication within and between functional neural networks. Here we review studies investigating changes in oscillatory activity during the perception of visual, auditory, and audiovisual emotional expressions, and aim to characterize the functional role of neural oscillations in nonverbal emotion perception. Findings from the reviewed literature suggest that theta band oscillations most consistently differentiate between emotional and neutral expressions. While early theta synchronization appears to reflect the initial encoding of emotionally salient sensory information, later fronto-central theta synchronization may reflect the further integration of sensory information with internal representations. Additionally, gamma synchronization reflects facilitated sensory binding of emotional expressions within regions such as the OFC, STS, and, potentially, the amygdala. However, the evidence is more ambiguous when it comes to the role of oscillations within the alpha and beta frequencies, which vary as a function of modality (or modalities), presence or absence of predictive information, and attentional or task demands. Thus, the synchronization of neural oscillations within specific frequency bands mediates the rapid detection, integration, and evaluation of emotional expressions. 
Moreover, the functional coupling of oscillatory activity across multiples frequency bands supports a predictive coding model of multisensory emotion perception in which emotional facial and body expressions facilitate the processing of emotional vocalizations. PMID:27252638
Li, P; Chai, G H; Zhu, K H; Lan, N; Sui, X H
2015-01-01
Tactile sensory feedback plays a key role in accomplishing the dexterous manipulation of prosthetic hands for amputees, and non-invasive transcutaneous electrical nerve stimulation (TENS) of the phantom finger perception (PFP) area would be an effective way to realize sensory feedback clinically. In order to realize high-spatial-resolution tactile sensory feedback in the PFP region, we investigated the effects of electrode size and spacing on the tactile sensations, with the aim of optimizing the surface electrode array configuration. Six forearm-amputated subjects were recruited in the psychophysical studies. As the diameter of the circular electrode increased from 3 mm to 12 mm, the threshold current intensity increased correspondingly under different sensory modalities. The smaller electrode could potentially lead to higher spatial resolution of sensation. However, the smaller the electrode, the fewer sensory modalities it could evoke; with a Φ-3 mm electrode, subjects could hardly perceive any sensation under normal stimulating currents. In addition, the two-electrode discrimination distance (TEDD) in the phantom thumb perception area decreased with decreasing electrode size in the two directions parallel and perpendicular to the forearm, with no significant difference in TEDD between the two directions. These findings can guide the configuration optimization of the TENS electrode array for potential high-spatial-resolution sensory feedback.
Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention
Noppeney, Uta
2018-01-01
Abstract Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
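The maximum-likelihood weighting scheme described above has a simple closed form: each cue is weighted in proportion to its reliability (inverse variance), and the fused estimate is never less precise than either cue alone. A minimal numeric sketch (the variances are arbitrary example values):

```python
def mle_fuse(x_v, var_v, x_a, var_a):
    """Reliability-weighted (maximum-likelihood) fusion of a visual and
    an auditory location estimate. Reliability = 1 / variance."""
    r_v, r_a = 1.0 / var_v, 1.0 / var_a
    w_v = r_v / (r_v + r_a)              # visual weight
    x_hat = w_v * x_v + (1 - w_v) * x_a  # fused location estimate
    var_hat = 1.0 / (r_v + r_a)          # fused variance: smaller than
    return x_hat, var_hat                # either unisensory variance

# Example: precise visual cue at 0 deg, noisier auditory cue at 10 deg.
# The fused estimate is pulled toward the more reliable (visual) cue.
x_hat, var_hat = mle_fuse(0.0, 1.0, 10.0, 4.0)
print(round(x_hat, 2), round(var_hat, 2))  # → 2.0 0.8
```

The "neural weights" estimated in the study correspond to w_v above; the paper's point is that attention and report can shift these weights away from the purely reliability-determined values.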
Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration.
Ikumi, Nara; Soto-Faraco, Salvador
2016-01-01
Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping than segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands.
Lakie, Martin; Loram, Ian D
2006-01-01
Ten subjects balanced their own body or a mechanically equivalent unstable inverted pendulum by hand, through a compliant spring linkage. Their balancing process was always characterized by repeated small reciprocating hand movements. These bias adjustments were an observable sign of intermittent alterations in neural output. On average, the adjustments occurred at intervals of ∼400 ms. To generate appropriate stabilizing bias adjustments, sensory information about body or load movement is needed. Subjects used visual, vestibular or proprioceptive sensation alone and in combination to perform the tasks. We first ask: is the time between adjustments (bias duration) sensory specific? Vision is associated with slow responses. Other senses involved with balance are known to be faster. Our second question is: does bias duration depend on sensory abundance? An appropriate bias adjustment cannot occur until unplanned motion is unambiguously perceived (a sensory threshold). The addition of more sensory data should therefore expedite action, decreasing the mean bias adjustment duration. Statistical analysis showed that (1) the mean bias adjustment duration was remarkably independent of the sensory modality and (2) the addition of one or two sensory modalities made a small, but significant, decrease in the mean bias adjustment duration. Thus, a threshold effect can alter only a very minor part of the bias duration. The bias adjustment duration in manual balancing must reflect something more than visual sensation and perceptual thresholds; our suggestion is that it is a common central motor planning process. We predict that similar processes may be identified in the control of standing. PMID:16959857
Multistability in perception: binding sensory modalities, an overview.
Schwartz, Jean-Luc; Grimault, Nicolas; Hupé, Jean-Michel; Moore, Brian C J; Pressnitzer, Daniel
2012-04-05
This special issue presents research concerning multistable perception in different sensory modalities. Multistability occurs when a single physical stimulus produces alternations between different subjective percepts. Multistability was first described for vision, where it occurs, for example, when different stimuli are presented to the two eyes or for certain ambiguous figures. It has since been described for other sensory modalities, including audition, touch and olfaction. The key features of multistability are: (i) stimuli have more than one plausible perceptual organization; (ii) these organizations are not compatible with each other. We argue here that most if not all cases of multistability are based on competition in selecting and binding stimulus information. Binding refers to the process whereby the different attributes of objects in the environment, as represented in the sensory array, are bound together within our perceptual systems, to provide a coherent interpretation of the world around us. We argue that multistability can be used as a method for studying binding processes within and across sensory modalities. We emphasize this theme while presenting an outline of the papers in this issue. We end with some thoughts about open directions and avenues for further research.
Neural mechanisms of selective attention in the somatosensory system.
Gomez-Ramirez, Manuel; Hysaj, Kristjana; Niebur, Ernst
2016-09-01
Selective attention allows organisms to extract behaviorally relevant information while ignoring distracting stimuli that compete for the limited resources of their central nervous systems. Attention is highly flexible, and it can be harnessed to select information based on sensory modality, within-modality feature(s), spatial location, object identity, and/or temporal properties. In this review, we discuss the body of work devoted to understanding mechanisms of selective attention in the somatosensory system. In particular, we describe the effects of attention on tactile behavior and corresponding neural activity in somatosensory cortex. Our focus is on neural mechanisms that select tactile stimuli based on their location on the body (somatotopic-based attention) or their sensory feature (feature-based attention). We highlight parallels between selection mechanisms in touch and other sensory systems and discuss several putative neural coding schemes employed by cortical populations to signal the behavioral relevance of sensory inputs. Specifically, we contrast the advantages and disadvantages of using a gain vs. spike-spike correlation code for representing attended sensory stimuli. We favor a neural network model of tactile attention composed of frontal, parietal, and subcortical areas, which controls somatosensory cells encoding the relevant stimulus features to enable preferential processing throughout the somatosensory hierarchy. Our review is based on noninvasive electrophysiological and imaging data in humans as well as single-unit recordings in nonhuman primates. Copyright © 2016 the American Physiological Society.
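The gain vs. spike-spike correlation distinction discussed in this review can be illustrated with a toy simulation. All numbers and the two-neuron setup below are illustrative assumptions, not data or models from the paper: a gain code multiplies firing rates while leaving pairwise correlation untouched, whereas a correlation code injects a shared fluctuation that raises pairwise correlation with little change in mean rate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two neurons' spike counts over 1000 time bins (toy independent Poisson firing).
base = rng.poisson(5.0, size=(2, 1000)).astype(float)

def pairwise_corr(x):
    """Pearson correlation between the two neurons' activity."""
    return np.corrcoef(x[0], x[1])[0, 1]

# Gain code: attention multiplies firing rates; pairwise correlation is unchanged,
# because correlation is invariant to scaling both signals by the same factor.
gain_coded = 1.5 * base

# Correlation code: attention injects a common fluctuation into both neurons;
# the mean rate barely changes, but pairwise correlation rises.
shared = rng.normal(0.0, 2.0, 1000)
corr_coded = base + shared
```

Here `gain_coded` signals relevance through elevated mean rate, while `corr_coded` signals it through elevated noise correlations at a near-constant rate, mirroring the two coding schemes the review contrasts.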
Karim, M Rezaul; Moore, Adrian W
2011-11-07
Nervous system development requires the correct specification of neuron position and identity, followed by accurate neuron class-specific dendritic development and axonal wiring. Recently, the dendritic arborization (DA) sensory neurons of the Drosophila larval peripheral nervous system (PNS) have become powerful genetic models in which to elucidate both general and class-specific mechanisms of neuron differentiation. There are four main DA neuron classes (I-IV)(1). They are named in order of increasing dendrite arbor complexity, and have class-specific differences in the genetic control of their differentiation(2-10). The DA sensory system is a practical model to investigate the molecular mechanisms behind the control of dendritic morphology(11-13) because: 1) it can take advantage of the powerful genetic tools available in the fruit fly, 2) the DA neuron dendrite arbor spreads out in only two dimensions beneath an optically clear larval cuticle, making it easy to visualize with high resolution in vivo, 3) the class-specific diversity in dendritic morphology facilitates a comparative analysis to find key elements controlling the formation of simple vs. highly branched dendritic trees, and 4) the stereotypical arbor shapes of different DA neurons facilitate morphometric statistical analyses. DA neuron activity modifies the output of a larval locomotion central pattern generator(14-16). The different DA neuron classes have distinct sensory modalities, and their activation elicits different behavioral responses(14,16-20). Furthermore, different classes send axonal projections stereotypically into the Drosophila larval central nervous system in the ventral nerve cord (VNC)(21). These projections terminate with topographic representations of both DA neuron sensory modality and the position in the body wall of the dendritic field(7,22,23).
Hence examination of DA axonal projections can be used to elucidate mechanisms underlying topographic mapping(7,22,23), as well as the wiring of a simple circuit modulating larval locomotion(14-17). We present here a practical guide to generate and analyze genetic mosaics(24) marking DA neurons via MARCM (Mosaic Analysis with a Repressible Cell Marker)(1,10,25) and Flp-out(22,26,27) techniques (summarized in Fig. 1).
Disentangling Linguistic Modality Effects in Semantic Processing
ERIC Educational Resources Information Center
Moita, Mara; Nunes, Maria Vânia
2017-01-01
Sensory systems are essential for perceiving and conceptualizing our semantic knowledge about the world and the way we interact with it. Despite studies reporting neural changes to compensate for the absence of a given sensory modality, studies focusing on the assessment of semantic processing reveal poor performances by deaf individuals when…
Accommodating Students' Sensory Learning Modalities in Online Formats
ERIC Educational Resources Information Center
Allison, Barbara N.; Rehm, Marsha L.
2016-01-01
Online classes have become a popular and viable method of educating students in both K-12 settings and higher education, including in family and consumer sciences (FCS) programs. Online learning dramatically affects the way students learn. This article addresses how online learning can accommodate the sensory learning modalities (sight, hearing,…
Automatic selective attention as a function of sensory modality in aging.
Guerreiro, Maria J S; Adam, Jos J; Van Gerven, Pascal W M
2012-03-01
It was recently hypothesized that age-related differences in selective attention depend on sensory modality (Guerreiro, M. J. S., Murphy, D. R., & Van Gerven, P. W. M. (2010). The role of sensory modality in age-related distraction: A critical review and a renewed view. Psychological Bulletin, 136, 975-1022. doi:10.1037/a0020731). So far, this hypothesis has not been tested in automatic selective attention. The current study addressed this issue by investigating age-related differences in automatic spatial cueing effects (i.e., facilitation and inhibition of return [IOR]) across sensory modalities. Thirty younger (mean age = 22.4 years) and 25 older adults (mean age = 68.8 years) performed 4 left-right target localization tasks, involving all combinations of visual and auditory cues and targets. We used stimulus onset asynchronies (SOAs) of 100, 500, 1,000, and 1,500 ms between cue and target. The results showed facilitation (shorter reaction times with valid relative to invalid cues at shorter SOAs) in the unimodal auditory and in both cross-modal tasks but not in the unimodal visual task. In contrast, there was IOR (longer reaction times with valid relative to invalid cues at longer SOAs) in both unimodal tasks but not in either of the cross-modal tasks. Most important, these spatial cueing effects were independent of age. The results suggest that the modality hypothesis of age-related differences in selective attention does not extend into the realm of automatic selective attention.
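The facilitation and inhibition-of-return (IOR) pattern described in this abstract follows a simple sign convention that can be sketched in a few lines. The reaction times below are invented placeholders, not the study's data: the cueing effect is invalid-minus-valid RT, so positive values at short SOAs indicate facilitation and negative values at long SOAs indicate IOR.

```python
# Hypothetical mean reaction times (ms) by SOA and cue validity.
# All values are illustrative; see the paper for the actual results.
rts = {
    100:  {"valid": 310, "invalid": 335},   # short SOA: valid cue speeds responses
    500:  {"valid": 320, "invalid": 322},
    1000: {"valid": 340, "invalid": 325},   # long SOA: valid cue slows responses
    1500: {"valid": 345, "invalid": 328},
}

def cueing_effect(soa):
    """Invalid minus valid RT: positive = facilitation, negative = IOR."""
    return rts[soa]["invalid"] - rts[soa]["valid"]

effects = {soa: cueing_effect(soa) for soa in rts}
```

Computing the effect per SOA and per cue-target modality pairing is how studies like this one separate unimodal from cross-modal cueing patterns.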
How well do you see what you hear? The acuity of visual-to-auditory sensory substitution
Haigh, Alastair; Brown, David J.; Meijer, Peter; Proulx, Michael J.
2013-01-01
Sensory substitution devices (SSDs) aim to compensate for the loss of a sensory modality, typically vision, by converting information from the lost modality into stimuli in a remaining modality. “The vOICe” is a visual-to-auditory SSD which encodes images taken by a camera worn by the user into “soundscapes” such that experienced users can extract information about their surroundings. Here we investigated how much detail was resolvable during the early induction stages by testing the acuity of blindfolded sighted, naïve vOICe users. Initial performance was well above chance. Participants who took the test twice as a form of minimal training showed a marked improvement on the second test. Acuity was slightly but not significantly impaired when participants wore a camera and judged letter orientations “live”. A positive correlation was found between participants' musical training and their acuity. The relationship between auditory expertise via musical training and the lack of a relationship with visual imagery, suggests that early use of a SSD draws primarily on the mechanisms of the sensory modality being used rather than the one being substituted. If vision is lost, audition represents the sensory channel of highest bandwidth of those remaining. The level of acuity found here, and the fact it was achieved with very little experience in sensory substitution by naïve users is promising. PMID:23785345
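Visual-to-auditory SSDs of the kind described here scan an image left to right, mapping vertical position to pitch and pixel brightness to loudness. The function below is a simplified sketch of that general scheme, not The vOICe's actual implementation; the frequency range and scan time are arbitrary assumptions.

```python
import numpy as np

def image_to_soundscape(img, f_lo=500.0, f_hi=5000.0, scan_time=1.0):
    """Sketch of a vOICe-style encoding: columns are scanned left to right
    over `scan_time` seconds; row index sets pitch (top row = highest), and
    pixel brightness sets amplitude. Returns one (time, freq, amp) event
    per pixel."""
    n_rows, n_cols = img.shape
    times = np.linspace(0.0, scan_time, n_cols, endpoint=False)
    freqs = np.logspace(np.log10(f_hi), np.log10(f_lo), n_rows)  # log-spaced pitches
    events = [(times[c], freqs[r], float(img[r, c]))
              for c in range(n_cols) for r in range(n_rows)]
    return events
```

A synthesizer summing sinusoids at these (time, frequency, amplitude) events would produce the "soundscape"; the acuity task in the study amounts to asking how fine a spatial pattern remains recoverable from such a stream.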
2016-01-01
The mammalian neocortex contains many distinct inhibitory neuronal populations to balance excitatory neurotransmission. A correct excitation/inhibition equilibrium is crucial for normal brain development, functioning, and controlling lifelong cortical plasticity. Knowledge about how the inhibitory network contributes to brain plasticity however remains incomplete. Somatostatin- (SST-) interneurons constitute a large neocortical subpopulation of interneurons, next to parvalbumin- (PV-) and vasoactive intestinal peptide- (VIP-) interneurons. Unlike the extensively studied PV-interneurons, acknowledged as key components in guiding ocular dominance plasticity, the contribution of SST-interneurons is less understood. Nevertheless, SST-interneurons are ideally situated within cortical networks to integrate unimodal or cross-modal sensory information processing and therefore likely to be important mediators of experience-dependent plasticity. The lack of knowledge on SST-interneurons partially relates to the wide variety of distinct subpopulations present in the sensory neocortex. This review informs on those SST-subpopulations hitherto described based on anatomical, molecular, or electrophysiological characteristics and whose functional roles can be attributed based on specific cortical wiring patterns. A possible role for these subpopulations in experience-dependent plasticity will be discussed, emphasizing on learning-induced plasticity and on unimodal and cross-modal plasticity upon sensory loss. This knowledge will ultimately contribute to guide brain plasticity into well-defined directions to restore sensory function and promote lifelong learning. PMID:27403348
Cross-modal and modality-specific expectancy effects between pain and disgust
Sharvit, Gil; Vuilleumier, Patrik; Delplanque, Sylvain; Corradi-Dell’Acqua, Corrado
2015-01-01
Pain sensitivity increases when a noxious stimulus is preceded by cues predicting higher intensity. However, it is unclear whether the modulation of nociception by expectancy is sensory-specific (“modality based”) or reflects the aversive-affective consequence of the upcoming event (“unpleasantness”), potentially common with other negative events. Here we compared expectancy effects for pain and disgust by using different, but equally unpleasant, nociceptive (thermal) and olfactory stimulations. Indeed both pain and disgust are aversive, associated with threat to the organism, and processed in partly overlapping brain networks. Participants saw cues predicting the unpleasantness (high/low) and the modality (pain/disgust) of upcoming thermal or olfactory stimulations, and rated the associated unpleasantness after stimulus delivery. Results showed that identical thermal stimuli were perceived as more unpleasant when preceded by cues signaling high (as opposed to low) pain. A similar expectancy effect was found for olfactory disgust. Critically, cross-modal expectancy effects were observed on inconsistent trials when thermal stimuli were preceded by high-disgust cues or olfactory stimuli preceded by high-pain cues. However, these effects were stronger in consistent than inconsistent conditions. Taken together, our results suggest that expectation of an unpleasant event elicits representations of both its modality-specific properties and its aversive consequences. PMID:26631975
Role of orientation reference selection in motion sickness
NASA Technical Reports Server (NTRS)
Peterka, Robert J.; Black, F. Owen
1990-01-01
Three areas related to human orientation control are investigated: (1) reflexes associated with the control of eye movements and posture; (2) the perception of body rotation and position with respect to gravity; and (3) the strategies used to resolve sensory conflict situations which arise when different sensory systems provide orientation cues which are not consistent with one another or with previous experience. Of particular interest is the possibility that a subject may be able to ignore an inaccurate sensory modality in favor of one or more other sensory modalities which do provide accurate orientation reference information. This process is referred to as sensory selection. This proposal will attempt to quantify subjects' sensory selection abilities and determine whether this ability confers some immunity to the development of motion sickness symptoms.
ERIC Educational Resources Information Center
Kultti, Anne; Pramling, Niklas
2015-01-01
This article proposes a conceptualization of teaching and learning in early childhood education, as the coordination of perspectives held by children and teachers through engaging different sensory modalities in the learning process. It takes a sociocultural theoretical perspective. An empirical example from a routine mealtime situation is…
The Impact of Multimedia Effect on Science Learning: Evidence from Eye Movements
ERIC Educational Resources Information Center
She, Hsiao-Ching; Chen, Yi-Zen
2009-01-01
This study examined how middle school students constructed their understanding of the mitosis and meiosis processes at a molecular level through multimedia learning materials presented in different interaction and sensory modality modes. A two (interaction modes: animation/simulation) by two (sensory modality modes: narration/on-screen text)…
ERIC Educational Resources Information Center
Stampoltzis, Aglaia; Antonopoulou, Ekaterini; Zenakou, Elena; Kouvava, Sofia
2010-01-01
Introduction: Dyslexia has been shown to affect the learning ability of individuals who experience difficulties in processing written information and developing effective study skills. Method: In the present study we assessed the relationship between dyslexia, the learning sensory modalities and educational characteristics in 20 dyslexic and 40…
Méndez-Balbuena, Ignacio; Huidobro, Nayeli; Silva, Mayte; Flores, Amira; Trenado, Carlos; Quintanar, Luis; Arias-Carrión, Oscar; Kristeva, Rumyana; Manjarrez, Elias
2015-10-01
The present investigation documents the electrophysiological occurrence of multisensory stochastic resonance in the human visual pathway elicited by tactile noise. We define multisensory stochastic resonance of brain evoked potentials as the phenomenon in which an intermediate level of input noise of one sensory modality enhances the brain evoked response of another sensory modality. Here we examined this phenomenon in visual evoked potentials (VEPs) modulated by the addition of tactile noise. Specifically, we examined whether a particular level of mechanical Gaussian noise applied to the index finger can improve the amplitude of the VEP. We compared the amplitude of the positive P100 VEP component between zero noise (ZN), optimal noise (ON), and high mechanical noise (HN). The data disclosed an inverted U-like graph for all the subjects, thus demonstrating the occurrence of a multisensory stochastic resonance in the P100 VEP. Copyright © 2015 the American Physiological Society.
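The inverted U-like curve reported here is the signature of stochastic resonance, and it is easy to reproduce with a toy threshold detector. The simulation below is a generic illustration under assumed parameters, not a model of the VEP experiment: a subthreshold periodic signal is undetectable without noise, best transmitted at an intermediate noise level, and drowned out at high noise.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 5000)
signal = 0.8 * np.sin(2 * np.pi * t)   # periodic input, always below threshold
threshold = 1.0

def output_fidelity(noise_sd):
    """Correlation between the input signal and the thresholded noisy output."""
    noisy = signal + rng.normal(0.0, noise_sd, signal.shape)
    spikes = (noisy > threshold).astype(float)  # all-or-none detector output
    if spikes.std() == 0:
        return 0.0  # no threshold crossings: nothing to correlate
    return np.corrcoef(spikes, signal)[0, 1]

# Low, intermediate, and high input noise levels (arbitrary units).
fidelity = {sd: output_fidelity(sd) for sd in (0.05, 0.4, 5.0)}
```

Plotting `fidelity` against noise level yields the inverted U-like graph: intermediate noise lifts the subthreshold signal over the threshold mainly near its peaks, so the output tracks the input, analogous to tactile noise enhancing the P100 VEP in the study.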
Perspectives on Sensory Processing Disorder: A Call for Translational Research
Miller, Lucy J.; Nielsen, Darci M.; Schoen, Sarah A.; Brett-Green, Barbara A.
2009-01-01
This article explores the convergence of two fields, which have similar theoretical origins: a clinical field originally known as sensory integration and a branch of neuroscience that conducts research in an area also called sensory integration. Clinically, the term was used to identify a pattern of dysfunction in children and adults, as well as a related theory, assessment, and treatment method for children who have atypical responses to ordinary sensory stimulation. Currently the term for the disorder is sensory processing disorder (SPD). In neuroscience, the term sensory integration refers to converging information in the brain from one or more sensory domains. A recent subspecialty in neuroscience labeled multisensory integration (MSI) refers to the neural process that occurs when sensory input from two or more different sensory modalities converge. Understanding the specific meanings of the term sensory integration intended by the clinical and neuroscience fields and the term MSI in neuroscience is critical. A translational research approach would improve exploration of crucial research questions in both the basic science and clinical science. Refinement of the conceptual model of the disorder and the related treatment approach would help prioritize which specific hypotheses should be studied in both the clinical and neuroscience fields. The issue is how we can facilitate a translational approach between researchers in the two fields. Multidisciplinary, collaborative studies would increase knowledge of brain function and could make a significant contribution to alleviating the impairments of individuals with SPD and their families. PMID:19826493
Material Encounters and Media Events: What Kind of Mathematics Can a Body Do?
ERIC Educational Resources Information Center
de Freitas, Elizabeth
2016-01-01
This paper contributes to research on the material dimensions of teaching and learning mathematics, arguing that perception is not sensory integration or synthesis of multi-modal information, but rather a speculative investment in specific material encounters. This approach entails sociopolitical consequences for how we work with dis/ability in…
Mesulam, M M
1998-06-01
Sensory information undergoes extensive associative elaboration and attentional modulation as it becomes incorporated into the texture of cognition. This process occurs along a core synaptic hierarchy which includes the primary sensory, upstream unimodal, downstream unimodal, heteromodal, paralimbic and limbic zones of the cerebral cortex. Connections from one zone to another are reciprocal and allow higher synaptic levels to exert a feedback (top-down) influence upon earlier levels of processing. Each cortical area provides a nexus for the convergence of afferents and divergence of efferents. The resultant synaptic organization supports parallel as well as serial processing, and allows each sensory event to initiate multiple cognitive and behavioural outcomes. Upstream sectors of unimodal association areas encode basic features of sensation such as colour, motion, form and pitch. More complex contents of sensory experience such as objects, faces, word-forms, spatial locations and sound sequences become encoded within downstream sectors of unimodal areas by groups of coarsely tuned neurons. The highest synaptic levels of sensory-fugal processing are occupied by heteromodal, paralimbic and limbic cortices, collectively known as transmodal areas. The unique role of these areas is to bind multiple unimodal and other transmodal areas into distributed but integrated multimodal representations. Transmodal areas in the midtemporal cortex, Wernicke's area, the hippocampal-entorhinal complex and the posterior parietal cortex provide critical gateways for transforming perception into recognition, word-forms into meaning, scenes and events into experiences, and spatial locations into targets for exploration. All cognitive processes arise from analogous associative transformations of similar sets of sensory inputs. 
The differences in the resultant cognitive operation are determined by the anatomical and physiological properties of the transmodal node that acts as the critical gateway for the dominant transformation. Interconnected sets of transmodal nodes provide anatomical and computational epicentres for large-scale neurocognitive networks. In keeping with the principles of selectively distributed processing, each epicentre of a large-scale network displays a relative specialization for a specific behavioural component of its principal neuropsychological domain. The destruction of transmodal epicentres causes global impairments such as multimodal anomia, neglect and amnesia, whereas their selective disconnection from relevant unimodal areas elicits modality-specific impairments such as prosopagnosia, pure word blindness and category-specific anomias. The human brain contains at least five anatomically distinct networks. The network for spatial awareness is based on transmodal epicentres in the posterior parietal cortex and the frontal eye fields; the language network on epicentres in Wernicke's and Broca's areas; the explicit memory/emotion network on epicentres in the hippocampal-entorhinal complex and the amygdala; the face-object recognition network on epicentres in the midtemporal and temporopolar cortices; and the working memory-executive function network on epicentres in the lateral prefrontal cortex and perhaps the posterior parietal cortex. Individual sensory modalities give rise to streams of processing directed to transmodal nodes belonging to each of these networks. The fidelity of sensory channels is actively protected through approximately four synaptic levels of sensory-fugal processing. The modality-specific cortices at these four synaptic levels encode the most veridical representations of experience.
Attentional, motivational and emotional modulations, including those related to working memory, novelty-seeking and mental imagery, become increasingly more pronounced within downstream components of unimodal areas, where they help to create a highly edited subjective version of the world.
Sensory Prioritization in Rats: Behavioral Performance and Neuronal Correlates.
Lee, Conrad C Y; Diamond, Mathew E; Arabzadeh, Ehsan
2016-03-16
Operating with some finite quantity of processing resources, an animal would benefit from prioritizing the sensory modality expected to provide key information in a particular context. The present study investigated whether rats dedicate attentional resources to the sensory modality in which a near-threshold event is more likely to occur. We manipulated attention by controlling the likelihood with which a stimulus was presented from one of two modalities. In a whisker session, 80% of trials contained a brief vibration stimulus applied to whiskers and the remaining 20% of trials contained a brief change of luminance. These likelihoods were reversed in a visual session. When a stimulus was presented in the high-likelihood context, detection performance increased and was faster compared with the same stimulus presented in the low-likelihood context. Sensory prioritization was also reflected in neuronal activity in the vibrissal area of primary somatosensory cortex: single units responded differentially to the whisker vibration stimulus when presented with higher probability compared with lower probability. Neuronal activity in the vibrissal cortex displayed signatures of multiplicative gain control and enhanced response to vibration stimuli during the whisker session. In conclusion, rats allocate priority to the more likely stimulus modality and the primary sensory cortex may participate in the redistribution of resources. Detection of low-amplitude events is critical to survival; for example, to warn prey of predators. To formulate a response, decision-making systems must extract minute neuronal signals from the sensory modality that provides key information. Here, we identify the behavioral and neuronal correlates of sensory prioritization in rats. Rats were trained to detect whisker vibrations or visual flickers. Stimuli were embedded in two contexts in which either visual or whisker modality was more likely to occur. 
When a stimulus was presented in the high-likelihood context, detection was faster and more reliable. Neuronal recording from the vibrissal cortex revealed enhanced representation of vibrations in the prioritized context. These results establish the rat as an alternative model organism to primates for studying attention. Copyright © 2016 the authors.
Ravi, Sridhar; Garcia, Jair E; Wang, Chun; Dyer, Adrian G
2016-11-01
Bees navigate in complex environments using visual, olfactory and mechano-sensorial cues. In the lowest region of the atmosphere, the wind environment can be highly unsteady and bees employ fine motor skills to enhance flight control. Recent work reveals sophisticated multi-modal processing of visual and olfactory channels by the bee brain to enhance foraging efficiency, but it currently remains unclear whether wind-induced mechano-sensory inputs are also integrated with visual information to facilitate decision making. Individual honeybees were trained in a linear flight arena with appetitive-aversive differential conditioning to use a context-setting cue of 3 m s−1 cross-wind direction to enable decisions about either a 'blue' or 'yellow' star stimulus being the correct alternative. Colour stimuli properties were mapped in bee-specific opponent-colour spaces to validate saliency, and to thus enable rapid reverse learning. Bees were able to integrate mechano-sensory and visual information to facilitate decisions that were significantly different to chance expectation after 35 learning trials. An independent group of bees was trained to find a single rewarding colour that was unrelated to the wind direction. In these trials, wind was not used as a context-setting cue and served only as a potential distracter in identifying the relevant rewarding visual stimuli. Comparison between respective groups shows that bees can learn to integrate visual and mechano-sensory information in a non-elemental fashion, revealing an unsuspected level of sensory processing in honeybees, and adding to the growing body of knowledge on the capacity of insect brains to use multi-modal sensory inputs in mediating foraging behaviour. © 2016. Published by The Company of Biologists Ltd.
Kanaya, Shoko; Kariya, Kenji; Fujisaki, Waka
2016-10-01
Certain systematic relationships are often assumed between information conveyed from multiple sensory modalities; for instance, a small figure and a high pitch may be perceived as more harmonious. This phenomenon, termed cross-modal correspondence, may result from correlations between multi-sensory signals learned in daily experience of the natural environment. If so, we would observe cross-modal correspondences not only in the perception of artificial stimuli but also in perception of natural objects. To test this hypothesis, we reanalyzed data collected previously in our laboratory examining perceptions of the material properties of wood using vision, audition, and touch. We compared participant evaluations of three perceptual properties (surface brightness, sharpness of sound, and smoothness) of the wood blocks obtained separately via vision, audition, and touch. Significant positive correlations were identified for all properties in the audition-touch comparison, and for two of the three properties in the vision-touch comparison. By contrast, no properties exhibited significant positive correlations in the vision-audition comparison. These results suggest that we learn correlations between multi-sensory signals through experience; however, the strength of this statistical learning is apparently dependent on the particular combination of sensory modalities involved. © The Author(s) 2016.
Hearing regulates Drosophila aggression.
Versteven, Marijke; Vanden Broeck, Lies; Geurten, Bart; Zwarts, Liesbeth; Decraecker, Lisse; Beelen, Melissa; Göpfert, Martin C; Heinrich, Ralf; Callaerts, Patrick
2017-02-21
Aggression is a universal social behavior important for the acquisition of food, mates, territory, and social status. Aggression in Drosophila is context-dependent and can thus be expected to involve inputs from multiple sensory modalities. Here, we use mechanical disruption and genetic approaches in Drosophila melanogaster to identify hearing as an important sensory modality in the context of intermale aggressive behavior. We demonstrate that neuronal silencing and targeted knockdown of hearing genes in the fly's auditory organ elicit abnormal aggression. Further, we show that exposure to courtship or aggression song has opposite effects on aggression. Our data define the importance of hearing in the control of Drosophila intermale aggression and open perspectives to decipher how hearing and other sensory modalities are integrated at the neural circuit level.
Visual and acoustic communication in non-human animals: a comparison.
Rosenthal, G G; Ryan, M J
2000-09-01
The visual and auditory systems are two major sensory modalities employed in communication. Although communication in these two sensory modalities can serve analogous functions and evolve in response to similar selection forces, the two systems also operate under different constraints imposed by the environment and the degree to which these sensory modalities are recruited for non-communication functions. Also, the research traditions in each tend to differ, with studies of mechanisms of acoustic communication tending to take a more reductionist tack often concentrating on single signal parameters, and studies of visual communication tending to be more concerned with multivariate signal arrays in natural environments and higher level processing of such signals. Each research tradition would benefit by being more expansive in its approach.
Santangelo, Valerio
2018-01-01
Higher-order cognitive processes were shown to rely on the interplay between large-scale neural networks. However, brain networks involved with the capability to split attentional resources over multiple spatial locations and multiple stimuli or sensory modalities have been largely unexplored to date. Here I re-analyzed data from Santangelo et al. (2010) to explore the causal interactions between large-scale brain networks during divided attention. During fMRI scanning, participants monitored streams of visual and/or auditory stimuli in one or two spatial locations for detection of occasional targets. This design allowed comparing a condition in which participants monitored one stimulus/modality (either visual or auditory) in two spatial locations vs. a condition in which participants monitored two stimuli/modalities (both visual and auditory) in one spatial location. The analysis of the independent components (ICs) revealed that dividing attentional resources across two spatial locations necessitated a brain network involving the left ventro- and dorso-lateral prefrontal cortex plus the posterior parietal cortex, including the intraparietal sulcus (IPS) and the angular gyrus, bilaterally. The analysis of Granger causality highlighted that the activity of lateral prefrontal regions was predictive of the activity of all of the posterior parietal nodes. By contrast, dividing attention across two sensory modalities necessitated a brain network including nodes belonging to the dorsal frontoparietal network, i.e., the bilateral frontal eye-fields (FEF) and IPS, plus nodes belonging to the salience network, i.e., the anterior cingulate cortex and the left and right anterior insular cortex (aIC). The analysis of Granger causality also highlighted a tight interdependence between the dorsal frontoparietal and salience nodes in trials requiring divided attention between different sensory modalities.
The current findings therefore highlighted a dissociation among brain networks implicated during divided attention across spatial locations and sensory modalities, pointing out the importance of investigating effective connectivity of large-scale brain networks supporting complex behavior. PMID:29535614
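The Granger-causality analysis above asks whether one region's time course helps predict another's beyond that region's own history. A minimal numpy sketch of the underlying F-test on synthetic time courses follows; the signal names ("prefrontal", "parietal"), lag, and coefficients are illustrative assumptions, not values from the study:

```python
import numpy as np

def granger_f(target, source, lag=1):
    """F-statistic: do lagged values of `source` improve prediction of
    `target` beyond `target`'s own history? (bivariate, single-lag case)"""
    n = len(target)
    y = target[lag:]
    # Restricted model: target's own past only
    Xr = np.column_stack([np.ones(n - lag), target[:-lag]])
    # Full model: add the source's past
    Xf = np.column_stack([Xr, source[:-lag]])
    rss_r = np.sum((y - Xr @ np.linalg.lstsq(Xr, y, rcond=None)[0]) ** 2)
    rss_f = np.sum((y - Xf @ np.linalg.lstsq(Xf, y, rcond=None)[0]) ** 2)
    df_denom = n - lag - Xf.shape[1]
    return (rss_r - rss_f) / (rss_f / df_denom)

# Synthetic example: "prefrontal" drives "parietal" with a one-step lag.
rng = np.random.default_rng(0)
n = 500
prefrontal = rng.normal(size=n)
parietal = np.zeros(n)
for t in range(1, n):
    parietal[t] = 0.8 * prefrontal[t - 1] + 0.2 * rng.normal()

f_forward = granger_f(parietal, prefrontal)  # large: source is predictive
f_reverse = granger_f(prefrontal, parietal)  # small: no predictive power
```

A directed dependence of parietal on prefrontal activity, as reported above, would show up as a large forward and a small reverse F-statistic.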
The costly filtering of potential distraction: evidence for a supramodal mechanism.
Marini, Francesco; Chelazzi, Leonardo; Maravita, Angelo
2013-08-01
When dealing with significant sensory stimuli, performance can be hampered by distracting events. Attention mechanisms lessen such negative effects, enabling selection of relevant information while blocking potential distraction. Recent work shows that preparatory brain activity, occurring before a critical stimulus, may reflect mechanisms of attentional control aimed to filter upcoming distracters. However, it is unknown whether the engagement of these filtering mechanisms to counteract distraction in itself taxes cognitive-brain systems, leading to performance costs. Here we address this question and, specifically, seek the behavioral signature of a mechanism for the filtering of potential distraction within and between sensory modalities. We show that, in potentially distracting contexts, a filtering mechanism is engaged to cope with forthcoming distraction, causing a dramatic behavioral cost in no-distracter trials during a speeded tactile discrimination task. We thus demonstrate impaired processing caused by a potential, yet absent, distracter. This effect generalizes across different sensory modalities, such as vision and audition, and across different manipulations of the context, such as the distracter's sensory modality and pertinence to the task. Moreover, activation of the filtering mechanism relies on both strategic and reactive processes, as shown by its dynamic dependence on probabilistic and cross-trial contingencies. Crucially, across participants, the observed strategic cost is inversely related to the interference exerted by a distracter on distracter-present trials. These results attest to a mechanism for the monitoring and filtering of potential distraction in the human brain. Although its activation is indisputably beneficial when distraction occurs, it leads to robust costs when distraction is actually expected but currently absent. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Multisensory speech perception in autism spectrum disorder: From phoneme to whole-word perception.
Stevenson, Ryan A; Baum, Sarah H; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Wallace, Mark T
2017-07-01
Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically-developing (TD) children to integrate auditory and visual speech stimuli in various signal-to-noise ratios (SNR). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration with whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in auditory, visual, and audiovisual modalities. However, and in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate to typical multisensory benefit of whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition. Autism Res 2017, 10: 1280-1290. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
Zou, Min; Li, Shengguo; Klein, William H.; Xiang, Mengqing
2012-01-01
The sensory neurons of the dorsal root ganglia (DRG) must project accurately to their central targets to convey proprioceptive, nociceptive and mechanoreceptive information to the spinal cord. How these different sensory modalities and central connectivities are specified and coordinated still remains unclear. Given the expression of the POU homeodomain transcription factors Brn3a/Pou4f1 and Brn3b/Pou4f2 in DRG and spinal cord sensory neurons, we determined the subtype specification of DRG and spinal cord sensory neurons as well as DRG central projections in Brn3a and Brn3b single and double mutant mice. Inactivation of either or both genes causes no gross abnormalities in early spinal cord neurogenesis; however, in Brn3a single and Brn3a;Brn3b double mutant mice, sensory afferent axons from the DRG fail to form normal trajectories in the spinal cord. The TrkA+ afferents remain outside the dorsal horn and fail to extend into the spinal cord, while the projections of TrkC+ proprioceptive afferents into the ventral horn are also impaired. Moreover, Brn3a mutant DRGs are defective in sensory neuron specification, as marked by the excessive generation of TrkB+ and TrkC+ neurons as well as TrkA+/TrkB+ and TrkA+/TrkC+ double positive cells at early embryonic stages. At later stages in the mutant, TrkB+, TrkC+ and parvalbumin+ neurons diminish while there is a significant increase of CGRP+ and c-ret+ neurons. In addition, Brn3a mutant DRGs display a dramatic down-regulation of Runx1 expression, suggesting that the regulation of DRG sensory neuron specification by Brn3a is mediated in part by Runx1. Our results together demonstrate a critical role for Brn3a in generating DRG sensory neuron diversity and regulating sensory afferent projections to the central targets. PMID:22326227
Trumpp, Natalie M; Traub, Felix; Pulvermüller, Friedemann; Kiefer, Markus
2014-02-01
Classical theories of semantic memory assume that concepts are represented in a unitary amodal memory system. In challenging this classical view, pure or hybrid modality-specific theories propose that conceptual representations are grounded in the sensory-motor brain areas, which typically process sensory and action-related information. Although neuroimaging studies provided evidence for a functional-anatomical link between conceptual processing of sensory or action-related features and the sensory-motor brain systems, it has been argued that aspects of such sensory-motor activation may not directly reflect conceptual processing but rather strategic imagery or postconceptual elaboration. In the present ERP study, we investigated masked effects of acoustic and action-related conceptual features to probe unconscious automatic conceptual processing in isolation. Subliminal feature-specific ERP effects at frontocentral electrodes were observed, which differed with regard to polarity, topography, and underlying brain electrical sources in congruency with earlier findings under conscious viewing conditions. These findings suggest that conceptual acoustic and action representations can also be unconsciously accessed, thereby excluding any postconceptual strategic processes. This study therefore further substantiates a grounding of conceptual and semantic processing in action and perception.
Palama, Amaya; Malsert, Jennifer; Gentaz, Edouard
2018-01-01
The present study examined whether 6-month-old infants could transfer amodal information (i.e. independently of sensory modalities) from emotional voices to emotional faces. Thus, sequences of successive emotional stimuli crossing from one sensory modality (auditory voice) to another (visual face), corresponding to a cross-modal transfer, were displayed to 24 infants. Each sequence presented a single emotional (angry or happy) or neutral voice, followed by the simultaneous presentation of two static emotional faces (angry or happy, congruous or incongruous with the emotional voice). Eye movements in response to the visual stimuli were recorded with an eye-tracker. First, results suggested no difference in infants' looking time to happy or angry face after listening to the neutral voice or the angry voice. Nevertheless, after listening to the happy voice, infants looked longer at the incongruent angry face (the mouth area in particular) than the congruent happy face. These results revealed that a cross-modal transfer (from auditory to visual modalities) is possible for 6-month-old infants only after the presentation of a happy voice, suggesting that they recognize this emotion amodally.
Perception of Scenes in Different Sensory Modalities: A Result of Modal Completion.
Gruber, Ronald R; Block, Richard A
2017-01-01
Dynamic perception includes amodal and modal completion, along with apparent movement. It fills temporal gaps for single objects. In 2 experiments, using 6 stimulus presentation conditions involving 3 sensory modalities, participants experienced 8-10 sequential stimuli (200 ms each) with interstimulus intervals (ISIs) of 0.25-7.0 s. Experiments focused on spatiotemporal completion (walking), featural completion (object changing), auditory completion (falling bomb), and haptic changes (insect crawling). After each trial, participants judged whether they experienced the process of "happening" or whether they simply knew that the process must have occurred. The phenomenon was frequency independent, being reported at short ISIs but not at long ISIs. The phenomenon involves dynamic modal completion and possibly also conceptual processes.
McFadyen, Bradford J; Cantin, Jean-François; Swaine, Bonnie; Duchesneau, Guylaine; Doyon, Julien; Dumas, Denyse; Fait, Philippe
2009-09-01
Objective: To study the effects of sensory modality of simultaneous tasks during walking with and without obstacles after moderate to severe traumatic brain injury (TBI). Design: Group comparison study. Setting: Gait analysis laboratory within a postacute rehabilitation facility. Participants: Volunteer sample (N=18): persons with moderate to severe TBI (n=11) (9 men, 3 women; age, 37.56+/-13.79 y) and a comparison group (n=7) of subjects without neurologic problems matched on average for body mass index and age (4 men, 3 women; age, 39.19+/-17.35 y). Interventions: Not applicable. Main outcome measures: Magnitudes and variability for walking speeds, foot clearance margins (ratio of foot clearance distance to obstacle height), and response reaction times (both direct and as a relative cost because of obstacle avoidance). Results: The TBI group had well-recovered walking speeds and a general ability to avoid obstacles. However, these subjects did show lower trail limb toe clearances (P=.003) across all conditions. Response reaction times to the Stroop tasks were longer in general for the TBI group (P=.017), and this group showed significant increases in response reaction times for the visual modality within the more challenging obstacle avoidance task that was not observed for control subjects. A measure of multitask costs related to differences in response reaction times between obstructed and unobstructed trials also only showed increased attention costs for the visual over the auditory stimuli for the TBI group (P=.002). Conclusions: Mobility is a complex construct, and the present results provide preliminary findings that, even after good locomotor recovery, subjects with moderate to severe TBI show residual locomotor deficits in multitasking. Furthermore, our results suggest that sensory modality is important, and greater multitask costs occur during sensory competition (ie, visual interference).
Multi-Sensory, Multi-Modal Concepts for Information Understanding
2004-04-01
Outline: the modern dilemma of knowledge acquisition; a vision for information access and understanding; emerging concepts for multi-sensory, multi-modal information understanding. David L. Hall, Ph.D., School of Information Sciences and Technology. Historically, information displays for the display and understanding of data fusion products have focused on the use of vision.
Wahn, Basil; König, Peter
2015-01-01
Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs in a pre-attentive processing stage.
Antfolk, Christian; D'Alonzo, Marco; Controzzi, Marco; Lundborg, Göran; Rosén, Birgitta; Sebelius, Fredrik; Cipriani, Christian
2013-01-01
This work assesses the ability of transradial amputees to discriminate multi-site tactile stimuli in sensory discrimination tasks. It compares different sensory feedback modalities using an artificial hand prosthesis in: 1) a modality matched paradigm where pressure recorded on the five fingertips of the hand was fed back as pressure stimulation on five target points on the residual limb; and 2) a modality mismatched paradigm where the pressures were transformed into mechanical vibrations and fed back. Eight transradial amputees took part in the study and were divided in two groups based on the integrity of their phantom map; group A had a complete phantom map on the residual limb whereas group B had an incomplete or nonexisting map. The ability in localizing stimuli was compared with that of 10 healthy subjects using the vibration feedback and 11 healthy subjects using the pressure feedback (in a previous study), on their forearms, in similar experiments. Results demonstrate that pressure stimulation surpassed vibrotactile stimulation in multi-site sensory feedback discrimination. Furthermore, we demonstrate that subjects with a detailed phantom map had the best discrimination performance and even surpassed healthy participants for both feedback paradigms whereas group B had the worst performance overall. Finally, we show that placement of feedback devices on a complete phantom map improves multi-site sensory feedback discrimination, independently of the feedback modality.
Influence of aging on thermal and vibratory thresholds of quantitative sensory testing.
Lin, Yea-Huey; Hsieh, Song-Chou; Chao, Chi-Chao; Chang, Yang-Chyuan; Hsieh, Sung-Tsang
2005-09-01
Quantitative sensory testing has become a common approach to evaluate thermal and vibratory thresholds in various types of neuropathies. To understand the effect of aging on sensory perception, we measured warm, cold, and vibratory thresholds by performing quantitative sensory testing on a population of 484 normal subjects (175 males and 309 females), aged 48.61 +/- 14.10 (range 20-86) years. Sensory thresholds of the hand and foot were measured with two algorithms: the method of limits (Limits) and the method of level (Level). Thresholds measured by Limits are reaction-time-dependent, while those measured by Level are independent of reaction time. In addition, we explored (1) the correlations of thresholds between these two algorithms, (2) the effect of age on differences in thresholds between algorithms, and (3) differences in sensory thresholds between the two test sites. Age was consistently and significantly correlated with sensory thresholds of all tested modalities measured by both algorithms on multivariate regression analysis compared with other factors, including gender, body height, body weight, and body mass index. When thresholds were plotted against age, slopes differed between sensory thresholds of the hand and those of the foot: for the foot, slopes were steeper compared with those for the hand for each sensory modality. Sensory thresholds of both test sites measured by Level were highly correlated with those measured by Limits, and thresholds measured by Limits were higher than those measured by Level. Differences in sensory thresholds between the two algorithms were also correlated with age. Thresholds of the foot were higher than those of the hand for each sensory modality, and this difference between the hand and foot (measured with both Level and Limits) was also correlated with age.
These findings suggest that age is the most significant factor in determining sensory thresholds compared with the other factors of gender and anthropometric parameters, and this provides a foundation for investigating the neurobiologic significance of aging on the processing of sensory stimuli.
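The multivariate-regression step described above can be sketched as follows. The data here are entirely synthetic and the coefficients are illustrative assumptions only, chosen so that foot thresholds rise more steeply with age than hand thresholds, mimicking the pattern the study reports:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 484  # sample size matching the study

# Synthetic covariates (arbitrary units, illustrative only)
age = rng.uniform(20, 86, size=n)
height = rng.normal(165.0, 8.0, size=n)

# Simulated vibratory thresholds: the age slope for the foot is set
# steeper than for the hand, mimicking the reported pattern.
hand_threshold = 0.05 * age + 0.01 * height + rng.normal(0.0, 1.0, size=n)
foot_threshold = 0.15 * age + 0.01 * height + rng.normal(0.0, 1.0, size=n)

def age_slope(threshold):
    """Least-squares regression coefficient of threshold on age,
    adjusting for height (multivariate linear regression)."""
    X = np.column_stack([np.ones(n), age, height])
    beta, *_ = np.linalg.lstsq(X, threshold, rcond=None)
    return beta[1]

slope_hand = age_slope(hand_threshold)
slope_foot = age_slope(foot_threshold)
```

Comparing the fitted age slopes for the two sites reproduces the qualitative finding: the foot's slope exceeds the hand's, so age-related differences between sites grow with age.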
Auditory and visual cortex of primates: a comparison of two sensory systems
Rauschecker, Josef P.
2014-01-01
A comparative view of the brain, comparing related functions across species and sensory systems, offers a number of advantages. In particular, it allows separating the formal purpose of a model structure from its implementation in specific brains. Models of auditory cortical processing can be conceived by analogy to the visual cortex, incorporating neural mechanisms that are found in both the visual and auditory systems. Examples of such canonical features on the columnar level are direction selectivity, size/bandwidth selectivity, as well as receptive fields with segregated versus overlapping on- and off-sub-regions. On a larger scale, parallel processing pathways have been envisioned that represent the two main facets of sensory perception: 1) identification of objects and 2) processing of space. Expanding this model in terms of sensorimotor integration and control offers an overarching view of cortical function independent of sensory modality. PMID:25728177
Xia, Jing; Zhang, Wei; Jiang, Yizhou; Li, You; Chen, Qi
2018-05-16
Practice and experiences gradually shape the central nervous system, from the synaptic level to large-scale neural networks. In the natural multisensory environment, even when inundated by streams of information from multiple sensory modalities, our brain does not give equal weight to different modalities. Rather, visual information more frequently receives preferential processing and eventually dominates consciousness and behavior, i.e., visual dominance. However, it remains unknown whether the practice effect during cross-modal selective attention is supra-modal or modality-specific, and whether it shows modality preferences similar to the visual dominance effect in the multisensory environment. To answer these two questions, we adopted a cross-modal selective attention paradigm in conjunction with a hybrid fMRI design. Behaviorally, visual performance significantly improved while auditory performance remained constant with practice, indicating that visual attention adapted behavior with practice more flexibly than auditory attention did. At the neural level, the practice effect was associated with decreasing neural activity in the frontoparietal executive network and increasing activity in the default mode network, which occurred independently of the modality attended, i.e., the supra-modal mechanisms. On the other hand, functional decoupling between the auditory and the visual system was observed with the progress of practice, which varied as a function of the modality attended. The auditory system was functionally decoupled from both the dorsal and ventral visual streams during auditory attention, but only from the ventral visual stream during visual attention. To efficiently suppress irrelevant visual information with practice, auditory attention needs to additionally decouple the auditory system from the dorsal visual stream.
The modality-specific mechanisms, together with the behavioral effect, thus support the visual dominance model in terms of the practice effect during cross-modal selective attention. Copyright © 2018 Elsevier Ltd. All rights reserved.
Spontaneous cortical activity alternates between motifs defined by regional axonal projections
Mohajerani, Majid H.; Chan, Allen W.; Mohsenvand, Mostafa; LeDue, Jeffrey; Liu, Rui; McVea, David A.; Boyd, Jamie D.; Wang, Yu Tian; Reimers, Mark; Murphy, Timothy H.
2014-01-01
In lightly anaesthetized or awake adult mice using millisecond timescale voltage sensitive dye imaging, we show that a palette of sensory-evoked and hemisphere-wide activity motifs are represented in spontaneous activity. These motifs can reflect multiple modes of sensory processing including vision, audition, and touch. Similar cortical networks were found with direct cortical activation using channelrhodopsin-2. Regional analysis of activity spread indicated modality specific sources such as primary sensory areas, and a common posterior-medial cortical sink where sensory activity was extinguished within the parietal association area, and a secondary anterior medial sink within the cingulate/secondary motor cortices for visual stimuli. Correlation analysis between functional circuits and intracortical axonal projections indicated a common framework corresponding to long-range mono-synaptic connections between cortical regions. Maps of intracortical mono-synaptic structural connections predicted hemisphere-wide patterns of spontaneous and sensory-evoked depolarization. We suggest that an intracortical monosynaptic connectome shapes the ebb and flow of spontaneous cortical activity. PMID:23974708
A Systematic Review of Sensory Processing Interventions for Children with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Case-Smith, Jane; Weaver, Lindy L.; Fristad, Mary A.
2015-01-01
Children with autism spectrum disorders often exhibit co-occurring sensory processing problems and receive interventions that target self-regulation. In current practice, sensory interventions apply different theoretic constructs, focus on different goals, use a variety of sensory modalities, and involve markedly disparate procedures. Previous…
Age-equivalent top-down modulation during cross-modal selective attention.
Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam
2014-12-01
Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.
Auditory peripersonal space in humans.
Farnè, Alessandro; Làdavas, Elisabetta
2002-10-01
In the present study we report neuropsychological evidence of the existence of an auditory peripersonal space representation around the head in humans and its characteristics. In a group of right brain-damaged patients with tactile extinction, we found that a sound delivered near the ipsilesional side of the head (20 cm) strongly extinguished a tactile stimulus delivered to the contralesional side of the head (cross-modal auditory-tactile extinction). By contrast, when an auditory stimulus was presented far from the head (70 cm), cross-modal extinction was dramatically reduced. This spatially specific cross-modal extinction was most consistently found (i.e., both in the front and back spaces) when a complex sound was presented, like a white noise burst. Pure tones produced spatially specific cross-modal extinction when presented in the back space, but not in the front space. In addition, the most severe cross-modal extinction emerged when sounds came from behind the head, thus showing that the back space is more sensitive than the front space to the sensory interaction of auditory-tactile inputs. Finally, when cross-modal effects were investigated by reversing the spatial arrangement of cross-modal stimuli (i.e., touch on the right and sound on the left), we found that an ipsilesional tactile stimulus, although inducing a small amount of cross-modal tactile-auditory extinction, did not produce any spatial-specific effect. Therefore, the selective aspects of cross-modal interaction found near the head cannot be explained by a competition between a damaged left spatial representation and an intact right spatial representation. Thus, consistent with neurophysiological evidence from monkeys, our findings strongly support the existence, in humans, of an integrated cross-modal system coding auditory and tactile stimuli near the body, that is, in the peripersonal space.
Gopalakrishnan, R; Burgess, R C; Plow, E B; Floden, D P; Machado, A G
2015-09-24
Pain anticipation plays a critical role in pain chronification and results in disability due to pain avoidance. It is important to understand how different sensory modalities (auditory, visual or tactile) may influence pain anticipation as different strategies could be applied to mitigate anticipatory phenomena and chronification. In this study, using a countdown paradigm, we evaluated with magnetoencephalography the neural networks associated with pain anticipation elicited by different sensory modalities in normal volunteers. When encountered with well-established cues that signaled pain, visual and somatosensory cortices engaged the pain neuromatrix areas early during the countdown process, whereas the auditory cortex displayed delayed processing. In addition, during pain anticipation, the visual cortex displayed independent processing capabilities after learning the contextual meaning of cues from associative and limbic areas. Interestingly, cross-modal activation was also evident and strong when visual and tactile cues signaled upcoming pain. Dorsolateral prefrontal cortex and mid-cingulate cortex showed significant activity during pain anticipation regardless of modality. Our results show pain anticipation is processed with great time efficiency by a highly specialized and hierarchical network. The highest degree of higher-order processing is modulated by context (pain) rather than content (modality) and rests within the associative limbic regions, corroborating their intrinsic role in chronification. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Perceptual load interacts with stimulus processing across sensory modalities.
Klemen, J; Büchel, C; Rose, M
2009-06-01
According to perceptual load theory, processing of task-irrelevant stimuli is limited by the perceptual load of a parallel attended task if both the task and the irrelevant stimuli are presented to the same sensory modality. However, it remains a matter of debate whether the same principles apply to cross-sensory perceptual load and, more generally, what form cross-sensory attentional modulation in early perceptual areas takes in humans. Here we addressed these questions using functional magnetic resonance imaging. Participants undertook an auditory one-back working memory task of low or high perceptual load, while concurrently viewing task-irrelevant images at one of three object visibility levels. The processing of the visual and auditory stimuli was measured in the lateral occipital cortex (LOC) and auditory cortex (AC), respectively. Cross-sensory interference with sensory processing was observed in both the LOC and AC, in accordance with previous results of unisensory perceptual load studies. The present neuroimaging results therefore warrant the extension of perceptual load theory from a unisensory to a cross-sensory context: a validation of this cross-sensory interference effect through behavioural measures would consolidate the findings.
ERIC Educational Resources Information Center
Ryll, Stefan
2017-01-01
This quantitative research examines the perceptions of culinary arts/management educators and culinary industry practitioners on the future of online culinary arts education. Specifically pertaining to the recommended procedures by educators and chefs to judge and critique the quality of food products in terms of sensory modalities, and what the key…
ERIC Educational Resources Information Center
Stephan, Denise Nadine; Koch, Iring
2010-01-01
Two experiments examined the role of compatibility of input and output (I-O) modality mappings in task switching. We define I-O modality compatibility in terms of similarity of stimulus modality and modality of response-related sensory consequences. Experiment 1 included switching between 2 compatible tasks (auditory-vocal vs. visual-manual) and…
Preservation of crossmodal selective attention in healthy aging
Hugenschmidt, Christina E.; Peiffer, Ann M.; McCoy, Thomas P.; Hayasaka, Satoru; Laurienti, Paul J.
2010-01-01
The goal of the present study was to determine if older adults benefited from attention to a specific sensory modality in a voluntary attention task and evidenced changes in voluntary or involuntary attention when compared to younger adults. Suppressing and enhancing effects of voluntary attention were assessed using two cued forced-choice tasks, one that asked participants to localize and one that asked them to categorize visual and auditory targets. Involuntary attention was assessed using the same tasks, but with no attentional cues. The effects of attention were evaluated using traditional comparisons of means and Cox proportional hazards models. All analyses showed that older adults benefited behaviorally from selective attention in both visual and auditory conditions, including robust suppressive effects of attention. Of note, the performance of the older adults was commensurate with that of younger adults in almost all analyses, suggesting that older adults can successfully engage crossmodal attention processes. Thus, age-related increases in distractibility across sensory modalities are likely due to mechanisms other than deficits in attentional processing. PMID:19404621
Galvez-Pol, A; Calvo-Merino, B; Capilla, A; Forster, B
2018-07-01
Working memory (WM) supports temporary maintenance of task-relevant information. This process is associated with persistent activity in the sensory cortex processing the information (e.g., visual stimuli activate visual cortex). However, we argue here that more multifaceted stimuli moderate this sensory-locked activity and recruit distinctive cortices. Specifically, perception of bodies recruits somatosensory cortex (SCx) beyond early visual areas (suggesting embodiment processes). Here we explore persistent activation in processing areas beyond the sensory cortex initially relevant to the modality of the stimuli. Using visual and somatosensory evoked-potentials in a visual WM task, we isolated different levels of visual and somatosensory involvement during encoding of body and non-body-related images. Persistent activity increased in SCx only when maintaining body images in WM, whereas visual/posterior regions' activity increased significantly when maintaining non-body images. Our results bridge WM and embodiment frameworks, supporting a dynamic WM process where the nature of the information summons specific processing resources. Copyright © 2018 Elsevier Inc. All rights reserved.
Schauder, Kimberly B.; Bennetto, Loisa
2016-01-01
Sensory processing differences have long been associated with autism spectrum disorder (ASD), and they have recently been added to the diagnostic criteria for the disorder. The focus on sensory processing in ASD research has increased substantially in the last decade. This research has been approached from two different perspectives: the first focuses on characterizing the symptoms that manifest in response to real world sensory stimulation, and the second focuses on the neural pathways and mechanisms underlying sensory processing. The purpose of this paper is to integrate the empirical literature on sensory processing in ASD from the last decade, including both studies characterizing sensory symptoms and those that investigate neural response to sensory stimuli. We begin with a discussion of definitions to clarify some of the inconsistencies in terminology that currently exist in the field. Next, the sensory symptoms literature is reviewed with a particular focus on developmental considerations and the relationship of sensory symptoms to other core features of the disorder. Then, the neuroscience literature is reviewed with a focus on methodological approaches and specific sensory modalities. Currently, these sensory symptoms and neuroscience perspectives are largely developing independently from each other leading to multiple, but separate, theories and methods, thus creating a multidisciplinary approach to sensory processing in ASD. In order to progress our understanding of sensory processing in ASD, it is now critical to integrate these two research perspectives and move toward an interdisciplinary approach. This will inevitably aid in a better understanding of the underlying biological basis of these symptoms and help realize the translational value through its application to early identification and treatment. 
The review ends with specific recommendations for future research to help bridge these two research perspectives in order to advance our understanding of sensory processing in ASD. PMID:27378838
D'Imperio, Daniela; Scandola, Michele; Gobbetto, Valeria; Bulgarelli, Cristina; Salgarello, Matteo; Avesani, Renato; Moro, Valentina
2017-10-01
Cross-modal interactions improve the processing of external stimuli, particularly when an isolated sensory modality is impaired. When information from different modalities is integrated, object recognition is facilitated probably as a result of bottom-up and top-down processes. The aim of this study was to investigate the potential effects of cross-modal stimulation in a case of simultanagnosia. We report a detailed analysis of clinical symptoms and an 18F-fluorodeoxyglucose (FDG) brain positron emission tomography/computed tomography (PET/CT) study of a patient affected by Balint's syndrome, a rare and invasive visual-spatial disorder following bilateral parieto-occipital lesions. An experiment was conducted to investigate the effects of visual and nonvisual cues on performance in tasks involving the recognition of overlapping pictures. Four modalities of sensory cues were used: visual, tactile, olfactory, and auditory. Data from neuropsychological tests showed the presence of ocular apraxia, optic ataxia, and simultanagnosia. The results of the experiment indicate a positive effect of the cues on the recognition of overlapping pictures, not only in the identification of the congruent valid-cued stimulus (target) but also in the identification of the other, noncued stimuli. All the sensory modalities analyzed (except the auditory stimulus) were efficacious in terms of increasing visual recognition. Cross-modal integration improved the patient's ability to recognize overlapping figures. However, while in the visual unimodal modality both bottom-up (priming, familiarity effect, disengagement of attention) and top-down processes (mental representation and short-term memory, the endogenous orientation of attention) are involved, in the cross-modal integration it is semantic representations that mainly activate visual recognition processes. These results are potentially useful for the design of rehabilitation training for attentional and visual-perceptual deficits.
Milne, Alice E; Petkov, Christopher I; Wilson, Benjamin
2017-07-05
Language flexibly supports the human ability to communicate using different sensory modalities, such as writing and reading in the visual modality and speaking and listening in the auditory domain. Although it has been argued that nonhuman primate communication abilities are inherently multisensory, direct behavioural comparisons between human and nonhuman primates are scant. Artificial grammar learning (AGL) tasks and statistical learning experiments can be used to emulate ordering relationships between words in a sentence. However, previous comparative work using such paradigms has primarily investigated sequence learning within a single sensory modality. We used an AGL paradigm to evaluate how humans and macaque monkeys learn and respond to identically structured sequences of either auditory or visual stimuli. In the auditory and visual experiments, we found that both species were sensitive to the ordering relationships between elements in the sequences. Moreover, the humans and monkeys produced largely similar response patterns to the visual and auditory sequences, indicating that the sequences are processed in comparable ways across the sensory modalities. These results provide evidence that human sequence processing abilities stem from an evolutionarily conserved capacity that appears to operate comparably across the sensory modalities in both human and nonhuman primates. The findings set the stage for future neurobiological studies to investigate the multisensory nature of these sequencing operations in nonhuman primates and how they compare to related processes in humans. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Product perception from sensory stimuli: the case of vacuum cleaner.
Almeida e Silva, Caio Márcio; Okimoto, Maria Lúciar R L; Tanure, Raffaela Leane Zenni
2012-01-01
This paper discusses the importance of considering different sensory stimuli in the perception of a product. We conducted an experiment that examined whether there is a difference in the perception of sensory stimuli when they are artificially isolated. The result is an analysis of the different sensory modalities, relating them to the product and to one another.
Fast transfer of crossmodal time interval training.
Chen, Lihan; Zhou, Xiaolin
2014-06-01
Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.
Concepts and Categories: A Cognitive Neuropsychological Perspective
Mahon, Bradford Z.; Caramazza, Alfonso
2010-01-01
One of the most provocative and exciting issues in cognitive science is how neural specificity for semantic categories of common objects arises in the functional architecture of the brain. More than two decades of research on the neuropsychological phenomenon of category-specific semantic deficits has generated detailed claims about the organization and representation of conceptual knowledge. More recently, researchers have sought to test hypotheses developed on the basis of neuropsychological evidence with functional imaging. From those two fields, the empirical generalization emerges that object domain and sensory modality jointly constrain the organization of knowledge in the brain. At the same time, research within the embodied cognition framework has highlighted the need to articulate how information is communicated between the sensory and motor systems, and processes that represent and generalize abstract information. Those developments point toward a new approach for understanding category specificity in terms of the coordinated influences of diverse regions and cognitive systems. PMID:18767921
Cross-modal plasticity in developmental and age-related hearing loss: Clinical implications.
Glick, Hannah; Sharma, Anu
2017-01-01
This review explores cross-modal cortical plasticity as a result of auditory deprivation in populations with hearing loss across the age spectrum, from development to adulthood. Cross-modal plasticity refers to the phenomenon when deprivation in one sensory modality (e.g. the auditory modality as in deafness or hearing loss) results in the recruitment of cortical resources of the deprived modality by intact sensory modalities (e.g. visual or somatosensory systems). We discuss recruitment of auditory cortical resources for visual and somatosensory processing in deafness and in lesser degrees of hearing loss. We describe developmental cross-modal re-organization in the context of congenital or pre-lingual deafness in childhood and in the context of adult-onset, age-related hearing loss, with a focus on how cross-modal plasticity relates to clinical outcomes. We provide both single-subject and group-level evidence of cross-modal re-organization by the visual and somatosensory systems in bilateral, congenital deafness, single-sided deafness, adults with early-stage, mild-moderate hearing loss, and individual adult and pediatric patients exhibiting excellent and average speech perception with hearing aids and cochlear implants. We discuss a framework in which changes in cortical resource allocation secondary to hearing loss result in decreased intra-modal plasticity in auditory cortex, accompanied by increased cross-modal recruitment of auditory cortices by the other sensory systems, and simultaneous compensatory activation of frontal cortices. The frontal cortices, as we will discuss, play an important role in mediating cognitive compensation in hearing loss. Given the wide range of variability in behavioral performance following audiological intervention, changes in cortical plasticity may play a valuable role in the prediction of clinical outcomes following intervention. 
Further, the development of new technologies and rehabilitation strategies that incorporate brain-based biomarkers may help better serve hearing impaired populations across the lifespan. Copyright © 2016 Elsevier B.V. All rights reserved.
Thalamic control of sensory selection in divided attention.
Wimmer, Ralf D; Schmitt, L Ian; Davidson, Thomas J; Nakajima, Miho; Deisseroth, Karl; Halassa, Michael M
2015-10-29
How the brain selects appropriate sensory inputs and suppresses distractors is unknown. Given the well-established role of the prefrontal cortex (PFC) in executive function, its interactions with sensory cortical areas during attention have been hypothesized to control sensory selection. To test this idea and, more generally, dissect the circuits underlying sensory selection, we developed a cross-modal divided-attention task in mice that allowed genetic access to this cognitive process. By optogenetically perturbing PFC function in a temporally precise window, the ability of mice to select appropriately between conflicting visual and auditory stimuli was diminished. Equivalent sensory thalamocortical manipulations showed that behaviour was causally dependent on PFC interactions with the sensory thalamus, not sensory cortex. Consistent with this notion, we found neurons of the visual thalamic reticular nucleus (visTRN) to exhibit PFC-dependent changes in firing rate predictive of the modality selected. visTRN activity was causal to performance as confirmed by bidirectional optogenetic manipulations of this subnetwork. Using a combination of electrophysiology and intracellular chloride photometry, we demonstrated that visTRN dynamically controls visual thalamic gain through feedforward inhibition. Our experiments introduce a new subcortical model of sensory selection, in which the PFC biases thalamic reticular subnetworks to control thalamic sensory gain, selecting appropriate inputs for further processing.
Hearing shapes our perception of time: temporal discrimination of tactile stimuli in deaf people.
Bolognini, Nadia; Cecchetto, Carlo; Geraci, Carlo; Maravita, Angelo; Pascual-Leone, Alvaro; Papagno, Costanza
2012-02-01
Confronted with the loss of one type of sensory input, we compensate using information conveyed by other senses. However, losing one type of sensory information at specific developmental times may lead to deficits across all sensory modalities. We addressed the effect of auditory deprivation on the development of tactile abilities, taking into account changes occurring at the behavioral and cortical level. Congenitally deaf and hearing individuals performed two tactile tasks, the first requiring the discrimination of the temporal duration of touches and the second requiring the discrimination of their spatial length. Compared with hearing individuals, deaf individuals were impaired only in tactile temporal processing. To explore the neural substrate of this difference, we ran a TMS experiment. In deaf individuals, the auditory association cortex was involved in temporal and spatial tactile processing, with the same chronometry as the primary somatosensory cortex. In hearing participants, the involvement of auditory association cortex occurred at a later stage and selectively for temporal discrimination. The different chronometry in the recruitment of the auditory cortex in deaf individuals correlated with the tactile temporal impairment. Thus, early hearing experience seems to be crucial to develop an efficient temporal processing across modalities, suggesting that plasticity does not necessarily result in behavioral compensation.
BINDING, SPATIAL ATTENTION AND PERCEPTUAL AWARENESS
Robertson, Lynn C.
2012-01-01
The world is experienced as a unified whole, but sensory systems do not deliver it to the brain in this way. Signals from different sensory modalities are initially registered in separate brain areas —even within a modality, features of the sensory mosaic such as colour, size, shape and motion are fragmented and registered in specialized areas of the cortex. How does this information become bound together in experience? Findings from the study of abnormal binding — for example, after stroke — and unusual binding — as in synaesthesia — might help us to understand the cognitive and neural mechanisms that contribute to solving this ‘binding problem’. PMID:12563280
Pre-Motor Response Time Benefits in Multi-Modal Displays
2013-11-12
when animals are presented with stimuli from two sensory modalities as compared with stimulation from only one modality. The combinations of two…modality attention and orientation behaviors (see also Wallace, Meredith, & Stein, 1998). Multi-modal stimulation in the world is not always…perceptually when the stimuli are congruent. In another study, Craig (2006) had participants judge the direction of apparent motion by stimulating
Local and Global Cross-Modal Influences between Vision and Hearing, Tasting, Smelling, or Touching
ERIC Educational Resources Information Center
Forster, Jens
2011-01-01
It is suggested that the distinction between global versus local processing styles exists across sensory modalities. Activation of one-way of processing in one modality should affect processing styles in a different modality. In 12 studies, auditory, haptic, gustatory or olfactory global versus local processing was induced, and participants were…
Sound Symbolism in Infancy: Evidence for Sound-Shape Cross-Modal Correspondences in 4-Month-Olds
ERIC Educational Resources Information Center
Ozturk, Ozge; Krehm, Madelaine; Vouloumanos, Athena
2013-01-01
Perceptual experiences in one modality are often dependent on activity from other sensory modalities. These cross-modal correspondences are also evident in language. Adults and toddlers spontaneously and consistently map particular words (e.g., "kiki") to particular shapes (e.g., angular shapes). However, the origins of these systematic mappings…
Fort, Alexandra; Delpuech, Claude; Pernier, Jacques; Giard, Marie-Hélène
2002-10-01
Very recently, a number of neuroimaging studies in humans have begun to investigate the question of how the brain integrates information from different sensory modalities to form unified percepts. Already, intermodal neural processing appears to depend on the modalities of inputs or the nature (speech/non-speech) of information to be combined. Yet, the variety of paradigms, stimuli and techniques used makes it difficult to understand the relationships between the factors operating at the perceptual level and the underlying physiological processes. In a previous experiment, we used event-related potentials to describe the spatio-temporal organization of audio-visual interactions during a bimodal object recognition task. Here we examined the network of cross-modal interactions involved in simple detection of the same objects. The objects were defined either by unimodal auditory or visual features alone, or by the combination of the two features. As expected, subjects detected bimodal stimuli more rapidly than either unimodal stimuli. Combined analysis of potentials, scalp current densities and dipole modeling revealed several interaction patterns within the first 200 ms post-stimulus: in occipito-parietal visual areas (45-85 ms), in deep brain structures, possibly the superior colliculus (105-140 ms), and in right temporo-frontal regions (170-185 ms). These interactions differed from those found during object identification in sensory-specific areas and possibly in the superior colliculus, indicating that the neural operations governing multisensory integration depend crucially on the nature of the perceptual processes involved.
A pain in the bud? Implications of cross-modal sensitivity for pain experience.
Perkins, Monica; de Bruyne, Marien; Giummarra, Melita J
2016-11-01
There is growing evidence that enhanced sensitivity to painful clinical procedures and chronic pain are related to greater sensitivity to other sensory inputs, such as bitter taste. We examined cross-modal sensitivities in two studies. Study 1 assessed associations between bitter taste sensitivity, pain tolerance, and fear of pain in 48 healthy young adults. Participants were classified as non-tasters, tasters and super-tasters using a bitter taste test (6-n-propylthiouracil; PROP). The latter group had significantly higher fear of pain (Fear of Pain Questionnaire) than tasters (p=.036, effect size r = .48). There was only a trend for an association between bitter taste intensity ratings and intensity of pain at the point of pain tolerance in a cold pressor test (p=.04). In Study 2, 40 healthy young adults completed the Adolescent/Adult Sensory Profile before rating intensity and unpleasantness of innocuous (33 °C), moderate (41 °C), and high intensity (44 °C) thermal pain stimulations. The sensory-sensitivity subscale was positively correlated with both intensity and unpleasantness ratings. Canonical correlation showed that only sensitivity to audition and touch (not taste/smell) were associated with intensity of moderate and high (not innocuous) thermal stimuli. Together these findings suggest that there are cross-modal associations predominantly between sensitivity to exteroceptive inputs (i.e., taste, touch, sound) and the affective dimensions of pain, including noxious heat and intolerable cold pain, in healthy adults. These cross-modal sensitivities may arise due to greater psychological aversion to salient sensations, or from shared neural circuitry for processing disparate sensory modalities.
The transformation of multi-sensory experiences into memories during sleep.
Rothschild, Gideon
2018-03-26
Our everyday lives present us with a continuous stream of multi-modal sensory inputs. While most of this information is soon forgotten, sensory information associated with salient experiences can leave long-lasting memories in our minds. Extensive human and animal research has established that the hippocampus is critically involved in this process of memory formation and consolidation. However, the underlying mechanistic details are still only partially understood. Specifically, the hippocampus has often been suggested to encode information during experience, temporarily store it, and gradually transfer this information to the cortex during sleep. In rodents, ample evidence has supported this notion in the context of spatial memory, yet whether this process adequately describes the consolidation of multi-sensory experiences into memories is unclear. Here, focusing on rodent studies, I examine how multi-sensory experiences are consolidated into long-term memories by hippocampal and cortical circuits during sleep. I propose that in contrast to the classical model of memory consolidation, the cortex is a "fast learner" that has a rapid and instructive role in shaping hippocampal-dependent memory consolidation. The proposed model may offer mechanistic insight into memory biasing using sensory cues during sleep. Copyright © 2018 Elsevier Inc. All rights reserved.
Appraisal of unimodal cues during agonistic interactions in Maylandia zebra
Ben Ammar, Imen; Fernandez, Marie S.A.; Boyer, Nicolas; Attia, Joël; Fonseca, Paulo J.; Amorim, M. Clara P.; Beauchaud, Marilyn
2017-01-01
Communication is essential during social interactions including animal conflicts and it is often a complex process involving multiple sensory channels or modalities. To better understand how different modalities interact during communication, it is fundamental to study the behavioural responses to both the composite multimodal signal and each unimodal component with adequate experimental protocols. Here we test how an African cichlid, which communicates with multiple senses, responds to different sensory stimuli in a socially relevant scenario. We tested Maylandia zebra males with isolated chemical (urine or holding water, both coming from dominant males), visual (real opponent or video playback) and acoustic (agonistic sounds) cues during agonistic interactions. We showed that (1) these fish relied mostly on the visual modality, showing increased aggressiveness in response to the sight of a real contestant but no responses to urine or agonistic sounds presented separately, (2) video playback in our study did not appear appropriate for testing the visual modality and needs further technical development, (3) holding water provoked territorial behaviours and seems promising for investigating the role of the chemical channel in this species. Our findings suggest that unimodal signals are non-redundant, but how different sensory modalities interplay during communication remains largely unknown in fish. PMID:28785523
ERIC Educational Resources Information Center
Vermeulen, Nicolas; Mermillod, Martial; Godefroid, Jimmy; Corneille, Olivier
2009-01-01
This study shows that sensory priming facilitates reports of same-modality concepts in an attentional blink paradigm. Participants had to detect and report two target words (T1 and T2) presented for 53 ms each among a series of nonword distractors at a frequency of up to 19 items per second. The SOA between target words was set to 53 ms or 213 ms,…
Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study
Ursino, Mauro; Crisafulli, Andrea; di Pellegrino, Giuseppe; Magosso, Elisa; Cuppini, Cristiano
2017-01-01
The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two unimodal areas (auditory and visual), topologically organized. Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have a higher spatial accuracy at the central azimuthal coordinate and a lower accuracy at the periphery. Moreover, their prior probability is higher at the center, and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined.
In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross-modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in the case of spatially disparate stimuli. Moreover, the ventriloquism effect decreases with eccentricity. PMID:29046631
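The normative computation that such a network is thought to approximate is the standard precision-weighted (maximum-likelihood) fusion of Gaussian cues. The sketch below is only an illustration of that textbook rule, not the authors' model; the function name and the example reliabilities are assumptions:

```python
import math

def fuse(x_v, sigma_v, x_a, sigma_a):
    """Precision-weighted fusion of a visual and an auditory position
    estimate, each modeled as a Gaussian likelihood with the given SD.
    Returns the fused position estimate and its standard deviation."""
    w_v = 1.0 / sigma_v**2          # precision (reliability) of the visual cue
    w_a = 1.0 / sigma_a**2          # precision of the auditory cue
    x_hat = (w_v * x_v + w_a * x_a) / (w_v + w_a)
    sigma_hat = math.sqrt(1.0 / (w_v + w_a))
    return x_hat, sigma_hat

# Spatially disparate cues: vision is far more precise than audition,
# so the fused estimate is captured by the visual cue (ventriloquism).
x_hat, sigma_hat = fuse(x_v=0.0, sigma_v=1.0, x_a=10.0, sigma_a=10.0)
```

Two properties of this rule mirror the behavioral results reported above: the fused SD is always smaller than the smaller unimodal SD (the decrease seen with congruent audio-visual stimuli), and with disparate cues the estimate is pulled toward the more reliable modality.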
Sigalov, Nadine; Maidenbaum, Shachar; Amedi, Amir
2016-03-01
Cognitive neuroscience has long attempted to determine the ways in which cortical selectivity develops, and the impact of nature vs. nurture on it. Congenital blindness (CB) offers a unique opportunity to test this question as the brains of blind individuals develop without visual experience. Here we approach this question through the reading network. Several areas in the visual cortex have been implicated as part of the reading network, and one of the main ones among them is the VWFA, which is selective to the form of letters and words. But what happens in the CB brain? On the one hand, it has been shown that cross-modal plasticity leads to the recruitment of occipital areas, including the VWFA, for linguistic tasks. On the other hand, we have recently demonstrated VWFA activity for letters in contrast to other visual categories when the information is provided via other senses such as touch or audition. Which of these tasks is more dominant? By which mechanism does the CB brain process reading? Using fMRI and visual-to-auditory sensory substitution, which transfers the topographical features of the letters, we compared reading with semantic and scrambled conditions in a group of CB participants. We found activation in early auditory and visual cortices during the early processing phase (letter), while the later phase (word) showed VWFA and bilateral dorsal-intraparietal activations. This further supports the notion that many visual regions, even early visual areas, maintain their task-specific predilections even when the input modality varies, in spite of putative lifelong linguistic cross-modal plasticity. Furthermore, we find that the VWFA is recruited preferentially for letter and word form, while it was not recruited, and even exhibited deactivation, for an immediately subsequent semantic task, suggesting that, despite only brief sensory-substitution experience, orthographic processing can dominate semantic processing in the VWFA.
On a wider scope, this implies that at least in some cases cross-modal plasticity which enables the recruitment of areas for new tasks may be dominated by sensory independent task specific activation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bender, Stephan; Behringer, Stephanie; Freitag, Christine M; Resch, Franz; Weisbrod, Matthias
2010-12-01
To elucidate the contributions of modality-dependent post-processing in auditory, motor and visual cortical areas to short-term memory, we compared late negative waves (N700) during the post-processing of single lateralized stimuli which were separated by long intertrial intervals across the auditory, motor and visual modalities. Tasks either required or competed with attention to post-processing of preceding events, i.e. active short-term memory maintenance. The N700 indicated that cortical post-processing outlasted short movements, as well as short auditory or visual stimuli, by over half a second even without intentional short-term memory maintenance. Modality-specific topographies pointed towards sensory (respectively motor) generators with comparable time-courses across the different modalities. Lateralization and amplitude of the auditory/motor/visual N700 were enhanced by active short-term memory maintenance compared to attention to current perceptions or passive stimulation. The memory-related N700 increase followed the characteristic time-course and modality-specific topography of the N700 observed without intentional memory maintenance. Memory-maintenance-related lateralized negative potentials may be related to a less lateralized modality-dependent post-processing N700 component which also occurs without intentional memory maintenance (automatic memory trace or effortless attraction of attention). Encoding to short-term memory may involve controlled attention to modality-dependent post-processing. Similar short-term memory processes may exist in the auditory, motor and visual systems. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Prefrontal contributions to visual selective attention.
Squire, Ryan F; Noudoost, Behrad; Schafer, Robert J; Moore, Tirin
2013-07-08
The faculty of attention endows us with the capacity to process important sensory information selectively while disregarding information that is potentially distracting. Much of our understanding of the neural circuitry underlying this fundamental cognitive function comes from neurophysiological studies within the visual modality. Past evidence suggests that a principal function of the prefrontal cortex (PFC) is selective attention and that this function involves the modulation of sensory signals within posterior cortices. In this review, we discuss recent progress in identifying the specific prefrontal circuits controlling visual attention and its neural correlates within the primate visual system. In addition, we examine the persisting challenge of precisely defining how behavior should be affected when attentional function is lost.
Sreenivasan, Varun; Kyriakatos, Alexandros; Mateo, Celine; Jaeger, Dieter; Petersen, Carl C.H.
2016-01-01
The spatial organization of mouse frontal cortex is poorly understood. Here, we used voltage-sensitive dye to image electrical activity in the dorsal cortex of awake head-restrained mice. Whisker deflection evoked the earliest sensory response in a localized region of primary somatosensory cortex and visual stimulation evoked the earliest responses in a localized region of primary visual cortex. Over the next few milliseconds, the initial sensory response spread within the respective primary sensory cortex and into the surrounding higher-order sensory cortices. In addition, secondary hotspots in the frontal cortex were evoked by whisker and visual stimulation, with the frontal hotspot for whisker deflection being more anterior and lateral compared to the frontal hotspot evoked by visual stimulation. Investigating axonal projections, we found that the somatosensory whisker cortex and the visual cortex directly innervated frontal cortex, with visual cortex axons innervating a region medial and posterior to the innervation from somatosensory cortex, consistent with the location of sensory responses in frontal cortex. In turn, the axonal outputs of these two frontal cortical areas innervate distinct regions of striatum, superior colliculus, and brainstem. Sensory input, therefore, appears to map onto modality-specific regions of frontal cortex, perhaps participating in distinct sensorimotor transformations, and directing distinct motor outputs. PMID:27921067
Donohue, Sarah E; Todisco, Alexandra E; Woldorff, Marty G
2013-04-01
Neuroimaging work on multisensory conflict suggests that the relevant modality receives enhanced processing in the face of incongruency. However, the degree of stimulus processing in the irrelevant modality and the temporal cascade of the attentional modulations in either the relevant or irrelevant modalities are unknown. Here, we employed an audiovisual conflict paradigm with a sensory probe in the task-irrelevant modality (vision) to gauge the attentional allocation to that modality. ERPs were recorded as participants attended to and discriminated spoken auditory letters while ignoring simultaneous bilateral visual letter stimuli that were either fully congruent, fully incongruent, or partially incongruent (one side incongruent, one congruent) with the auditory stimulation. Half of the audiovisual letter stimuli were followed 500-700 msec later by a bilateral visual probe stimulus. As expected, ERPs to the audiovisual stimuli showed an incongruency ERP effect (fully incongruent versus fully congruent) of an enhanced, centrally distributed, negative-polarity wave starting ∼250 msec. More critically here, the sensory ERP components to the visual probes were larger when they followed fully incongruent versus fully congruent multisensory stimuli, with these enhancements greatest on fully incongruent trials with the slowest RTs. In addition, on the slowest-response partially incongruent trials, the P2 sensory component to the visual probes was larger contralateral to the preceding incongruent visual stimulus. These data suggest that, in response to conflicting multisensory stimulus input, the initial cognitive effect is a capture of attention by the incongruent irrelevant-modality input, pulling neural processing resources toward that modality, resulting in rapid enhancement, rather than rapid suppression, of that input.
Compensatory Plasticity in the Deaf Brain: Effects on Perception of Music
Good, Arla; Reed, Maureen J.; Russo, Frank A.
2014-01-01
When one sense is unavailable, sensory responsibilities shift and processing of the remaining modalities becomes enhanced to compensate for missing information. This shift, referred to as compensatory plasticity, results in a unique sensory experience for individuals who are deaf, including the manner in which music is perceived. This paper evaluates the neural, behavioural and cognitive evidence for compensatory plasticity following auditory deprivation and considers how this manifests in a unique experience of music that emphasizes visual and vibrotactile modalities. PMID:25354235
Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.
Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale
2015-10-01
Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched normal-hearing (NH) listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) stimuli than to both of the modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related potentials (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. Copyright © 2015 Elsevier B.V. All rights reserved.
Cerebellar contributions to motor timing: a PET study of auditory and visual rhythm reproduction.
Penhune, V B; Zatorre, R J; Evans, A C
1998-11-01
The perception and production of temporal patterns, or rhythms, is important for both music and speech. However, the way in which the human brain achieves accurate timing of perceptual input and motor output is as yet little understood. Central control of both motor timing and perceptual timing across modalities has been linked to both the cerebellum and the basal ganglia (BG). The present study was designed to test the hypothesized central control of temporal processing and to examine the roles of the cerebellum, BG, and sensory association areas. In this positron emission tomography (PET) activation paradigm, subjects reproduced rhythms of increasing temporal complexity that were presented separately in the auditory and visual modalities. The results provide support for a supramodal contribution of the lateral cerebellar cortex and cerebellar vermis to the production of a timed motor response, particularly when it is complex and/or novel. The results also give partial support to the involvement of BG structures in motor timing, although this may be more directly related to implementation of the motor response than to timing per se. Finally, sensory association areas and the ventrolateral frontal cortex were found to be involved in modality-specific encoding and retrieval of the temporal stimuli. Taken together, these results point to the participation of a number of neural structures in the production of a timed motor response from an external stimulus. The role of the cerebellum in timing is conceptualized not as a clock or counter but simply as the structure that provides the necessary circuitry for the sensory system to extract temporal information and for the motor system to learn to produce a precisely timed response.
Halfwerk, Wouter; Slabbekoorn, Hans
2015-01-01
Anthropogenic sensory pollution is affecting ecosystems worldwide. Human actions generate acoustic noise, emanate artificial light and emit chemical substances. All of these pollutants are known to affect animals. Most studies on anthropogenic pollution address the impact of pollutants in unimodal sensory domains. High levels of anthropogenic noise, for example, have been shown to interfere with acoustic signals and cues. However, animals rely on multiple senses, and pollutants often co-occur. Thus, a full ecological assessment of the impact of anthropogenic activities requires a multimodal approach. We describe how sensory pollutants can co-occur and how covariance among pollutants may differ from natural situations. We review how animals combine information that arrives at their sensory systems through different modalities and outline how sensory conditions can interfere with multimodal perception. Finally, we describe how sensory pollutants can affect the perception, behaviour and endocrinology of animals within and across sensory modalities. We conclude that sensory pollution can affect animals in complex ways due to interactions among sensory stimuli, neural processing and behavioural and endocrinal feedback. We call for more empirical data on covariance among sensory conditions, for instance, data on correlated levels of noise and light pollution. Furthermore, we encourage researchers to test animal responses to a full-factorial set of sensory pollutants in the presence or the absence of ecologically important signals and cues. We realize that such an approach is often time- and energy-consuming, but we think this is the only way to fully understand the multimodal impact of sensory pollution on animal performance and perception. PMID:25904319
Kang, Chang-ku; Moon, Jong-yeol; Lee, Sang-im; Jablonski, Piotr G.
2013-01-01
Many moths have wing patterns that resemble bark of trees on which they rest. The wing patterns help moths to become camouflaged and to avoid predation because the moths are able to assume specific body orientations that produce a very good match between the pattern on the bark and the pattern on the wings. Furthermore, after landing on a bark moths are able to perceive stimuli that correlate with their crypticity and are able to re-position their bodies to new more cryptic locations and body orientations. However, the proximate mechanisms, i.e. how a moth finds an appropriate resting position and orientation, are poorly studied. Here, we used a geometrid moth Jankowskia fuscaria to examine i) whether a choice of resting orientation by moths depends on the properties of natural background, and ii) what sensory cues moths use. We studied moths’ behavior on natural (a tree log) and artificial backgrounds, each of which was designed to mimic one of the hypothetical cues that moths may perceive on a tree trunk (visual pattern, directional furrow structure, and curvature). We found that moths mainly used structural cues from the background when choosing their resting position and orientation. Our findings highlight the possibility that moths use information from one type of sensory modality (structure of furrows is probably detected through tactile channel) to achieve crypticity in another sensory modality (visual). This study extends our knowledge of how behavior, sensory systems and morphology of animals interact to produce crypsis. PMID:24205118
Bermejo, Fernando; Di Paolo, Ezequiel A.; Hüg, Mercedes X.; Arias, Claudia
2015-01-01
The sensorimotor approach proposes that perception is constituted by the mastery of lawful sensorimotor regularities or sensorimotor contingencies (SMCs), which depend on specific bodily characteristics and on the action possibilities that the environment enables and constrains. Sensory substitution devices (SSDs) provide the user information about the world typically corresponding to one sensory modality through the stimulation of another modality. We investigate how perception emerges in novice adult participants equipped with vision-to-auditory SSDs while solving a simple geometrical shape recognition task. In particular, we examine the distinction between apparatus-related SMCs (those originating mostly in properties of the perceptual system) and object-related SMCs (those mostly connected with the perceptual task). We study the sensorimotor strategies employed by participants in three experiments with three different SSDs: a minimalist head-mounted SSD, a traditional, also head-mounted SSD (the vOICe) and an enhanced, hand-held echolocation device. Motor activity and first-person data are registered and analyzed. Results show that participants are able to quickly learn the necessary skills to distinguish geometric shapes. Comparing the sensorimotor strategies utilized with each SSD, we identify differential features of the sensorimotor patterns attributable mostly to the device, which account for the emergence of apparatus-based SMCs. These relate to differences in sweeping strategies between SSDs. We identify, also, components related to the emergence of object-related SMCs. These relate mostly to exploratory movements around the border of a shape. The study provides empirical support for SMC theory and discusses considerations about the nature of perception in sensory substitution. PMID:26106340
Sharma, Anu; Campbell, Julia; Cardon, Garrett
2015-02-01
Cortical development is dependent on extrinsic stimulation. As such, sensory deprivation, as in congenital deafness, can dramatically alter functional connectivity and growth in the auditory system. Cochlear implants ameliorate deprivation-induced delays in maturation by directly stimulating the central nervous system, and thereby restoring auditory input. The scenario in which hearing is lost due to deafness and then reestablished via a cochlear implant provides a window into the development of the central auditory system. Converging evidence from electrophysiologic and brain imaging studies of deaf animals and children fitted with cochlear implants has allowed us to elucidate the details of the time course for auditory cortical maturation under conditions of deprivation. Here, we review how the P1 cortical auditory evoked potential (CAEP) provides useful insight into sensitive period cut-offs for development of the primary auditory cortex in deaf children fitted with cochlear implants. Additionally, we present new data on similar sensitive period dynamics in higher-order auditory cortices, as measured by the N1 CAEP in cochlear implant recipients. Furthermore, cortical re-organization, secondary to sensory deprivation, may take the form of compensatory cross-modal plasticity. We provide new case-study evidence that cross-modal re-organization, in which intact sensory modalities (i.e., vision and somatosensation) recruit cortical regions associated with deficient sensory modalities (i.e., audition), may influence behavioral outcomes with the implant in cochlear implanted children. Improvements in our understanding of developmental neuroplasticity in the auditory system should allow central auditory plasticity to be harnessed for improved clinical techniques. Copyright © 2014 Elsevier B.V. All rights reserved.
How previous experience shapes perception in different sensory modalities
Snyder, Joel S.; Schwiedrzik, Caspar M.; Vitela, A. Davi; Melloni, Lucia
2015-01-01
What has transpired immediately before has a strong influence on how sensory stimuli are processed and perceived. In particular, temporal context effects (TCEs) can be contrastive, repelling perception away from the interpretation of the context stimulus, or attractive, whereby perception repeats upon successive presentations of the same stimulus. For decades, scientists have documented contrastive and attractive TCEs, mostly with simple visual stimuli. But both types of effects also occur in other modalities, e.g., audition and touch, and for stimuli of varying complexity, raising the possibility that context effects reflect general computational principles of sensory systems. Neuroimaging shows that contrastive and attractive context effects arise from neural processes in different areas of the cerebral cortex, suggesting two separate operations with distinct functional roles. Bayesian models can provide a functional account of both context effects, whereby prior experience adjusts sensory systems to optimize perception of future stimuli. PMID:26582982
Sekiguchi, Yusuke; Honda, Keita; Ishiguro, Akio
2016-01-01
Sensory impairments caused by neurological or physical disorders hamper kinesthesia, making rehabilitation difficult. In order to overcome this problem, we proposed and developed a novel biofeedback prosthesis called Auditory Foot for transforming sensory modalities, in which the sensor prosthesis transforms plantar sensations to auditory feedback signals. This study investigated the short-term effect of the auditory feedback prosthesis on walking in stroke patients with hemiparesis. To evaluate the effect, we compared four conditions of auditory feedback from plantar sensors at the heel and fifth metatarsal. We found significant differences in the maximum hip extension angle and ankle plantar flexor moment on the affected side during the stance phase, between conditions with and without auditory feedback signals. These results indicate that our sensory prosthesis could enhance walking performance in stroke patients with hemiparesis, resulting in effective short-term rehabilitation. PMID:27547456
A New Conceptualization of Human Visual Sensory-Memory
Öğmen, Haluk; Herzog, Michael H.
2016-01-01
Memory is an essential component of cognition and disorders of memory have significant individual and societal costs. The Atkinson–Shiffrin "modal model" forms the foundation of our understanding of human memory. It consists of three stores: Sensory Memory (SM), whose visual component is called iconic memory, Short-Term Memory (STM; also called working memory, WM), and Long-Term Memory (LTM). Since its inception, shortcomings of all three components of the modal model have been identified. While the theories of STM and LTM underwent significant modifications to address these shortcomings, models of the iconic memory remained largely unchanged: A high capacity but rapidly decaying store whose contents are encoded in retinotopic coordinates, i.e., according to how the stimulus is projected on the retina. The fundamental shortcoming of iconic memory models is that, because contents are encoded in retinotopic coordinates, the iconic memory cannot hold any useful information under normal viewing conditions when objects or the subject are in motion. Hence, half a century after its formulation, it remains an unresolved problem whether and how the first stage of the modal model serves any useful function and how subsequent stages of the modal model receive inputs from the environment. Here, we propose a new conceptualization of human visual sensory memory by introducing an additional component whose reference-frame consists of motion-grouping based coordinates rather than retinotopic coordinates. We review data supporting this new model and discuss how it offers solutions to the paradoxes of the traditional model of sensory memory. PMID:27375519
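The motivation for motion-grouping based coordinates can be made concrete with a toy computation: if each feature is re-encoded relative to the centroid of its motion group, its coordinates stay constant while the group translates across the retina, so a store indexed this way can accumulate information despite motion. This sketch is purely illustrative and is not taken from the article; the feature positions are invented:

```python
# Retinotopic positions (x, y) of three features of one moving object,
# sampled over three frames; the object translates across the retina.
frames = [
    [(0.0, 0.0), (3.0, 0.0), (0.0, 3.0)],   # frame 0
    [(2.0, 0.5), (5.0, 0.5), (2.0, 3.5)],   # frame 1: shifted by (+2, +0.5)
    [(4.0, 1.0), (7.0, 1.0), (4.0, 4.0)],   # frame 2: shifted by (+4, +1)
]

def group_relative(frame):
    """Re-express each feature relative to the motion group's centroid."""
    cx = sum(x for x, _ in frame) / len(frame)
    cy = sum(y for _, y in frame) / len(frame)
    return [(x - cx, y - cy) for x, y in frame]

# In retinotopic coordinates the features drift from frame to frame, but
# in group-relative coordinates they are identical across frames.
rel = [group_relative(f) for f in frames]
assert rel[0] == rel[1] == rel[2]
```

A retinotopic store would superimpose the three drifting frames into an unusable smear, whereas the group-relative encoding keeps the shape stable, which is the intuition behind the proposed additional component.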
ERIC Educational Resources Information Center
Chuang, Tsung-Yen; Kuo, Ming-Shiou
2016-01-01
Children with Sensory Integration Dysfunction (SID, also known as Sensory Processing Disorder, SPD) are also learners with disabilities when it comes to responding adequately to the demands made by a learning environment. With problems organizing and processing the sensory information coming from body modalities, children with SID (CwSID)…
Convergent and invariant object representations for sight, sound, and touch.
Man, Kingson; Damasio, Antonio; Meyer, Kaspar; Kaplan, Jonas T
2015-09-01
We continuously perceive objects in the world through multiple sensory channels. In this study, we investigated the convergence of information from different sensory streams within the cerebral cortex. We presented volunteers with three common objects via three different modalities-sight, sound, and touch-and used multivariate pattern analysis of functional magnetic resonance imaging data to map the cortical regions containing information about the identity of the objects. We could reliably predict which of the three stimuli a subject had seen, heard, or touched from the pattern of neural activity in the corresponding early sensory cortices. Intramodal classification was also successful in large portions of the cerebral cortex beyond the primary areas, with multiple regions showing convergence of information from two or all three modalities. Using crossmodal classification, we also searched for brain regions that would represent objects in a similar fashion across different modalities of presentation. We trained a classifier to distinguish objects presented in one modality and then tested it on the same objects presented in a different modality. We detected audiovisual invariance in the right temporo-occipital junction, audiotactile invariance in the left postcentral gyrus and parietal operculum, and visuotactile invariance in the right postcentral and supramarginal gyri. Our maps of multisensory convergence and crossmodal generalization reveal the underlying organization of the association cortices, and may be related to the neural basis for mental concepts. © 2015 Wiley Periodicals, Inc.
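The crossmodal classification logic described above (train a classifier on objects presented in one modality, test it on the same objects presented in another) can be sketched on toy data. The simulated "voxel patterns", object counts, and noise level below are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy setup: 3 objects, 20 trials per object, 50 "voxels" per pattern.
# Visual and tactile patterns share an object-specific component, mimicking
# a region that carries a modality-invariant object code (purely illustrative).
n_obj, n_trials, n_vox = 3, 20, 50
obj_code = rng.normal(size=(n_obj, n_vox))  # shared object code

def simulate_modality(noise=1.0):
    X = np.vstack([obj_code[o] + noise * rng.normal(size=(n_trials, n_vox))
                   for o in range(n_obj)])
    y = np.repeat(np.arange(n_obj), n_trials)
    return X, y

X_vis, y_vis = simulate_modality()  # "seen" trials
X_tac, y_tac = simulate_modality()  # "touched" trials

# Crossmodal classification: train on visual trials, test on tactile trials.
clf = LogisticRegression(max_iter=1000).fit(X_vis, y_vis)
crossmodal_acc = clf.score(X_tac, y_tac)
# Accuracy above chance (1/3 here) indicates a representation that
# generalizes across modalities, i.e., a modality-invariant object code.
```

If the two modalities shared no object-specific structure, crossmodal accuracy would fall to chance even when intramodal classification succeeds, which is exactly the dissociation the searchlight analysis exploits.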
Developmentally defined forebrain circuits regulate appetitive and aversive olfactory learning.
Muthusamy, Nagendran; Zhang, Xuying; Johnson, Caroline A; Yadav, Prem N; Ghashghaei, H Troy
2017-01-01
Postnatal and adult neurogenesis are region- and modality-specific, but the significance of developmentally distinct neuronal populations remains unclear. We demonstrate that chemogenetic inactivation of a subset of forebrain and olfactory neurons generated at birth disrupts responses to an aversive odor. In contrast, novel appetitive odor learning is sensitive to inactivation of adult-born neurons, revealing that developmentally defined sets of neurons may differentially participate in hedonic aspects of sensory learning.
Weiss, Peter H; Zilles, Karl; Fink, Gereon R
2005-12-01
In synesthesia, stimulation of one sensory modality (e.g., hearing) triggers a percept in another, non-stimulated sensory modality (e.g., vision). Likewise, perception of a form (e.g., a letter) may induce a color percept (i.e., grapheme-color synesthesia). To date, the neural mechanisms underlying synesthesia remain to be elucidated. Using fMRI, and while controlling for surface color processing, we found enhanced activity in the left intraparietal cortex during the experience of grapheme-color synesthesia (n = 9). In contrast, the perception of surface color per se activated the color centers in the fusiform gyrus bilaterally. The data support theoretical accounts proposing that grapheme-color synesthesia originates from enhanced cross-modal binding of form and color. A mismatch between surface color and the color synesthetically induced by a grapheme additionally activated the left dorsolateral prefrontal cortex (DLPFC). This suggests that cognitive control processes become active to resolve the perceptual conflict resulting from synesthesia.
Predictive Coding or Evidence Accumulation? False Inference and Neuronal Fluctuations
Friston, Karl J.; Kleinschmidt, Andreas
2010-01-01
Perceptual decisions can be made when sensory input affords an inference about what generated that input. Here, we report findings from two independent perceptual experiments conducted during functional magnetic resonance imaging (fMRI) with a sparse event-related design. The first experiment, in the visual modality, involved forced-choice discrimination of coherence in random dot kinematograms that contained either subliminal or periliminal motion coherence. The second experiment, in the auditory domain, involved free response detection of (non-semantic) near-threshold acoustic stimuli. We analysed fluctuations in ongoing neural activity, as indexed by fMRI, and found that neuronal activity in sensory areas (extrastriate visual and early auditory cortex) biases perceptual decisions towards correct inference and not towards a specific percept. Hits (detection of near-threshold stimuli) were preceded by significantly higher activity than both misses of identical stimuli or false alarms, in which percepts arise in the absence of appropriate sensory input. In accord with predictive coding models and the free-energy principle, this observation suggests that cortical activity in sensory brain areas reflects the precision of prediction errors and not just the sensory evidence or prediction errors per se. PMID:20369004
The role of primary auditory and visual cortices in temporal processing: A tDCS approach.
Mioni, G; Grondin, S; Forgione, M; Fracasso, V; Mapelli, D; Stablum, F
2016-10-15
Many studies have shown that visual stimuli are frequently experienced as shorter than equivalent auditory stimuli. These findings suggest that timing is distributed across many brain areas and that "different clocks" might be involved in temporal processing. The aim of this study was to investigate, by applying tDCS over V1 and A1, the specific role of the primary sensory cortices (visual and auditory) in temporal processing. Forty-eight university students were included in the study. Twenty-four participants were stimulated over A1 and 24 over V1. Participants performed time bisection tasks in the visual and auditory modalities, involving standard durations lasting 300 ms (short) and 900 ms (long). When tDCS was delivered over A1, no effect of stimulation was observed on perceived duration, but we observed higher temporal variability under anodal stimulation compared to sham and higher variability in the visual compared to the auditory modality. When tDCS was delivered over V1, an under-estimation of perceived duration and higher variability were observed in the visual compared to the auditory modality. Our results showed greater variability of visual temporal processing under tDCS stimulation. These results suggest a modality-independent role of A1 in temporal processing and a modality-specific role of V1 in the processing of temporal intervals in the visual modality. Copyright © 2016 Elsevier B.V. All rights reserved.
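For a bisection task with 300 ms and 900 ms standards, the candidate bisection points are easy to compute. A common finding in the timing literature (a general observation, not a result reported in this abstract) is that the bisection point falls near the geometric rather than the arithmetic mean of the standards:

```python
import math

SHORT_MS = 300.0   # short standard duration
LONG_MS = 900.0    # long standard duration

arithmetic_bp = (SHORT_MS + LONG_MS) / 2.0    # 600 ms
geometric_bp = math.sqrt(SHORT_MS * LONG_MS)  # ~519.6 ms

# A probe near the geometric mean (~520 ms) would be classified "long"
# about half the time if bisection follows the geometric mean.
```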
Hmx1 is required for the normal development of somatosensory neurons in the geniculate ganglion
Quina, Lely A.; Tempest, Lynne; Hsu, Yun-Wei A.; Cox, Timothy C.; Turner, Eric E.
2012-01-01
Hmx1 is a variant homeodomain transcription factor expressed in the developing sensory nervous system, retina, and craniofacial mesenchyme. Recently, mutations at the Hmx1 locus have been linked to craniofacial defects in humans, rats, and mice, but its role in nervous system development is largely unknown. Here we show that Hmx1 is expressed in a subset of sensory neurons in the cranial and dorsal root ganglia which does not correspond to any specific sensory modality. Sensory neurons in the dorsal root and trigeminal ganglia of Hmx1dm/dm mouse embryos have no detectable Hmx1 protein, yet they undergo neurogenesis and express sensory subtype markers normally, demonstrating that Hmx1 is not globally required for the specification of sensory neurons from neural crest precursors. Loss of Hmx1 expression has no obvious effect on the early development of the trigeminal (V), superior (IX/X), or dorsal root ganglia neurons in which it is expressed, but results in marked defects in the geniculate (VII) ganglion. Hmx1dm/dm mouse embryos possess only a vestigial posterior auricular nerve, and general somatosensory neurons in the geniculate ganglion are greatly reduced by mid-gestation. Although Hmx1 is expressed in geniculate neurons prior to cell cycle exit, it does not appear to be required for neurogenesis, and the loss of geniculate neurons is likely to be the result of increased cell death. Fate mapping of neural crest-derived tissues indicates that Hmx1-expressing somatosensory neurons at different axial levels may be derived from either the neural crest or the neurogenic placodes. PMID:22586713
Halfwerk, Wouter; Slabbekoorn, Hans
2015-04-01
Anthropogenic sensory pollution is affecting ecosystems worldwide. Human actions generate acoustic noise, emanate artificial light and emit chemical substances. All of these pollutants are known to affect animals. Most studies on anthropogenic pollution address the impact of pollutants in unimodal sensory domains. High levels of anthropogenic noise, for example, have been shown to interfere with acoustic signals and cues. However, animals rely on multiple senses, and pollutants often co-occur. Thus, a full ecological assessment of the impact of anthropogenic activities requires a multimodal approach. We describe how sensory pollutants can co-occur and how covariance among pollutants may differ from natural situations. We review how animals combine information that arrives at their sensory systems through different modalities and outline how sensory conditions can interfere with multimodal perception. Finally, we describe how sensory pollutants can affect the perception, behaviour and endocrinology of animals within and across sensory modalities. We conclude that sensory pollution can affect animals in complex ways due to interactions among sensory stimuli, neural processing and behavioural and endocrinal feedback. We call for more empirical data on covariance among sensory conditions, for instance, data on correlated levels in noise and light pollution. Furthermore, we encourage researchers to test animal responses to a full-factorial set of sensory pollutants in the presence or absence of ecologically important signals and cues. We realize that such an approach is often time- and energy-consuming, but we think this is the only way to fully understand the multimodal impact of sensory pollution on animal performance and perception. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Is it me? Self-recognition bias across sensory modalities and its relationship to autistic traits.
Chakraborty, Anya; Chakrabarti, Bhismadev
2015-01-01
Atypical self-processing is an emerging theme in autism research, suggested by lower self-reference effect in memory, and atypical neural responses to visual self-representations. Most research on physical self-processing in autism uses visual stimuli. However, the self is a multimodal construct, and therefore, it is essential to test self-recognition in other sensory modalities as well. Self-recognition in the auditory modality remains relatively unexplored and has not been tested in relation to autism and related traits. This study investigates self-recognition in auditory and visual domain in the general population and tests if it is associated with autistic traits. Thirty-nine neurotypical adults participated in a two-part study. In the first session, individual participant's voice was recorded and face was photographed and morphed respectively with voices and faces from unfamiliar identities. In the second session, participants performed a 'self-identification' task, classifying each morph as 'self' voice (or face) or an 'other' voice (or face). All participants also completed the Autism Spectrum Quotient (AQ). For each sensory modality, slope of the self-recognition curve was used as individual self-recognition metric. These two self-recognition metrics were tested for association between each other, and with autistic traits. Fifty percent 'self' response was reached for a higher percentage of self in the auditory domain compared to the visual domain (t = 3.142; P < 0.01). No significant correlation was noted between self-recognition bias across sensory modalities (τ = -0.165, P = 0.204). Higher recognition bias for self-voice was observed in individuals higher in autistic traits (τ AQ = 0.301, P = 0.008). No such correlation was observed between recognition bias for self-face and autistic traits (τ AQ = -0.020, P = 0.438). Our data shows that recognition bias for physical self-representation is not related across sensory modalities. 
Further, individuals with higher autistic traits were better able to discriminate self from other voices, but this relation was not observed with self-face. A narrow self-other overlap in the auditory domain seen in individuals with high autistic traits could arise due to enhanced perceptual processing of auditory stimuli often observed in individuals with autism.
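The slope-based self-recognition metric described above can be sketched by fitting a psychometric (logistic) curve to the proportion of 'self' responses across morph levels. The response numbers below are hypothetical, and the logit-linearization is one simple fitting choice, not necessarily the authors' procedure:

```python
import numpy as np

# Hypothetical response curve: proportion of 'self' responses at each
# percent-self morph level (illustrative numbers, not the study's data).
levels = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
p_self = np.array([0.02, 0.10, 0.35, 0.75, 0.95, 0.99])

# Linearize the logistic via the logit transform, then fit a line:
# logit(p) = slope * level + intercept.
logits = np.log(p_self / (1.0 - p_self))
slope, intercept = np.polyfit(levels, logits, 1)

# Per-modality self-recognition metrics:
# slope indexes discrimination sharpness (steeper = better self/other
# discrimination); the 50% point is the recognition bias.
bias = -intercept / slope   # morph level where P('self') = 0.5
```

Computing one (slope, bias) pair per modality per participant yields the metrics that the study correlated across modalities and with AQ scores.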
Crossmodal Connections of Primary Sensory Cortices Largely Vanish During Normal Aging
Henschke, Julia U.; Ohl, Frank W.; Budinger, Eike
2018-01-01
During aging, human response times (RTs) to unisensory and crossmodal stimuli increase. However, the elderly benefit more from crossmodal stimulus representations than younger people. The underlying short-latency multisensory integration process is mediated by direct crossmodal connections at the level of primary sensory cortices. We investigate the age-related changes of these connections using a rodent model (Mongolian gerbil), retrograde tracer injections into the primary auditory (A1), somatosensory (S1), and visual cortex (V1), and immunohistochemistry for markers of apoptosis (Caspase-3), axonal plasticity (Growth associated protein 43, GAP 43), and a calcium-binding protein (Parvalbumin, PV). In adult animals, primary sensory cortices receive a substantial number of direct thalamic inputs from nuclei of their matched, but also from nuclei of non-matched sensory modalities. There are also direct intracortical connections among primary sensory cortices and connections with secondary sensory cortices of other modalities. In very old animals, the crossmodal connections strongly decrease in number or vanish entirely. This is likely due to a retraction of the projection neuron axonal branches rather than ongoing programmed cell death. The loss of crossmodal connections is also accompanied by changes in anatomical correlates of inhibition and excitation in the sensory thalamus and cortex. Together, the loss and restructuring of crossmodal connections during aging suggest a shift of multisensory processing from primary cortices towards other sensory brain areas in elderly individuals. PMID:29551970
Wang, Danying; Clouter, Andrew; Chen, Qiaoyu; Shapiro, Kimron L; Hanslmayr, Simon
2018-06-13
Episodic memories are rich in sensory information and often contain integrated information from different sensory modalities. For instance, we can store memories of a recent concert with visual and auditory impressions being integrated in one episode. Theta oscillations have recently been implicated as playing a causal role in synchronizing and effectively binding the different modalities together in memory. However, an open question is whether momentary fluctuations in theta synchronization predict the likelihood of associative memory formation for multisensory events. To address this question, we entrained the visual and auditory cortices at theta frequency (4 Hz) in a synchronous or asynchronous manner, by modulating the luminance and volume of movies and sounds at 4 Hz with a phase offset of 0° or 180°. EEG activity from human subjects (both sexes) was recorded while they memorized the association between a movie and a sound. Associative memory performance was significantly enhanced in the 0° compared to the 180° condition. Source-level analysis demonstrated that the physical stimuli effectively entrained their respective cortical areas with a corresponding phase offset. These findings constitute a successful replication of a previous study (Clouter et al., 2017). Importantly, the strength of entrainment during encoding correlated with the efficacy of associative memory, such that small phase differences between visual and auditory cortex predicted a high likelihood of correct retrieval in a later recall test. These findings suggest that theta oscillations serve a specific function in the episodic memory system: binding the contents of different modalities into coherent memory episodes. SIGNIFICANCE STATEMENT How multisensory experiences are bound to form a coherent episodic memory representation is one of the fundamental questions in human episodic memory research.
Evidence from animal literature suggests that the relative timing between an input and theta oscillations in the hippocampus is crucial for memory formation. We precisely controlled the timing between visual and auditory stimuli and the neural oscillations at 4 Hz using a multisensory entrainment paradigm. Human associative memory formation depends on coincident timing between sensory streams processed by the corresponding brain regions. We provide evidence for a significant role of relative timing of neural theta activity in human episodic memory on a single trial level, which reveals a crucial mechanism underlying human episodic memory. Copyright © 2018 the authors.
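The stimulus construction described above can be sketched as two 4 Hz modulation envelopes with a controlled phase offset. The sampling rate and duration below are assumptions; the 4 Hz frequency and the 0°/180° offsets come from the abstract:

```python
import numpy as np

FS = 1000        # sampling rate in Hz (assumed for illustration)
F_THETA = 4.0    # entrainment frequency from the study
t = np.arange(0, 2.0, 1.0 / FS)   # 2 s of signal (assumed duration)

def envelope(phase_offset_deg):
    """4 Hz sinusoidal modulation envelope in [0, 1], as might drive
    movie luminance or sound volume (illustrative, not the authors' code)."""
    phi = np.deg2rad(phase_offset_deg)
    return 0.5 * (1.0 + np.sin(2.0 * np.pi * F_THETA * t + phi))

visual = envelope(0)        # luminance modulation
audio_sync = envelope(0)    # volume modulation, 0° offset (synchronous)
audio_async = envelope(180) # volume modulation, 180° offset (asynchronous)

# In the synchronous condition the envelopes coincide; at 180° they are
# in antiphase, so their sum is constant (peaks of one fill the troughs
# of the other).
assert np.allclose(visual, audio_sync)
assert np.allclose(visual + audio_async, 1.0)
```

With this construction, the visual and auditory cortical responses inherit the stimulus phase offset, which is what lets the relative timing of the two sensory streams be manipulated on every trial.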
D'Alonzo, Marco; Dosen, Strahinja; Cipriani, Christian; Farina, Dario
2014-03-01
Electro- or vibro-tactile stimulation has been used in the past to provide sensory information in many different applications, ranging from human manual control to prosthetics. The two modalities were used separately in the past, and we hypothesized that a hybrid vibro-electrotactile (HyVE) stimulation could provide two afferent streams that are independently perceived by a subject, although delivered in parallel and through the same skin location. We conducted psychophysical experiments in which healthy subjects were asked to recognize the intensities of electro- and vibro-tactile stimuli during hybrid and single-modality stimulations. The results demonstrated that the subjects were able to discriminate the features of the two modalities within the hybrid stimulus, and that the cross-modality interaction was limited enough to allow better transmission of discrete information (messages) using hybrid versus single-modality coding. The percentages of successful recognitions (mean ± standard deviation) for nine messages were 56 ± 11% and 72 ± 8% for two hybrid coding schemes, compared to 29 ± 7% for vibrotactile and 44 ± 4% for electrotactile coding. The HyVE could therefore be an attractive solution for providing sensory feedback in numerous prosthetics and rehabilitation applications, and it could be used to increase the resolution of a single variable or to simultaneously feed back two different variables.
The stuff that dreams aren't made of: why wake-state and dream-state sensory experiences differ.
Symons, D
1993-06-01
It is adaptive for individuals to be continuously alert and responsive to external stimuli (such as the sound and odor of an approaching predator or the cry of an infant), even during sleep. Natural selection thus has disfavored the occurrence during sleep of hallucinations that compromise external vigilance. In the great majority of mammalian species, including Homo sapiens, closed eyes and immobility are basic aspects of sleep. Therefore, (a) visual and movement sensory modalities (except kinesthesis) do not provide the sleeper with accurate information about the external environment or the sleeper's relationship to that environment; (b) the sleeper's forebrain "vigilance mechanism" does not monitor these modalities; hence (c) visual and movement hallucinations--similar or identical to percepts--can occur during sleep without compromising vigilance. In contrast, the other sensory modalities do provide the sleeper with a continuous flow of information about the external environment or the sleeper's relationship to that environment, and these modalities are monitored by the vigilance mechanism. Hallucinations of kinesthesis, pain, touch, warmth, cold, odor, and sound thus would compromise vigilance, and their occurrence during sleep has been disfavored by natural selection. This vigilance hypothesis generates novel predictions about dream phenomenology and REM-state neurophysiology and has implications for the general study of imagery.
Tal, Zohar; Geva, Ran; Amedi, Amir
2016-01-01
Recent evidence from blind participants suggests that visual areas are task-oriented and independent of input sensory modality rather than sensory-specific to vision. Specifically, visual areas are thought to retain their functional selectivity when using non-visual inputs (touch or sound) even without any visual experience. However, this theory is still controversial, since it is not clear whether it also characterizes the sighted brain, and whether the reported results in the sighted reflect fundamental amodal processes or are largely an epiphenomenon. In the current study, we addressed these questions using a series of fMRI experiments aimed at exploring visual cortex responses to passive touch on various body parts and the coupling between the parietal and visual cortices as manifested by functional connectivity. We show that passive touch robustly activated the object-selective parts of the lateral occipital (LO) cortex while deactivating almost all other occipital retinotopic areas. Furthermore, passive touch responses in the visual cortex were specific to hand and upper-trunk stimulations. Psychophysiological interaction (PPI) analysis suggests that LO is functionally connected to the hand area in the primary somatosensory homunculus (S1) during hand and shoulder stimulations, but not during stimulation of any of the other body parts. We suggest that LO is a fundamental hub that serves as a node between visual-object-selective areas and the S1 hand representation, probably due to the critical evolutionary role of touch in object recognition and manipulation. These results might also point to a more general principle suggesting that recruitment or deactivation of the visual cortex by other sensory input depends on the ecological relevance of the information conveyed by this input to the task/computations carried out by each area or network. 
This is likely to rely on the unique and differential pattern of connectivity for each visual area with the rest of the brain. PMID:26673114
Multisensory Integration and Behavioral Plasticity in Sharks from Different Ecological Niches
Gardiner, Jayne M.; Atema, Jelle; Hueter, Robert E.; Motta, Philip J.
2014-01-01
The underwater sensory world and the sensory systems of aquatic animals have become better understood in recent decades, but typically have been studied one sense at a time. A comprehensive analysis of multisensory interactions during complex behavioral tasks has remained a subject of discussion without experimental evidence. We set out to generate a general model of multisensory information extraction by aquatic animals. For our model we chose to analyze the hierarchical, integrative, and sometimes alternate use of various sensory systems during the feeding sequence in three species of sharks that differ in sensory anatomy and behavioral ecology. By blocking senses in different combinations, we show that when some of their normal sensory cues were unavailable, sharks were often still capable of successfully detecting, tracking and capturing prey by switching to alternate sensory modalities. While there were significant species differences, odor was generally the first signal detected, leading to upstream swimming and wake tracking. Closer to the prey, as more sensory cues became available, the preferred sensory modalities varied among species, with vision, hydrodynamic imaging, electroreception, and touch being important for orienting to, striking at, and capturing the prey. Experimental deprivation of senses showed how sharks exploit the many signals that comprise their sensory world, each sense coming into play as they provide more accurate information during the behavioral sequence of hunting. The results may be applicable to aquatic hunting in general and, with appropriate modification, to other types of animal behavior. PMID:24695492
Tsuruda, Jennifer M; Page, Robert E
2009-12-14
In honey bees, sensory responsiveness can be measured by touching sugar water to the antennae, eliciting extension of the proboscis. The proboscis extension response (PER) [6,13] is closely associated with complex behavioral traits involving foraging and learning [30-32,34-36,43-49]. Bees specializing in pollen foraging are more responsive to low concentrations of sucrose solution and, as a consequence, perform better in associative learning assays [4,43,46-48]. An important unanswered question is whether sensory-motor differences between pollen and nectar specialists are restricted to the gustatory modality or whether pollen foragers are in general more sensitive to sensory stimuli associated with foraging. We used an assay designed to test responsiveness to varying intensities of light [11] and tested responsiveness to varying concentrations of sucrose in wild-type pollen and non-pollen foragers and in bees artificially selected for differences in pollen-hoarding behavior [27]. Workers of the high pollen-hoarding strain are more likely to specialize on collecting pollen. In wild-type bees, pollen foragers were more responsive to sucrose and light than non-pollen foragers. In the selected strains, high pollen-hoarding pre-foragers were more responsive to sucrose and light than low pollen-hoarding pre-foragers. These PER and light assays demonstrate a positive relationship between the gustatory and visual sensory modalities with respect to foraging behavior and genotype. We propose that light responsiveness, in addition to sucrose responsiveness, is a component of a pollen-hoarding behavioral syndrome - a suite of traits that covary with hoarding behavior [51,52] - previously described for honey bees [14,37,41]. We suggest that the modulation of the sensory system may be partially constrained by the interdependent modulation of multiple sensory modalities associated with hoarding and foraging.
Choy, Eunice E Hang; Cheung, Him
2017-11-01
Temporal and spatial representations have been consistently shown to be inextricably intertwined. However, the exact nature of time-space mapping remains unknown. On the one hand, the conceptual metaphor theory postulates unilateral, asymmetric mapping of time onto space; that is, time is perceived in spatial terms, but the perception of space is relatively independent of time. On the other hand, a theory of magnitude assumes bilateral and symmetric interactions between temporal and spatial perceptions. In the present paper, we argue that the concepts of linguistic asymmetry, egocentric anchoring, and sensory modality provide potential explanations for why evidence favoring both asymmetry and symmetry has been obtained. We first examine the asymmetry model and suggest that language plays a critical role in it. Next, we discuss the symmetry model in relation to egocentric anchoring and sensory modality. We conclude that since these three factors may jointly account for some conflicting past results regarding the strength and directionality of time-space mapping, they should be taken into serious consideration in future test designs.
Thigmotaxis Mediates Trail Odour Disruption.
Stringer, Lloyd D; Corn, Joshua E; Sik Roh, Hyun; Jiménez-Pérez, Alfredo; Manning, Lee-Anne M; Harper, Aimee R; Suckling, David M
2017-05-10
Disruption of foraging using oversupply of ant trail pheromones is a novel pest management application under investigation. It presents an opportunity to investigate the interaction of sensory modalities by removal of one of the modes. Superficially similar to sex pheromone-based mating disruption in moths, ant trail pheromone disruption lacks an equivalent mechanistic understanding of how the ants respond to an oversupply of their trail pheromone. Since significant compromise of one sensory modality essential for trail following (chemotaxis) has been demonstrated, we hypothesised that other sensory modalities, such as thigmotaxis, could act to reduce the impact of olfactory disruption on foraging behaviour. To test this, we provided a physical stimulus of thread to aid trailing by Argentine ants otherwise under disruptive pheromone concentrations. Trail-following success was higher when the physical cue was available. While trail integrity was reduced under continuous over-supply of trail pheromone delivered directly on the thread, provision of a physical cue in the form of thread slightly improved trail following and mediated trail disruption from high concentrations upwind. Our results indicate that ants are able to use physical structures to reduce, but not eliminate, the effects of trail pheromone disruption.
Convergence of multimodal sensory pathways to the mushroom body calyx in Drosophila melanogaster
Yagi, Ryosuke; Mabuchi, Yuta; Mizunami, Makoto; Tanaka, Nobuaki K.
2016-01-01
Detailed structural analyses of the mushroom body, which plays critical roles in olfactory learning and memory, revealed that it is directly connected with multiple primary sensory centers in Drosophila. Connectivity patterns between the mushroom body and primary sensory centers suggest that each mushroom body lobe processes information on different combinations of multiple sensory modalities. This finding provides a novel focus for research using Drosophila genetics on how the external world is perceived through the integration of multisensory signals. PMID:27404960
Neural substrate of initiation of cross-modal working memory retrieval.
Zhang, Yangyang; Hu, Yang; Guan, Shuchen; Hong, Xiaolong; Wang, Zhaoxin; Li, Xianchun
2014-01-01
Cross-modal working memory requires integrating stimuli from different modalities and is associated with co-activation of distributed networks in the brain. However, how the brain initiates cross-modal working memory retrieval remains unclear. In the present study, we developed a cued matching task in which the necessity for cross-modal or unimodal memory retrieval, and its initiation time, were controlled by a task cue that appeared during the delay period. Using functional magnetic resonance imaging (fMRI), we observed significantly larger brain activations in the left lateral prefrontal cortex (l-LPFC), left superior parietal lobe (l-SPL), and thalamus in the cued cross-modal matching trials (CCMT) than in the cued unimodal matching trials (CUMT). However, no significant differences between conditions were observed in the l-LPFC and l-SPL in response to the sensory stimulation preceding the task cue. Although the thalamus displayed differential responses to the sensory stimulation between the two conditions, these differed from its responses to the task cues. These results reveal, first, that the frontoparietal-thalamus network participates in the initiation of cross-modal working memory retrieval and, second, that the l-SPL and thalamus show differential activations between maintenance and working memory retrieval, which might be associated with the enhanced demand for cognitive resources.
Selective Attention and Sensory Modality in Aging: Curses and Blessings.
Van Gerven, Pascal W M; Guerreiro, Maria J S
2016-01-01
The notion that selective attention is compromised in older adults as a result of impaired inhibitory control is well established. Yet it is primarily based on empirical findings covering the visual modality. Auditory and, especially, cross-modal selective attention are remarkably underexposed in the literature on aging. In the past 5 years, we have attempted to fill these voids by investigating the performance of younger and older adults on equivalent tasks covering all four combinations of visual or auditory target and visual or auditory distractor information. In doing so, we have demonstrated that older adults are especially impaired in auditory selective attention with visual distraction. This pattern of results was not mirrored by the results from our psychophysiological studies, however, in which both enhancement of target processing and suppression of distractor processing appeared to be age equivalent. We currently conclude that: (1) age-related differences in selective attention are modality dependent; (2) age-related differences in selective attention are limited; and (3) it remains an open question whether modality-specific age differences in selective attention are due to impaired distractor inhibition, impaired target enhancement, or both. These conclusions put the longstanding inhibitory deficit hypothesis of aging in a new perspective.
Brooks, Cassandra J.; Chan, Yu Man; Anderson, Andrew J.; McKendrick, Allison M.
2018-01-01
Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information. PMID:29867415
Sensory Substitution and Multimodal Mental Imagery.
Nanay, Bence
2017-09-01
Many philosophers use findings about sensory substitution devices in the grand debate about how we should individuate the senses. The big question is this: Is "vision" assisted by (tactile) sensory substitution really vision? Or is it tactile perception? Or some sui generis novel form of perception? My claim is that sensory substitution assisted "vision" is neither vision nor tactile perception, because it is not perception at all. It is mental imagery: visual mental imagery triggered by tactile sensory stimulation. But it is a special form of mental imagery that is triggered by corresponding sensory stimulation in a different sense modality, which I call "multimodal mental imagery."
Curtindale, Lori; Laurie-Rose, Cynthia; Bennett-Murphy, Laura; Hull, Sarah
2007-05-01
Applying optimal stimulation theory, the present study explored the development of sustained attention as a dynamic process. It examined the interaction of modality and temperament over time in children and adults. Second-grade children and college-aged adults performed auditory and visual vigilance tasks. Using the Carey temperament questionnaires (S. C. McDevitt & W. B. Carey, 1995), the authors classified participants according to temperament composites of reactivity and task orientation. In a preliminary study, tasks were equated across age and modality using d' matching procedures. In the main experiment, 48 children and 48 adults performed these calibrated tasks. The auditory task proved more difficult for both children and adults. Intermodal relations changed with age: Performance across modality was significantly correlated for children but not for adults. Although temperament did not significantly predict performance in adults, it did for children. The temperament effects observed in children--specifically in those with the composite of reactivity--occurred in connection with the auditory task and in a manner consistent with theoretical predictions derived from optimal stimulation theory. Copyright (c) 2007 APA, all rights reserved.
Sugiyama, Taisei; Liew, Sook-Lei
2017-01-01
Modifying sensory aspects of the learning environment can influence motor behavior. Although the effects of sensory manipulations on motor behavior have been widely studied, there still remains a great deal of variability across the field in terms of how sensory information has been manipulated or applied. Here, the authors briefly review and integrate the literature from each sensory modality to gain a better understanding of how sensory manipulations can best be used to enhance motor behavior. Then, they discuss 2 emerging themes from this literature that are important for translating sensory manipulation research into effective interventions. Finally, the authors provide future research directions that may lead to enhanced efficacy of sensory manipulations for motor learning and rehabilitation.
Smell or vision? The use of different sensory modalities in predator discrimination.
Fischer, Stefan; Oberhummer, Evelyne; Cunha-Saraiva, Filipa; Gerber, Nina; Taborsky, Barbara
2017-01-01
Theory predicts that animals should adjust their escape responses to the perceived predation risk. The information animals obtain about potential predation risk may differ qualitatively depending on the sensory modality by which a cue is perceived. For instance, olfactory cues may reveal better information about the presence or absence of threats, whereas visual information can reliably transmit the position and potential attack distance of a predator. While this suggests a differential use of information perceived through the two sensory channels, the relative importance of visual vs. olfactory cues when distinguishing between different predation threats is still poorly understood. Therefore, we exposed individuals of the cooperatively breeding cichlid Neolamprologus pulcher to a standardized threat stimulus combined with either predator or non-predator cues presented either visually or chemically. We predicted that flight responses towards a threat stimulus are more pronounced if cues of dangerous rather than harmless heterospecifics are presented and that N. pulcher, being an aquatic species, relies more on olfaction when discriminating between dangerous and harmless heterospecifics. N. pulcher responded faster to the threat stimulus, reached a refuge faster, and were more likely to enter a refuge when predator cues were perceived. Unexpectedly, the sensory modality used to perceive the cues did not affect the escape response or the duration of the recovery phase. This suggests that N. pulcher are able to discriminate heterospecific cues with similar acuity when using vision or olfaction. We discuss how this ability may be advantageous in aquatic environments, where visibility conditions vary strongly over time. The ability to rapidly discriminate between dangerous predators and harmless heterospecifics is crucial for the survival of prey animals.
In a seasonally fluctuating environment, sensory conditions may change over the year, which may make the use of multiple sensory modalities for heterospecific discrimination highly beneficial. Here we compared the efficacy of the visual and olfactory senses in the discrimination ability of the cooperatively breeding cichlid Neolamprologus pulcher. We presented individual fish with visual or olfactory cues of predators or harmless heterospecifics and recorded their flight response. When exposed to predator cues, individuals responded faster, reached a refuge faster, and were more likely to enter the refuge. Unexpectedly, the olfactory and visual senses seemed to be equally efficient in this discrimination task, suggesting that the seasonal variation of water conditions experienced by N. pulcher may necessitate the use of multiple sensory channels for the same task.
Performance Evaluation of Passive Haptic Feedback for Tactile HMI Design in CAVEs.
Lassagne, Antoine; Kemeny, Andras; Posselt, Javier; Merienne, Frederic
2018-01-01
This article presents a comparison of different haptic systems designed to simulate flat Human Machine Interfaces (HMIs), such as touchscreens, in virtual environments (VEs) such as CAVEs, and their respective performance. We compare a tangible passive transparent slate to a classic tablet and a sensory substitution system. These systems were tested during a controlled experiment. The performance and impressions of 20 subjects were collected to understand more about the modalities in the given context. The results show that the preferences of the subjects are strongly related to the use-cases and needs. In terms of performance, passive haptics proved to be significantly useful, acting as a space reference and a real-time continuous calibration system, allowing subjects to have lower execution durations and relative errors. Sensory substitution induced perception drifts during the experiment, causing significant performance disparities and demonstrating the low robustness of perception when spatial cues are insufficiently available. Our findings offer a better understanding of the nature of perception drifts and the need for strong multisensory spatial markers for such use-cases in CAVEs. The importance of a relevant haptic modality specifically designed to match a precise use-case is also emphasized.
Timescale- and Sensory Modality-Dependency of the Central Tendency of Time Perception.
Murai, Yuki; Yotsumoto, Yuko
2016-01-01
When individuals are asked to reproduce intervals of stimuli presented in intermixed order at various times, longer intervals are often underestimated and shorter intervals overestimated. This phenomenon may be attributed to the central tendency of time perception, and suggests that our brain optimally encodes a stimulus interval based on current stimulus input and prior knowledge of the distribution of stimulus intervals. Two distinct systems are thought to be recruited in the perception of sub- and supra-second intervals. Sub-second timing is subject to local sensory processing, whereas supra-second timing depends on more centralized mechanisms. To clarify the factors that influence time perception, the present study investigated how both sensory modality and timescale affect the central tendency. In Experiment 1, participants were asked to reproduce sub- or supra-second intervals, defined by visual or auditory stimuli. In the sub-second range, the magnitude of the central tendency was significantly larger for visual intervals compared to auditory intervals, while visual and auditory intervals exhibited a correlated and comparable central tendency in the supra-second range. In Experiment 2, the ability to discriminate sub-second intervals in the reproduction task was controlled across modalities by using an interval discrimination task. Even when the ability to discriminate intervals was controlled, visual intervals exhibited a larger central tendency than auditory intervals in the sub-second range. In addition, the magnitude of the central tendency for visual and auditory sub-second intervals was significantly correlated. These results suggest that a common modality-independent mechanism is responsible for the supra-second central tendency, and that both the modality-dependent and modality-independent components of the timing system contribute to the central tendency in the sub-second range.
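The optimal-encoding account above can be made concrete with a toy Bayesian observer. In the standard Gaussian formulation (a simplification for illustration; the function and parameter names below are assumptions, not taken from the study), the reproduced interval is a reliability-weighted average of the noisy sensory measurement and the mean of the prior over intervals, which pulls short intervals up and long intervals down:

```python
import numpy as np

def bayes_estimate(measured, prior_mean, sigma_sensory, sigma_prior):
    """Posterior mean for a Gaussian prior and Gaussian likelihood:
    a reliability-weighted average of the measurement and the prior mean."""
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_sensory ** 2)
    return w * measured + (1.0 - w) * prior_mean

# Intervals drawn from 400-1000 ms; assume the prior mean is 700 ms.
intervals = np.array([400.0, 700.0, 1000.0])
estimates = bayes_estimate(intervals, prior_mean=700.0,
                           sigma_sensory=100.0, sigma_prior=150.0)
# Short intervals are pulled up and long intervals pulled down toward the
# prior mean: the central tendency of time perception.
```

Raising `sigma_sensory` (a noisier modality, as vision appears to be in the sub-second range) shifts weight toward the prior and enlarges the central tendency, consistent with the larger visual effect reported above.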
Recalibration of the Multisensory Temporal Window of Integration Results from Changing Task Demands
Mégevand, Pierre; Molholm, Sophie; Nayak, Ashabari; Foxe, John J.
2013-01-01
The notion of the temporal window of integration, when applied in a multisensory context, refers to the breadth of the interval across which the brain perceives two stimuli from different sensory modalities as synchronous. It maintains a unitary perception of multisensory events despite physical and biophysical timing differences between the senses. The boundaries of the window can be influenced by attention and past sensory experience. Here we examined whether task demands could also influence the multisensory temporal window of integration. We varied the stimulus onset asynchrony between simple, short-lasting auditory and visual stimuli while participants performed two tasks in separate blocks: a temporal order judgment task that required the discrimination of subtle auditory-visual asynchronies, and a reaction time task to the first incoming stimulus irrespective of its sensory modality. We defined the temporal window of integration as the range of stimulus onset asynchronies where performance was below 75% in the temporal order judgment task, as well as the range of stimulus onset asynchronies where responses showed multisensory facilitation (race model violation) in the reaction time task. In 5 of 11 participants, we observed audio-visual stimulus onset asynchronies where reaction time was significantly accelerated (indicating successful integration in this task) while performance was accurate in the temporal order judgment task (indicating successful segregation in that task). This dissociation suggests that in some participants, the boundaries of the temporal window of integration can adaptively recalibrate in order to optimize performance according to specific task demands. PMID:23951203
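The "race model violation" criterion used in the reaction-time task above is commonly tested with Miller's inequality: if the two modalities race independently (no integration), the audiovisual reaction-time CDF cannot exceed the sum of the two unimodal CDFs. A minimal sketch with fabricated reaction times (the data and function names are illustrative only):

```python
import numpy as np

def cdf_at(rts, t):
    """Empirical cumulative probability P(RT <= t)."""
    return float(np.mean(np.asarray(rts) <= t))

def race_model_violation(rt_av, rt_a, rt_v, t):
    """Miller's race-model inequality: without integration,
    P(RT_av <= t) <= P(RT_a <= t) + P(RT_v <= t).
    Returns the amount by which the audiovisual CDF exceeds the bound;
    positive values indicate violation, i.e. multisensory facilitation."""
    bound = min(1.0, cdf_at(rt_a, t) + cdf_at(rt_v, t))
    return cdf_at(rt_av, t) - bound

# Illustrative (fabricated) reaction times in ms:
rt_a = [310, 335, 350, 372, 398]    # auditory alone
rt_v = [305, 330, 355, 370, 400]    # visual alone
rt_av = [240, 255, 270, 290, 310]   # audiovisual: faster than either alone
violation = race_model_violation(rt_av, rt_a, rt_v, t=280)
# violation > 0 here: the redundant-target responses are faster than any
# race between independent unimodal processes could produce.
```

In practice the inequality is evaluated across a range of quantiles of the RT distributions rather than at a single time point; this sketch shows only the core computation.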
Daemi, Mehdi; Harris, Laurence R; Crawford, J Douglas
2016-01-01
Animals try to make sense of sensory information from multiple modalities by categorizing it into perceptions of individual or multiple external objects or internal concepts. For example, the brain constructs sensory, spatial representations of the locations of visual and auditory stimuli in the visual and auditory cortices based on retinal and cochlear stimulations. Currently, it is not known how the brain compares the temporal and spatial features of these sensory representations to decide whether they originate from the same or separate sources in space. Here, we propose a computational model of how the brain might solve such a task. We reduce the visual and auditory information to time-varying, finite-dimensional signals. We introduce controlled, leaky integrators as working memory that retains the sensory information for the limited time-course of task implementation. We propose our model within an evidence-based, decision-making framework, where the alternative plan units are saliency maps of space. A spatiotemporal similarity measure, computed directly from the unimodal signals, is suggested as the criterion to infer common or separate causes. We provide simulations that (1) validate our model against behavioral experimental results in tasks where the participants were asked to report common or separate causes for cross-modal stimuli presented with arbitrary spatial and temporal disparities; (2) predict the behavior in novel experiments where stimuli have different combinations of spatial, temporal, and reliability features; and (3) illustrate the dynamics of the proposed internal system. These results confirm our spatiotemporal similarity measure as a viable criterion for causal inference, and our decision-making framework as a viable mechanism for target selection, which may be used by the brain in cross-modal situations.
Further, we suggest that a similar approach can be extended to other cognitive problems where working memory is a limiting factor, such as target selection among higher numbers of stimuli and selections among other modality combinations.
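Two ingredients of the model described above can be sketched in a few lines: a leaky integrator serving as a fading working-memory trace, and a correlation-based spatiotemporal similarity measure for the common-versus-separate-cause decision. All parameter values, signals, and function names here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def leaky_integrate(signal, dt=0.001, tau=0.2):
    """Leaky integrator, dm/dt = (-m + s(t)) / tau (Euler step).
    Acts as a fading working-memory trace of the input signal."""
    m = np.zeros(len(signal))
    for i in range(1, len(signal)):
        m[i] = m[i - 1] + dt * (-m[i - 1] + signal[i - 1]) / tau
    return m

def similarity(a, b):
    """Normalized correlation between two memory traces; values near 1
    favor a 'common cause' decision, values near 0 'separate causes'."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return float(a @ b / denom) if denom > 0 else 0.0

t = np.arange(0.0, 1.0, 0.001)
visual = np.sin(2 * np.pi * 3 * t)              # 3 Hz stimulus stream
auditory = np.sin(2 * np.pi * 3 * (t - 0.05))   # same stream, 50 ms lag
unrelated = np.sin(2 * np.pi * 7 * t)           # different temporal structure
mv, ma, mu = (leaky_integrate(s) for s in (visual, auditory, unrelated))
# similarity(mv, ma) exceeds similarity(mv, mu): the lagged-but-matching
# streams would be judged to share a common cause.
```

The lagged 3 Hz pair retains a high trace correlation despite the temporal disparity, while the 7 Hz stream does not, which is the qualitative behavior the similarity criterion needs for causal inference.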
Ohshiro, Tomokazu; Angelaki, Dora E; DeAngelis, Gregory C
2017-07-19
Studies of multisensory integration by single neurons have traditionally emphasized empirical principles that describe nonlinear interactions between inputs from two sensory modalities. We previously proposed that many of these empirical principles could be explained by a divisive normalization mechanism operating in brain regions where multisensory integration occurs. This normalization model makes a critical diagnostic prediction: a non-preferred sensory input from one modality, which activates the neuron on its own, should suppress the response to a preferred input from another modality. We tested this prediction by recording from neurons in macaque area MSTd that integrate visual and vestibular cues regarding self-motion. We show that many MSTd neurons exhibit the diagnostic form of cross-modal suppression, whereas unisensory neurons in area MT do not. The normalization model also fits population responses better than a model based on subtractive inhibition. These findings provide strong support for a divisive normalization mechanism in multisensory integration. Copyright © 2017 Elsevier Inc. All rights reserved.
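The diagnostic prediction above follows from the divisive-normalization form of the model: each neuron's modality-weighted drive is raised to a power and divided by pooled population activity, so a weak, non-preferred input adds little to the numerator but inflates the pool and suppresses the response. A schematic sketch (the weights, exponents, and pooling rule are illustrative assumptions, not the fitted model):

```python
def normalized_response(d_vis, d_vest, w_vis, w_vest, pool, alpha=1.0, n=2.0):
    """Schematic divisive normalization: the weighted linear drive from the
    two modalities is raised to a power and divided by a semi-saturation
    constant plus the pooled activity of the whole population."""
    drive = max(w_vis * d_vis + w_vest * d_vest, 0.0)
    return drive ** n / (alpha ** n + pool)

def pool_activity(d_vis, d_vest):
    """Pooled population activity grows with the total input, regardless of
    any single neuron's preferences."""
    return (d_vis + d_vest) ** 2

# Hypothetical neuron preferring vestibular input (w_vest >> w_vis).
r_vest_alone = normalized_response(0.0, 1.0, w_vis=0.1, w_vest=1.0,
                                   pool=pool_activity(0.0, 1.0))
r_bimodal = normalized_response(1.0, 1.0, w_vis=0.1, w_vest=1.0,
                               pool=pool_activity(1.0, 1.0))
# The weak visual input adds little to the numerator but inflates the
# normalization pool, so the bimodal response drops: cross-modal suppression.
```

This is the qualitative signature tested in MSTd: a non-preferred cross-modal input that excites the neuron on its own nevertheless suppresses the response to the preferred input when both are present.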
Neural Substrates for Verbal Working Memory in Deaf Signers: fMRI Study and Lesion Case Report
ERIC Educational Resources Information Center
Buchsbaum, Bradley; Pickell, Bert; Love, Tracy; Hatrak, Marla; Bellugi, Ursula; Hickok, Gregory
2005-01-01
The nature of the representations maintained in verbal working memory is a topic of debate. Some authors argue for a modality-dependent code, tied to particular sensory or motor systems. Others argue for a modality-neutral code. Sign language affords a unique perspective because it factors out the effects of modality. In an fMRI experiment, deaf…
Cross-Modal Attention Effects in the Vestibular Cortex during Attentive Tracking of Moving Objects.
Frank, Sebastian M; Sun, Liwei; Forster, Lisa; Tse, Peter U; Greenlee, Mark W
2016-12-14
The midposterior fundus of the Sylvian fissure in the human brain is central to the cortical processing of vestibular cues. At least two vestibular areas are located at this site: the parietoinsular vestibular cortex (PIVC) and the posterior insular cortex (PIC). It is now well established that activity in sensory systems is subject to cross-modal attention effects. Attending to a stimulus in one sensory modality enhances activity in the corresponding cortical sensory system, but simultaneously suppresses activity in other sensory systems. Here, we wanted to probe whether such cross-modal attention effects also target the vestibular system. To this end, we used a visual multiple-object tracking task. By parametrically varying the number of tracked targets, we could measure the effect of attentional load on the PIVC and the PIC while holding the perceptual load constant. Participants performed the tracking task during functional magnetic resonance imaging. Results show that, compared with passive viewing of object motion, activity during object tracking was suppressed in the PIVC and enhanced in the PIC. Greater attentional load, induced by increasing the number of tracked targets, was associated with a corresponding increase in the suppression of activity in the PIVC. Activity in the anterior part of the PIC decreased with increasing load, whereas load effects were absent in the posterior PIC. Results of a control experiment show that attention-induced suppression in the PIVC is stronger than any suppression evoked by the visual stimulus per se. Overall, our results suggest that attention has a cross-modal modulatory effect on the vestibular cortex during visual object tracking. In this study we investigate cross-modal attention effects in the human vestibular cortex. We applied the visual multiple-object tracking task because it is known to evoke attentional load effects on neural activity in visual motion-processing and attention-processing areas. 
Here we demonstrate a load-dependent effect of attention on the activation in the vestibular cortex, despite constant visual motion stimulation. We find that activity in the parietoinsular vestibular cortex is more strongly suppressed the greater the attentional load on the visual tracking task. These findings suggest cross-modal attentional modulation in the vestibular cortex. Copyright © 2016 the authors.
Neural Correlates of Sensory Substitution in Vestibular Pathways Following Complete Vestibular Loss
Sadeghi, Soroush G.; Minor, Lloyd B.; Cullen, Kathleen E.
2012-01-01
Sensory substitution is the term typically used in reference to sensory prosthetic devices designed to replace input from one defective modality with input from another modality. Such devices allow an alternative encoding of sensory information that is no longer directly provided by the defective modality in a purposeful and goal-directed manner. The behavioral recovery that follows complete vestibular loss is impressive and has long been thought to take advantage of a natural form of sensory substitution in which head motion information is no longer provided by vestibular inputs, but instead by extra-vestibular inputs such as proprioceptive and motor efference copy signals. Here we examined the neuronal correlates of this behavioral recovery after complete vestibular loss in alert behaving monkeys (Macaca mulatta). We show for the first time that extra-vestibular inputs substitute for the vestibular inputs to stabilize gaze at the level of single neurons in the VOR premotor circuitry. The summed weighting of neck proprioceptive and efference copy information was sufficient to explain simultaneously observed behavioral improvements in gaze stability. Furthermore, by altering correspondence between intended and actual head movement we revealed a four-fold increase in the weight of neck motor efference copy signals consistent with the enhanced behavioral recovery observed when head movements are voluntary versus unexpected. Thus, taken together, our results provide direct evidence that the substitution by extra-vestibular inputs in vestibular pathways provides a neural correlate for the improvements in gaze stability that are observed following the total loss of vestibular inputs. PMID:23077054
When kinesthetic information is neglected in learning a novel bimanual rhythmic coordination.
Zhu, Qin; Mirich, Todd; Huang, Shaochen; Snapp-Childs, Winona; Bingham, Geoffrey P
2017-08-01
Many studies have shown that rhythmic interlimb coordination involves perception of the coupled limb movements, and different sensory modalities can be used. Using visual displays to inform the coupled bimanual movement, novel bimanual coordination patterns can be learned with practice. A recent study showed that similar learning occurred without vision when a coach provided manual guidance during practice. The information provided via the two different modalities may be same (amodal) or different (modality specific). If it is different, then learning with both is a dual task, and one source of information might be used in preference to the other in performing the task when both are available. In the current study, participants learned a novel 90° bimanual coordination pattern without or with visual information in addition to kinesthesis. In posttest, all participants were tested without and with visual information in addition to kinesthesis. When tested with visual information, all participants exhibited performance that was significantly improved by practice. When tested without visual information, participants who practiced using only kinesthetic information showed improvement, but those who practiced with visual information in addition showed remarkably less improvement. The results indicate that (1) the information is not amodal, (2) use of a single type of information was preferred, and (3) the preferred information was visual. We also hypothesized that older participants might be more likely to acquire dual task performance given their greater experience of the two sensory modes in combination, but results were replicated with both 20- and 50-year-olds.
Ricciardi, Emiliano; Handjaras, Giacomo; Pietrini, Pietro
2014-11-01
Since the early days, how we represent the world around us has been a matter of philosophical speculation. Over the last few decades, modern neuroscience, and specifically the development of methodologies for the structural and functional exploration of the brain, has made it possible to investigate old questions with an innovative approach. In this brief review, we discuss the main findings from a series of brain anatomical and functional studies conducted in sighted and congenitally blind individuals by our own and others' laboratories. Historically, research on the 'blind brain' has focused mainly on the cross-modal plastic changes that follow sensory deprivation. More recently, a novel line of research has been developed to determine to what extent visual experience is truly required to achieve a representation of the surrounding environment. Overall, the results of these studies indicate that most of the brain's fine morphological and functional architecture is programmed to develop and function independently of any visual experience. Distinct cortical areas are able to process information in a supramodal fashion, that is, independently of the sensory modality that carries that information to the brain. These observations strongly support the hypothesis of a modality-independent, i.e. more abstract, cortical organization, and may contribute to explaining how congenitally blind individuals may interact efficiently with an external world that they have never seen. © 2014 by the Society for Experimental Biology and Medicine.
Shifts in Audiovisual Processing in Healthy Aging.
Baum, Sarah H; Stevenson, Ryan
2017-09-01
The integration of information across sensory modalities into unified percepts is a fundamental sensory process upon which a multitude of cognitive processes are based. We review the body of literature exploring aging-related changes in audiovisual integration published over the last five years. Specifically, we review the impact of changes in temporal processing, the influence of the effectiveness of sensory inputs, the role of working memory, and newer studies of intra-individual variability during these processes. Work in the last five years on bottom-up influences on sensory perception has garnered significant attention. Temporal processing, a driving factor of multisensory integration, has now been shown to decouple from multisensory integration despite their co-decline with aging. The impact of stimulus effectiveness also changes with age: older adults show maximal benefit from multisensory gain at high signal-to-noise ratios. Following sensory decline, high working memory capacity has now been shown to be somewhat of a protective factor against age-related declines in audiovisual speech perception, particularly in noise. Finally, newer research is emerging that focuses on the general intra-individual variability observed with aging. Overall, the studies of the past five years have replicated and expanded on previous work that highlights the role of bottom-up sensory changes with aging and their influence on audiovisual integration, as well as the top-down influence of working memory.
Rieucau, Guillaume; Boswell, Kevin M.; De Robertis, Alex; Macaulay, Gavin J.; Handegard, Nils Olav
2014-01-01
Aggregation is commonly thought to improve animals' security. Within aquatic ecosystems, group-living prey can learn about immediate threats using cues perceived directly from predators, or from collective behaviours, for example, by reacting to the escape behaviours of companions. Combining cues from different modalities may improve the accuracy of prey antipredatory decisions. In this study, we explored the sensory modalities that mediate collective antipredatory responses of herring (Clupea harengus) when in a large school (approximately 60 000 individuals). By conducting a simulated predator encounter experiment in a semi-controlled environment (a sea cage), we tested the hypothesis that the collective responses of herring are threat-sensitive. We investigated whether cues from potential threats obtained visually or from the perception of water displacement, used independently or in an additive way, affected the strength of the collective avoidance reactions. We modified the sensory nature of the simulated threat by exposing the herring to 4 predator models differing in shape and transparency. The collective vertical avoidance response was observed and quantified using active acoustics. The combination of sensory cues elicited the strongest avoidance reactions, suggesting that collective antipredator responses in herring are mediated by the sensory modalities involved during threat detection in an additive fashion. Thus, this study provides evidence for magnitude-graded threat responses in a large school of wild-caught herring which is consistent with the “threat-sensitive hypothesis”. PMID:24489778
Top-down modulation of visual and auditory cortical processing in aging.
Guerreiro, Maria J S; Eck, Judith; Moerel, Michelle; Evers, Elisabeth A T; Van Gerven, Pascal W M
2015-02-01
Age-related cognitive decline has been accounted for by an age-related deficit in top-down attentional modulation of sensory cortical processing. In light of recent behavioral findings showing that age-related differences in selective attention are modality dependent, our goal was to investigate the role of sensory modality in age-related differences in top-down modulation of sensory cortical processing. This question was addressed by testing younger and older individuals in several memory tasks while undergoing fMRI. Throughout these tasks, perceptual features were kept constant while attentional instructions were varied, allowing us to devise all combinations of relevant and irrelevant, visual and auditory information. We found no top-down modulation of auditory sensory cortical processing in either age group. In contrast, we found top-down modulation of visual cortical processing in both age groups, and this effect did not differ between age groups. That is, older adults enhanced cortical processing of relevant visual information and suppressed cortical processing of visual distractors during auditory attention to the same extent as younger adults. The present results indicate that older adults are capable of suppressing irrelevant visual information in the context of cross-modal auditory attention, and thereby challenge the view that age-related attentional and cognitive decline is due to a general deficit in the ability to suppress irrelevant information. Copyright © 2014 Elsevier B.V. All rights reserved.
NPY2-receptor variation modulates iconic memory processes.
Arning, Larissa; Stock, Ann-Kathrin; Kloster, Eugen; Epplen, Jörg T; Beste, Christian
2014-08-01
Sensory memory systems are modality-specific buffers that comprise information about external stimuli, which represent the earliest stage of information processing. While these systems have been the subject of cognitive neuroscience research for decades, little is known about the neurobiological basis of sensory memory. However, accumulating evidence suggests that the glutamatergic system and systems influencing glutamatergic neural transmission are important. In the current study we examine whether functional promoter variations in neuropeptide Y (NPY) and its receptor gene NPY2R affect iconic memory processes using a partial report paradigm. We found that iconic memory decayed much faster in individuals carrying the rare promoter NPY2R G allele, which is associated with increased expression of the Y2 receptor. Possibly this effect is due to altered presynaptic inhibition of glutamate release, known to be modulated by Y2 receptors. Altogether, our results provide evidence that the functionally relevant single nucleotide polymorphism (SNP) in the NPY2R promoter affects circumscribed processes of early sensory processing, i.e., only the stability of information in sensory memory buffers. This leads us to suggest that the stability of information in sensory memory buffers, in particular, depends on glutamatergic neural transmission and factors modulating glutamatergic turnover. Copyright © 2014 Elsevier B.V. and ECNP. All rights reserved.
Flexibility in embodied language understanding.
Willems, Roel M; Casasanto, Daniel
2011-01-01
Do people use sensori-motor cortices to understand language? Here we review neurocognitive studies of language comprehension in healthy adults and evaluate their possible contributions to theories of language in the brain. We start by sketching the minimal predictions that an embodied theory of language understanding makes for empirical research, and then survey studies that have been offered as evidence for embodied semantic representations. We explore four debated issues: first, does activation of sensori-motor cortices during action language understanding imply that action semantics relies on mirror neurons? Second, what is the evidence that activity in sensori-motor cortices plays a functional role in understanding language? Third, to what extent do responses in perceptual and motor areas depend on the linguistic and extra-linguistic context? And finally, can embodied theories accommodate language about abstract concepts? Based on the available evidence, we conclude that sensori-motor cortices are activated during a variety of language comprehension tasks, for both concrete and abstract language. Yet, this activity depends on the context in which perception and action words are encountered. Although modality-specific cortical activity is not a sine qua non of language processing even for language about perception and action, sensori-motor regions of the brain appear to make functional contributions to the construction of meaning, and should therefore be incorporated into models of the neurocognitive architecture of language.
The failure to detect drug-induced sensory loss in standard preclinical studies.
Gauvin, David V; Abernathy, Matthew M; Tapp, Rachel L; Yoder, Joshua D; Dalton, Jill A; Baird, Theodore J
2015-01-01
Over the years a number of drugs have been approved for human use with limited signs of toxicity noted during preclinical risk assessment study designs but then show adverse events in compliant patients taking the drugs as prescribed within the first few years on the market. Loss or impairments in sensory systems, such as hearing, vision, taste, and smell have been reported to the FDA or have been described in the literature appearing in peer-reviewed scientific journals within the first five years of widespread use. This review highlights the interactive cross-modal compensation within sensory systems that can occur that reduces the likelihood of identifying these losses in less sentient animals used in standard preclinical toxicology and safety protocols. We provide some historical and experimental evidence to substantiate these sensory effects and highlight the critical importance of detailed training of technicians on basic ethological, species-specific behaviors of all purpose-bred laboratory animals used in these study designs. We propose that the time, effort and cost of training technicians to be better able to identify and document very subtle changes in behavior will serve to increase the likelihood of early detection of biomarkers predictive of drug-induced sensory loss within current standard regulatory preclinical research protocols. Copyright © 2015 Elsevier Inc. All rights reserved.
de Carvalho Barbosa, Mariana; Kosturakis, Alyssa K; Eng, Cathy; Wendelschafer-Crabb, Gwen; Kennedy, William R; Simone, Donald A; Wang, Xin S; Cleeland, Charles S; Dougherty, Patrick M
2014-11-01
Peripheral neuropathy caused by cytotoxic chemotherapy, especially platins and taxanes, is a widespread problem among cancer survivors that is likely to continue to expand in the future. However, little work to date has focused on understanding this challenge. The goal in this study was to determine the impact of colorectal cancer and cumulative chemotherapeutic dose on sensory function to gain mechanistic insight into the subtypes of primary afferent fibers damaged by chemotherapy. Patients with colorectal cancer underwent quantitative sensory testing before and then prior to each cycle of oxaliplatin. These data were compared with those from 47 age- and sex-matched healthy volunteers. Patients showed significant subclinical deficits in sensory function before any therapy compared with healthy volunteers, and these deficits became more pronounced in patients who received chemotherapy. Sensory modalities that involved large Aβ myelinated fibers and unmyelinated C fibers were most affected by chemotherapy, whereas sensory modalities conveyed by thinly myelinated Aδ fibers were less sensitive to chemotherapy. Patients with baseline sensory deficits went on to develop more symptom complaints during chemotherapy than those who had no baseline deficit. Patients who were tested again 6 to 12 months after chemotherapy presented with the most numbness and pain and also the most pronounced sensory deficits. Our results illuminate a mechanistic connection between the pattern of effects on sensory function and the nerve fiber types that appear to be most vulnerable to chemotherapy-induced toxicity, with implications for how to focus future work to ameliorate risks of peripheral neuropathy. ©2014 American Association for Cancer Research.
Harjunen, Ville J; Ahmed, Imtiaj; Jacucci, Giulio; Ravaja, Niklas; Spapé, Michiel M
2017-01-01
Earlier studies have revealed cross-modal visuo-tactile interactions in endogenous spatial attention. The current research used event-related potentials (ERPs) and virtual reality (VR) to identify how the visual cues of the perceiver's body affect visuo-tactile interaction in endogenous spatial attention and at what point in time the effect takes place. A bimodal oddball task with lateralized tactile and visual stimuli was presented in two VR conditions, one with and one without visible hands, and one VR-free control with hands in view. Participants were required to silently count one type of stimulus and ignore all other stimuli presented in an irrelevant modality or location. The presence of hands was found to modulate early and late components of somatosensory and visual evoked potentials. For sensory-perceptual stages, the presence of virtual or real hands was found to amplify attention-related negativity on the somatosensory N140 and cross-modal interaction in somatosensory and visual P200. For postperceptual stages, an amplified N200 component was obtained in somatosensory and visual evoked potentials, indicating increased response inhibition in response to non-target stimuli. The somatosensory, but not the visual, N200 effect was enhanced when the virtual hands were present. The findings suggest that bodily presence affects sustained cross-modal spatial attention between vision and touch and that this effect is specifically present in ERPs related to early- and late-sensory processing, as well as response inhibition, but does not extend to later attention- and memory-related P3 activity. Finally, the experiments provide commensurable scenarios for the estimation of the signal-to-noise ratio to quantify effects related to the use of a head mounted display (HMD). However, despite valid a-priori reasons for fearing signal interference due to an HMD, we observed no significant drop in the robustness of our ERP measurements.
Zelic, Gregory; Mottet, Denis; Lagarde, Julien
2012-01-01
Recent behavioral neuroscience research revealed that elementary reactive behavior can be improved in the case of cross-modal sensory interactions thanks to underlying multisensory integration mechanisms. Can this benefit be generalized to an ongoing coordination of movements under severe physical constraints? We chose a juggling task to examine this question. A central issue well-known in juggling lies in establishing and maintaining a specific temporal coordination among balls, hands, eyes and posture. Here, we tested whether providing additional timing information about the balls and hands motions by using external sound and tactile periodic stimulations, the latter presented at the wrists, improved the behavior of jugglers. One specific combination of auditory and tactile metronome led to a decrease of the spatiotemporal variability of the juggler's performance: a simple sound associated with left and right tactile cues presented antiphase to each other, which corresponded to the temporal pattern of hands movement in the juggling task. By contrast, no improvements were obtained in the case of other auditory and tactile combinations. We even found a degraded performance when tactile events were presented alone. The nervous system thus appears able to integrate in an efficient way environmental information brought by different sensory modalities, but only if the information specified matches specific features of the coordination pattern. We discuss the possible implications of these results for the understanding of the neuronal integration process involved in audio-tactile interaction in the context of complex voluntary movement, and considering the well-known gating effect of movement on vibrotactile perception. PMID:22384211
Auditory to Visual Cross-Modal Adaptation for Emotion: Psychophysical and Neural Correlates.
Wang, Xiaodong; Guo, Xiaotao; Chen, Lin; Liu, Yijun; Goldberg, Michael E; Xu, Hong
2017-02-01
Adaptation is fundamental in sensory processing and has been studied extensively within the same sensory modality. However, little is known about adaptation across sensory modalities, especially in the context of high-level processing, such as the perception of emotion. Previous studies have shown that prolonged exposure to a face exhibiting one emotion, such as happiness, leads to contrastive biases in the perception of subsequently presented faces toward the opposite emotion, such as sadness. Such work has shown the importance of adaptation in calibrating face perception based on prior visual exposure. In the present study, we showed for the first time that emotion-laden sounds, like laughter, adapt the visual perception of emotional faces, that is, subjects more frequently perceived faces as sad after listening to a happy sound. Furthermore, via electroencephalography recordings and event-related potential analysis, we showed that there was a neural correlate underlying the perceptual bias: There was an attenuated response occurring at ∼ 400 ms to happy test faces and a quickened response to sad test faces, after exposure to a happy sound. Our results provide the first direct evidence for a behavioral cross-modal adaptation effect on the perception of facial emotion, and its neural correlate. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Neuro-Linguistic Programming, Matching Sensory Predicates, and Rapport.
ERIC Educational Resources Information Center
Schmedlen, George W.; And Others
A key task for the therapist in psychotherapy is to build trust and rapport with the client. Neuro-Linguistic Programming (NLP) practitioners believe that matching the sensory modality (representational system) of a client's predicates (verbs, adverbs, and adjectives) improves rapport. In this study, 16 volunteer subjects participated in two…
Learning of Sensory Sequences in Cerebellar Patients
ERIC Educational Resources Information Center
Frings, Markus; Boenisch, Raoul; Gerwig, Marcus; Diener, Hans-Christoph; Timmann, Dagmar
2004-01-01
A possible role of the cerebellum in detecting and recognizing event sequences has been proposed. The present study sought to determine whether patients with cerebellar lesions are impaired in the acquisition and discrimination of sequences of sensory stimuli of different modalities. A group of 26 cerebellar patients and 26 controls matched for…
Evidence for Diminished Multisensory Integration in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.
2014-01-01
Individuals with autism spectrum disorders (ASD) exhibit alterations in sensory processing, including changes in the integration of information across the different sensory modalities. In the current study, we used the sound-induced flash illusion to assess multisensory integration in children with ASD and typically-developing (TD) controls.…
Bedford, Felice L
2012-02-01
A new theory of mind-body interaction in healing is proposed based on considerations from the field of perception. It is suggested that the combined effect of visual imagery and mindful meditation on physical healing is simply another example of cross-modal adaptation in perception, much like adaptation to prism-displaced vision. It is argued that psychological interventions produce a conflict between the perceptual modalities of the immune system and vision (or touch), which leads to change in the immune system in order to realign the modalities. It is argued that mind-body interactions do not exist because of higher-order cognitive thoughts or beliefs influencing the body, but instead result from ordinary interactions between lower-level perceptual modalities that function to detect when sensory systems have made an error. The theory helps explain why certain illnesses may be more amenable to mind-body interaction, such as autoimmune conditions in which a sensory system (the immune system) has made an error. It also renders sensible erroneous changes, such as those brought about by "faith healers," as conflicts between modalities that are resolved in favor of the wrong modality. The present view provides one of very few psychological theories of how guided imagery and mindfulness meditation bring about positive physical change. Also discussed are issues of self versus non-self, pain, cancer, body schema, attention, consciousness, and, importantly, developing the concept that the immune system is a rightful perceptual modality. Recognizing mind-body healing as perceptual cross-modal adaptation implies that a century of cross-modal perception research is applicable to the immune system.
Information fusion via isocortex-based Area 37 modeling
NASA Astrophysics Data System (ADS)
Peterson, James K.
2004-08-01
A simplified model of information processing in the brain can be constructed using primary sensory input from two modalities (auditory and visual) and recurrent connections to the limbic subsystem. Information fusion would then occur in Area 37 of the temporal cortex. The creation of meta concepts from the low order primary inputs is managed by models of isocortex processing. Isocortex algorithms are used to model parietal (auditory), occipital (visual), temporal (polymodal fusion) cortex and the limbic system. Each of these four modules is constructed out of five cortical stacks in which each stack consists of three vertically oriented six layer isocortex models. The input to output training of each cortical model uses the OCOS (on center - off surround) and FFP (folded feedback pathway) circuitry of (Grossberg, 1) which is inherently a recurrent network type of learning characterized by the identification of perceptual groups. Models of this sort are thus closely related to cognitive models as it is difficult to divorce the sensory processing subsystems from the higher level processing in the associative cortex. The overall software architecture presented is biologically based and is presented as a potential architectural prototype for the development of novel sensory fusion strategies. The algorithms are motivated to some degree by specific data from projects on musical composition and autonomous fine art painting programs, but only in the sense that these projects use two specific types of auditory and visual cortex data. Hence, the architectures are presented for an artificial information processing system which utilizes two disparate sensory sources. The exact nature of the two primary sensory input streams is irrelevant.
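The module layout described in the abstract (four cortical modules, each built from five cortical stacks, each stack containing three six-layer isocortex models) can be sketched as a nested data structure. This is an illustrative sketch only; the names and counts come from the abstract, but the representation is hypothetical and not from the original software:

```python
# Hypothetical sketch of the hierarchy described in the abstract:
# 4 modules (parietal, occipital, temporal, limbic), each made of
# 5 cortical stacks, each stack holding 3 six-layer isocortex models.
# All identifiers here are illustrative, not from the original system.

LAYERS_PER_MODEL = 6
MODELS_PER_STACK = 3
STACKS_PER_MODULE = 5


def build_module(name: str) -> dict:
    """Return one cortical module as a nested dict of stacks and models."""
    return {
        "name": name,
        "stacks": [
            [{"model_index": m, "layers": LAYERS_PER_MODEL}
             for m in range(MODELS_PER_STACK)]
            for _ in range(STACKS_PER_MODULE)
        ],
    }


# The four modules named in the abstract.
architecture = [build_module(n)
                for n in ("parietal", "occipital", "temporal", "limbic")]

# Total isocortex models across the whole architecture:
# 4 modules x 5 stacks x 3 models = 60.
total_models = sum(len(stack)
                   for module in architecture
                   for stack in module["stacks"])
print(total_models)  # 60
```

The sketch only captures the static composition; the OCOS/FFP recurrent learning dynamics within each six-layer model are beyond what the abstract specifies.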
Visual Cortex Plasticity: A Complex Interplay of Genetic and Environmental Influences
Maya-Vetencourt, José Fernando; Origlia, Nicola
2012-01-01
The central nervous system architecture is highly dynamic and continuously modified by sensory experience through processes of neuronal plasticity. Plasticity is achieved by a complex interplay of environmental influences and physiological mechanisms that ultimately activate intracellular signal transduction pathways regulating gene expression. In addition to the remarkable variety of transcription factors and their combinatorial interaction at specific gene promoters, epigenetic mechanisms that regulate transcription have emerged as conserved processes by which the nervous system accomplishes the induction of plasticity. Experience-dependent changes of DNA methylation patterns and histone posttranslational modifications are, in fact, recruited as targets of plasticity-associated signal transduction mechanisms. Here, we shall concentrate on structural and functional consequences of early sensory deprivation in the visual system and discuss how intracellular signal transduction pathways associated with experience regulate changes of chromatin structure and gene expression patterns that underlie these plastic phenomena. Recent experimental evidence for mechanisms of cross-modal plasticity following congenital or acquired sensory deprivation both in human and animal models will be considered as well. We shall also review different experimental strategies that can be used to achieve the recovery of sensory functions after long-term deprivation in humans. PMID:22852098
Yildirim, Ilker; Jacobs, Robert A
2015-06-01
If a person is trained to recognize or categorize objects or events using one sensory modality, the person can often recognize or categorize those same (or similar) objects and events via a novel modality. This phenomenon is an instance of cross-modal transfer of knowledge. Here, we study the Multisensory Hypothesis which states that people extract the intrinsic, modality-independent properties of objects and events, and represent these properties in multisensory representations. These representations underlie cross-modal transfer of knowledge. We conducted an experiment evaluating whether people transfer sequence category knowledge across auditory and visual domains. Our experimental data clearly indicate that we do. We also developed a computational model accounting for our experimental results. Consistent with the probabilistic language of thought approach to cognitive modeling, our model formalizes multisensory representations as symbolic "computer programs" and uses Bayesian inference to learn these representations. Because the model demonstrates how the acquisition and use of amodal, multisensory representations can underlie cross-modal transfer of knowledge, and because the model accounts for subjects' experimental performances, our work lends credence to the Multisensory Hypothesis. Overall, our work suggests that people automatically extract and represent objects' and events' intrinsic properties, and use these properties to process and understand the same (and similar) objects and events when they are perceived through novel sensory modalities.
Electrophysiological Correlates of Automatic Visual Change Detection in School-Age Children
ERIC Educational Resources Information Center
Clery, Helen; Roux, Sylvie; Besle, Julien; Giard, Marie-Helene; Bruneau, Nicole; Gomot, Marie
2012-01-01
Automatic stimulus-change detection is usually investigated in the auditory modality by studying Mismatch Negativity (MMN). Although the change-detection process occurs in all sensory modalities, little is known about visual deviance detection, particularly regarding the development of this brain function throughout childhood. The aim of the…
Perceptual Learning Style and Learning Proficiency: A Test of the Hypothesis
ERIC Educational Resources Information Center
Kratzig, Gregory P.; Arbuthnott, Katherine D.
2006-01-01
Given the potential importance of using modality preference with instruction, the authors tested whether learning style preference correlated with memory performance in each of 3 sensory modalities: visual, auditory, and kinesthetic. In Study 1, participants completed objective measures of pictorial, auditory, and tactile learning and learning…
Up by upwest: Is slope like north?
Weisberg, Steven M; Nardi, Daniele; Newcombe, Nora S; Shipley, Thomas F
2014-10-01
Terrain slope can be used to encode the location of a goal. However, this directional information may be encoded using a conceptual north (i.e., invariantly with respect to the environment), or in an observer-relative fashion (i.e., varying depending on the direction one faces when learning the goal). This study examines which representation is used, whether the sensory modality in which slope is encoded (visual, kinaesthetic, or both) influences representations, and whether use of slope varies for men and women. In a square room, with a sloped floor explicitly pointed out as the only useful cue, participants encoded the corner in which a goal was hidden. Without direct sensory access to slope cues, participants used a dial to point to the goal. For each trial, the goal was hidden uphill or downhill, and the participants were informed whether they faced uphill or downhill when pointing. In support of observer-relative representations, participants pointed more accurately and quickly when facing concordantly with the hiding position. There was no effect of sensory modality, providing support for functional equivalence. Sex did not interact with the findings on modality or reference frame, but spatial measures correlated with success on the slope task differently for each sex.
Ku, Yixuan; Zhao, Di; Hao, Ning; Hu, Yi; Bodner, Mark; Zhou, Yong-Di
2015-01-01
Both monkey neurophysiological and human EEG studies have shown that association cortices, as well as primary sensory cortical areas, play an essential role in sequential neural processes underlying cross-modal working memory. The present study aims to further examine causal and sequential roles of the primary sensory cortex and association cortex in cross-modal working memory. Individual MRI-based single-pulse transcranial magnetic stimulation (spTMS) was applied to bilateral primary somatosensory cortices (SI) and the contralateral posterior parietal cortex (PPC), while participants were performing a tactile-visual cross-modal delayed matching-to-sample task. Time points of spTMS were 300 ms, 600 ms, 900 ms after the onset of the tactile sample stimulus in the task. The accuracy of task performance and reaction time were significantly impaired when spTMS was applied to the contralateral SI at 300 ms. Significant impairment on performance accuracy was also observed when the contralateral PPC was stimulated at 600 ms. SI and PPC play sequential and distinct roles in neural processes of cross-modal associations and working memory. Copyright © 2015 Elsevier Inc. All rights reserved.
2012-01-01
Background A flexed neck posture leads to non-specific activation of the brain. Sensory evoked cerebral potentials and focal brain blood flow have been used to evaluate the activation of the sensory cortex. We investigated the effects of a flexed neck posture on the cerebral potentials evoked by visual, auditory and somatosensory stimuli and focal brain blood flow in the related sensory cortices. Methods Twelve healthy young adults received right visual hemi-field, binaural auditory and left median nerve stimuli while sitting with the neck in a resting and flexed (20° flexion) position. Sensory evoked potentials were recorded from the right occipital region, Cz in accordance with the international 10–20 system, and 2 cm posterior from C4, during visual, auditory and somatosensory stimulations. The oxidative-hemoglobin concentration was measured in the respective sensory cortex using near-infrared spectroscopy. Results Latencies of the late component of all sensory evoked potentials significantly shortened, and the amplitude of auditory evoked potentials increased when the neck was in a flexed position. Oxidative-hemoglobin concentrations in the left and right visual cortices were higher during visual stimulation in the flexed neck position. The left visual cortex is responsible for receiving the visual information. In addition, oxidative-hemoglobin concentrations in the bilateral auditory cortex during auditory stimulation, and in the right somatosensory cortex during somatosensory stimulation, were higher in the flexed neck position. Conclusions Visual, auditory and somatosensory pathways were activated by neck flexion. The sensory cortices were selectively activated, reflecting the modalities in sensory projection to the cerebral cortex and inter-hemispheric connections. PMID:23199306
Hayes, Keith C; Wolfe, Dalton L; Hsieh, Jane T; Potter, Patrick J; Krassioukov, Andrei; Durham, Carmen E
2002-11-01
To determine the degree of association among indices of preserved sensation derived from quantitative sensory testing (QST), somatosensory evoked potentials (SEPs), and the clinical characteristics of patients with spinal cord injury (SCI). A controlled correlational study of diverse measures of preserved sensory function. Regional SCI rehabilitation center in Ontario, Canada. Thirty-three patients with incomplete SCI and 14 able-bodied controls. Not applicable. QST measures of perceptual threshold for temperature and vibration, American Spinal Injury Association sensory scores (light touch, pinprick), and tibial nerve SEPs. There was a low degree of association (kappa) between QST results and sensory scores (|kappa|=.05-.44). QST measures yielded greater numbers of patients with SCI being classified as impaired, suggesting a greater sensitivity of QST to detect more subtle sensory deficits. QST measures of vibration threshold generally corresponded to the patients' SEP recordings. QST measures of modalities conveyed within the same tract were significantly (P<.05) correlated (|r|=.46-.84) in patients with SCI, but not in controls, whereas those modalities mediated by different pathways had lower and generally nonsignificant correlations (|r|=.05-.44) in both patients and controls. The low degree of association between QST measures and sensory scores is likely attributable to measurement limitations of both assessments, as well as various neuroanatomic and neuropathologic factors. QST provides more sensitive detection of preserved sensory function than does standard clinical examination in patients with incomplete SCI. Copyright 2002 by the American Congress of Rehabilitation Medicine and the American Academy of Physical Medicine and Rehabilitation
Visual Landmarks Facilitate Rodent Spatial Navigation in Virtual Reality Environments
ERIC Educational Resources Information Center
Youngstrom, Isaac A.; Strowbridge, Ben W.
2012-01-01
Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…
ERIC Educational Resources Information Center
Dicks, Bella
2013-01-01
This paper presents findings from a qualitative UK study exploring the social practices of schoolchildren visiting an interactive science discovery centre. It is promoted as a place for "learning through doing", but the multi-modal, ethnographic methods adopted suggest that children were primarily engaged in (1) sensory pleasure-taking…
Naturalizing aesthetics: brain areas for aesthetic appraisal across sensory modalities.
Brown, Steven; Gao, Xiaoqing; Tisdelle, Loren; Eickhoff, Simon B; Liotti, Mario
2011-09-01
We present here the most comprehensive analysis to date of neuroaesthetic processing by reporting the results of voxel-based meta-analyses of 93 neuroimaging studies of positive-valence aesthetic appraisal across four sensory modalities. The results demonstrate that the most concordant area of activation across all four modalities is the right anterior insula, an area typically associated with visceral perception, especially of negative valence (disgust, pain, etc.). We argue that aesthetic processing is, at its core, the appraisal of the valence of perceived objects. This appraisal is in no way limited to artworks but is instead applicable to all types of perceived objects. Therefore, one way to naturalize aesthetics is to argue that such a system evolved first for the appraisal of objects of survival advantage, such as food sources, and was later co-opted in humans for the experience of artworks for the satisfaction of social needs. Copyright © 2011 Elsevier Inc. All rights reserved.
[Mental Imagery: Neurophysiology and Implications in Psychiatry].
Martínez, Nathalie Tamayo
2014-03-01
To provide an explanation about what mental imagery is and some implications in psychiatry. This article is a narrative literature review. There are many terms in which imagery representations are described in different fields of research. They are defined as perceptions in the absence of an external stimulus, and can be created in any sensory modality. Their neurophysiological substrate is almost the same as the one activated during sensory perception. There is no unified theory about its function, but it is possibly the way that our brain uses and manipulates the information to respond to the environment. Mental imagery is an everyday phenomenon, and when it occurs in specific patterns it can be a sign of mental disorders. Copyright © 2014 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.
Distributed multisensory integration in a recurrent network model through supervised learning
NASA Astrophysics Data System (ADS)
Wang, He; Wong, K. Y. Michael
Sensory integration between different modalities has been extensively studied. It is suggested that the brain integrates signals from different modalities in a Bayesian optimal way. However, how the Bayesian rule is implemented in a neural network remains under debate. In this work we propose a biologically plausible recurrent network model that can perform Bayesian multisensory integration after being trained by supervised learning. Our model is composed of two modules, one for each modality. We assume that each module is a recurrent network whose activity represents the posterior distribution of each stimulus. The feedforward input to each module is the likelihood of each modality. The two modules are integrated through cross-links, which are feedforward connections from the other modality, and reciprocal connections, which are recurrent connections between different modules. By stochastic gradient descent, we successfully trained the feedforward and recurrent coupling matrices simultaneously, both of which resemble the Mexican hat. We also find that there is more than one set of coupling matrices that can approximate the Bayesian theorem well. Specifically, reciprocal connections and cross-links will compensate for each other if one of them is removed. Even though trained with two inputs, the network's performance with only one input is in good accordance with what is predicted by the Bayesian theorem.
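The Bayesian-optimal rule such a network is trained to approximate can be stated directly: for two independent Gaussian cues, the fused estimate is the precision-weighted average of the cue means, and the fused precision is the sum of the cue precisions. A minimal sketch, with invented cue values:

```python
# Precision-weighted fusion of two Gaussian cues (the Bayesian-optimal
# integration rule the recurrent network approximates). Values invented.
def integrate(mu1, var1, mu2, var2):
    """Return (mean, variance) of the optimally fused estimate."""
    w1, w2 = 1 / var1, 1 / var2            # precisions of each cue
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2) # precision-weighted average
    var = 1 / (w1 + w2)                    # fused variance shrinks
    return mu, var

# A noisy cue (variance 4) combined with a reliable one (variance 1):
mu, var = integrate(10.0, 4.0, 14.0, 1.0)
print(mu, var)  # fused estimate lies closer to the more reliable cue
```

The fused variance is always smaller than either input variance, which is the quantitative sense in which integration is "optimal".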
Emerging Role of Sensory Perception in Aging and Metabolism.
Riera, Celine E; Dillin, Andrew
2016-05-01
Sensory perception comprises gustatory (taste) and olfactory (smell) modalities as well as somatosensory (pain, heat, and tactile mechanosensory) inputs, which are detected by a multitude of sensory receptors. These sensory receptors are contained in specialized ciliated neurons where they detect changes in environmental conditions and participate in behavioral decisions ranging from food choice to avoiding harmful conditions, thus ensuring basic survival in metazoans. Recent genetic studies, however, indicate that sensory perception plays additional physiological functions, notably influencing energy homeostatic processes and longevity through neuronal circuits originating from sensory tissues. Here we review how these findings are redefining metabolic signaling and establish a prominent role of sensory neuroendocrine processes in controlling health span and lifespan, with a goal of translating this knowledge towards managing age-associated diseases. Copyright © 2016. Published by Elsevier Ltd.
Interoception: the forgotten modality in perceptual grounding of abstract and concrete concepts.
Connell, Louise; Lynott, Dermot; Banks, Briony
2018-08-05
Conceptual representations are perceptually grounded, but when investigating which perceptual modalities are involved, researchers have typically restricted their consideration to vision, touch, hearing, taste and smell. However, there is another major modality of perceptual information that is distinct from these traditional five senses; that is, interoception, or sensations inside the body. In this paper, we use megastudy data (modality-specific ratings of perceptual strength for over 32 000 words) to explore how interoceptive information contributes to the perceptual grounding of abstract and concrete concepts. We report how interoceptive strength captures a distinct form of perceptual experience across the abstract-concrete spectrum, but is markedly more important to abstract concepts (e.g. hungry, serenity) than to concrete concepts (e.g. capacity, rainy). In particular, interoception dominates emotion concepts, especially negative emotions relating to fear and sadness, more so than other concepts of equivalent abstractness and valence. Finally, we examine whether interoceptive strength represents valuable information in conceptual content by investigating its role in concreteness effects in word recognition, and find that it enhances semantic facilitation over and above the traditional five sensory modalities. Overall, these findings suggest that interoception has comparable status to other modalities in contributing to the perceptual grounding of abstract and concrete concepts. This article is part of the theme issue 'Varieties of abstract concepts: development, use and representation in the brain'. © 2018 The Author(s).
Somatosensory disturbance by methylmercury exposure.
Takaoka, Shigeru; Kawakami, Yoshinobu; Fujino, Tadashi; Oh-ishi, Fumihiro; Motokura, Fukuo; Kumagai, Yoshio; Miyaoka, Tetsu
2008-05-01
Minamata disease is methylmercury poisoning from consuming fish and shellfish contaminated by industrial waste. The polluted seafood was widely consumed in the area around Minamata, but many individuals were never examined for or classified as having Minamata disease. Following the determination of the Supreme Court of Japan in October 2004 that the Japanese Government was responsible for spreading Minamata disease, over 13,000 residents came forward to be examined for Minamata disease. We studied 197 residents from the Minamata area who had a history of fish consumption during the polluted period to determine the importance of sensory symptoms and findings in making a diagnosis of Minamata disease. We divided the exposed subjects into non-complicated (E) and complicated (E+N) groups based on the absence or presence of other neurological or neurologically related disorders and compared them to residents in a control area (C) after matching for age and sex. We quantitatively measured four somatosensory modalities (minimal tactile sense by Semmes-Weinstein monofilaments, vibration sense, position sense, and two-point discrimination) and did psychophysical tests of fine-surface-texture discrimination. Subjective complaints were higher in groups E and E+N than C. Over 90% of E+N and E subjects displayed a sensory disturbance on conventional neurological examination and 28% had visual constriction. About 50% of the E and E+N groups had upper and lower extremity ataxia and about 70% had truncal ataxia. The prevalence of these neurological findings was significantly higher in exposed subjects than controls. All sensory modalities were impaired in the E and E+N groups. All four quantitatively measured sensory modalities were correlated. The prevalence of complaints, neurological findings, and sensory impairment was similar or a little worse in group E+N than in group E. We conclude that sensory symptoms and findings are important in making the diagnosis of Minamata disease and that they can be determined even in the presence of neurological or neurologically related diseases.
Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath
2018-05-24
Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here, we demonstrate that modulating visual perceptual load can impact the early sensory encoding of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to native speech sounds. Speech sounds were presented either in a 'repetitive context', or a less predictable 'variable context'. Independent of auditory stimulus context, pre-attentive auditory cortical activity was reduced during high visual load, relative to low visual load. We applied a data-driven machine learning approach to decode speech sounds from the early auditory electrophysiological responses. Decoding performance was found to be poorer under conditions of high (relative to low) visual load, when the incoming acoustic stream was predictable. When the auditory stimulus context was less predictable, decoding performance was substantially greater for the high (relative to low) visual load conditions. Our results provide support for shared attentional resources between visual and auditory modalities that substantially influence the early sensory encoding of speech signals in a context-dependent manner. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
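The decoding step can be illustrated with a toy nearest-centroid classifier on invented trial features; this is only a sketch of the general data-driven decoding idea (classify each trial by its distance to per-class mean responses), not the authors' actual pipeline:

```python
# Toy decoder: assign a trial's feature vector to the speech-sound class
# whose training centroid is nearest. All data below are invented.
def centroid(rows):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def nearest_centroid_predict(train, labels, x):
    """Predict the label of x by squared distance to class centroids."""
    cents = {lab: centroid([r for r, l in zip(train, labels) if l == lab])
             for lab in set(labels)}
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, x))
    return min(cents, key=lambda lab: dist(cents[lab]))

# Two hypothetical speech-sound classes with 2-D response features:
train  = [[0.9, 0.1], [1.1, 0.0], [0.1, 0.8], [0.0, 1.2]]
labels = ['ba', 'ba', 'da', 'da']
print(nearest_centroid_predict(train, labels, [1.0, 0.2]))  # -> 'ba'
```

Decoding accuracy over many held-out trials is then the performance measure that, in the study, varied with visual load and stimulus-context predictability.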
Basic and supplementary sensory feedback in handwriting
Danna, Jérémy; Velay, Jean-Luc
2015-01-01
The mastering of handwriting is so essential in our society that it is important to try to find new methods for facilitating its learning and rehabilitation. The ability to control the graphic movements clearly impacts on the quality of the writing. This control allows both the programming of letter formation before movement execution and the online adjustments during execution, thanks to diverse sensory feedback (FB). New technologies improve existing techniques or enable new methods to supply the writer with real-time computer-assisted FB. The possibilities are numerous and various. Therefore, two main questions arise: (1) What aspect of the movement is concerned and (2) How can we best inform the writer to help them correct their handwriting? In a first step, we report studies on FB naturally used by the writer. The purpose is to determine which information is carried by each sensory modality, how it is used in handwriting control and how this control changes with practice and learning. In a second step, we report studies on supplementary FB provided to the writer to help them to better control and learn how to write. We suggest that, depending on their contents, certain sensory modalities will be more appropriate than others to assist handwriting motor control. We emphasize particularly the relevance of auditory modality as online supplementary FB on handwriting movements. Using real-time supplementary FB to assist in the handwriting process is probably destined for a brilliant future with the growing availability and rapid development of tablets. PMID:25750633
The role of multisensory interplay in enabling temporal expectations.
Ball, Felix; Michels, Lara E; Thiele, Carsten; Noesselt, Toemme
2018-01-01
Temporal regularities can guide our attention to focus on a particular moment in time and to be especially vigilant just then. Previous research provided evidence for the influence of temporal expectation on perceptual processing in unisensory auditory, visual, and tactile contexts. However, in real life we are often exposed to a complex and continuous stream of multisensory events. Here we tested - in a series of experiments - whether temporal expectations can enhance perception in multisensory contexts and whether this enhancement differs from enhancements in unisensory contexts. Our discrimination paradigm contained near-threshold targets (subject-specific 75% discrimination accuracy) embedded in a sequence of distractors. The likelihood of target occurrence (early or late) was manipulated block-wise. Furthermore, we tested whether spatial and modality-specific target uncertainty (i.e. predictable vs. unpredictable target position or modality) would affect temporal expectation (TE) measured with perceptual sensitivity (d′) and response times (RT). In all our experiments, hidden temporal regularities improved performance for expected multisensory targets. Moreover, multisensory performance was unaffected by spatial and modality-specific uncertainty, whereas unisensory TE effects on d′ but not RT were modulated by spatial and modality-specific uncertainty. Additionally, the size of the temporal expectation effect, i.e. the increase in perceptual sensitivity and decrease of RT, scaled linearly with the likelihood of expected targets. Finally, temporal expectation effects were unaffected by varying target position within the stream. Together, our results strongly suggest that participants quickly adapt to novel temporal contexts, that they benefit from multisensory (relative to unisensory) stimulation and that multisensory benefits are maximal if the stimulus-driven uncertainty is highest. We propose that enhanced informational content (i.e. multisensory stimulation) enables the robust extraction of temporal regularities which in turn boost (uni-)sensory representations. Copyright © 2017 Elsevier B.V. All rights reserved.
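The perceptual-sensitivity measure d′ used in such discrimination paradigms is the difference of the inverse-normal-transformed hit and false-alarm rates. A minimal sketch with made-up trial counts:

```python
# d' = z(hit rate) - z(false-alarm rate), via the inverse normal CDF.
# Trial counts below are invented for illustration.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# 80% hits, 20% false alarms:
print(round(d_prime(40, 10, 10, 40), 2))  # -> 1.68
```

In practice, hit or false-alarm rates of exactly 0 or 1 are adjusted (e.g. by a log-linear correction) before the transform, since the inverse CDF is unbounded there.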
Priming Letters by Colors: Evidence for the Bidirectionality of Grapheme-Color Synesthesia
ERIC Educational Resources Information Center
Weiss, Peter H.; Kalckert, Andreas; Fink, Gereon R.
2009-01-01
In synesthesia, stimulation of one sensory modality leads to a percept in another nonstimulated modality, for example, graphemes trigger an additional color percept in grapheme-color synesthesia, which encompasses the variants letter-color and digit-color synesthesia. Until recently, it was assumed that synesthesia occurs strictly unidirectional:…
Auditory-Visual Intermodal Matching of Small Numerosities in 6-Month-Old Infants
ERIC Educational Resources Information Center
Kobayashi, Tessei; Hiraki, Kazuo; Hasegawa, Toshikazu
2005-01-01
Recent studies have reported that preverbal infants are able to discriminate between numerosities of sets presented within a particular modality. There is still debate, however, over whether they are able to perform intermodal numerosity matching, i.e. to relate numerosities of sets presented with different sensory modalities. The present study…
Differential effects of ongoing EEG beta and theta power on memory formation
Scholz, Sebastian; Schneider, Signe Luisa
2017-01-01
Recently, elevated ongoing pre-stimulus beta power (13–17 Hz) at encoding has been associated with subsequent memory formation for visual stimulus material. It is unclear whether this activity is merely specific to visual processing or whether it reflects a state facilitating general memory formation, independent of stimulus modality. To answer that question, the present study investigated the relationship between neural pre-stimulus oscillations and verbal memory formation in different sensory modalities. For that purpose, a within-subject design was employed to explore differences between successful and failed memory formation in the visual and auditory modality. Furthermore, associative memory was addressed by presenting the stimuli in combination with background images. Results revealed that similar EEG activity in the low beta frequency range (13–17 Hz) is associated with subsequent memory success, independent of stimulus modality. Elevated power prior to stimulus onset differentiated successful from failed memory formation. In contrast, differential effects between modalities were found in the theta band (3–7 Hz), with an increased oscillatory activity before the onset of later remembered visually presented words. In addition, pre-stimulus theta power dissociated between successful and failed encoding of associated context, independent of the stimulus modality of the item itself. We therefore suggest that increased ongoing low beta activity reflects a memory promoting state, which is likely to be moderated by modality-independent attentional or inhibitory processes, whereas high ongoing theta power is suggested as an indicator of the enhanced binding of incoming interlinked information. PMID:28192459
Ibrahim, Leena A.; Mesik, Lukas; Ji, Xu-ying; Fang, Qi; Li, Hai-fu; Li, Ya-tang; Zingg, Brian; Zhang, Li I.; Tao, Huizhong Whit
2016-01-01
Cross-modality interaction in sensory perception is advantageous for animals' survival. How cortical sensory processing is cross-modally modulated, and what the underlying neural circuits are, remain poorly understood. In mouse primary visual cortex (V1), we discovered that orientation selectivity of layer (L)2/3 but not L4 excitatory neurons was sharpened in the presence of sound or optogenetic activation of projections from primary auditory cortex (A1) to V1. The effect was manifested by decreased average visual responses yet increased responses at the preferred orientation. It was more pronounced at lower visual contrast, and was diminished by suppressing L1 activity. L1 neurons were strongly innervated by A1-V1 axons and excited by sound, while visual responses of L2/3 vasoactive intestinal peptide (VIP) neurons were suppressed by sound, both preferentially at the cell's preferred orientation. These results suggest that the cross-modality modulation is achieved primarily through L1 neuron and L2/3 VIP-cell mediated inhibitory and disinhibitory circuits. PMID:26898778
Implicit multisensory associations influence voice recognition.
von Kriegstein, Katharina; Giraud, Anne-Lise
2006-10-01
Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, while, and after participants learned to associate either sensory redundant stimuli, i.e. voices and faces, or arbitrary multimodal combinations, i.e. voices and written names, ring tones, and cell phones or brand names of these cell phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and association of voices with faces resulted in an increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, associative representations become thereafter available for unimodal perception and facilitate object recognition. These data suggest that for natural objects effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules.
Valente, Daniel L.; Braasch, Jonas; Myrbeck, Shane A.
2012-01-01
Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audiovisual environment in which participants were instructed to make auditory width judgments in dynamic bi-modal settings. The results of these psychophysical tests suggest the importance of congruent audio visual presentation to the ecological interpretation of an auditory scene. Supporting data were accumulated in five rooms of ascending volumes and varying reverberation times. Participants were given an audiovisual matching test in which they were instructed to pan the auditory width of a performing ensemble to a varying set of audio and visual cues in rooms. Results show that both auditory and visual factors affect the collected responses and that the two sensory modalities coincide in distinct interactions. The greatest differences between the panned audio stimuli given a fixed visual width were found in the physical space with the largest volume and the greatest source distance. These results suggest, in this specific instance, a predominance of auditory cues in the spatial analysis of the bi-modal scene. PMID:22280585
Lynott, Dermot; Connell, Louise
2013-06-01
We present modality exclusivity norms for 400 randomly selected noun concepts, for which participants provided perceptual strength ratings across five sensory modalities (i.e., hearing, taste, touch, smell, and vision). A comparison with previous norms showed that noun concepts are more multimodal than adjective concepts, as nouns tend to subsume multiple adjectival property concepts (e.g., perceptual experience of the concept baby involves auditory, haptic, olfactory, and visual properties, and hence leads to multimodal perceptual strength). To show the value of these norms, we then used them to test a prediction of the sound symbolism hypothesis: Analysis revealed a systematic relationship between strength of perceptual experience in the referent concept and surface word form, such that distinctive perceptual experience tends to attract distinctive lexical labels. In other words, modality-specific norms of perceptual strength are useful for exploring not just the nature of grounded concepts, but also the nature of form-meaning relationships. These norms will be of benefit to those interested in the representational nature of concepts, the roles of perceptual information in word processing and in grounded cognition more generally, and the relationship between form and meaning in language development and evolution.
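The modality-exclusivity measure behind such norms, as defined by Lynott and Connell, is the range of a word's five perceptual-strength ratings divided by their sum: a score near 1 means experience concentrated in one modality, a score near 0 means a strongly multimodal concept. A minimal sketch with invented ratings:

```python
# Modality exclusivity = range of the five perceptual-strength ratings
# divided by their sum (Lynott & Connell). Ratings below are invented.
def exclusivity(ratings):
    """ratings: mean perceptual strength for hearing, taste, touch, smell, vision."""
    return (max(ratings) - min(ratings)) / sum(ratings)

thunder = [4.8, 0.2, 0.9, 0.3, 3.1]  # dominated by hearing -> more unimodal
baby    = [3.9, 0.4, 3.5, 2.9, 4.4]  # multimodal noun -> lower exclusivity

print(round(exclusivity(thunder), 2), round(exclusivity(baby), 2))
```

On these hypothetical ratings the noun-like multimodal profile yields the lower exclusivity score, mirroring the abstract's finding that noun concepts tend to be more multimodal than adjective concepts.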
Alterations to multisensory and unisensory integration by stimulus competition
Rowland, Benjamin A.; Stanford, Terrence R.; Stein, Barry E.
2011-01-01
In environments containing sensory events at competing locations, selecting a target for orienting requires prioritization of stimulus values. Although the superior colliculus (SC) is causally linked to the stimulus selection process, the manner in which SC multisensory integration operates in a competitive stimulus environment is unknown. Here we examined how the activity of visual-auditory SC neurons is affected by placement of a competing target in the opposite hemifield, a stimulus configuration that would, in principle, promote interhemispheric competition for access to downstream motor circuitry. Competitive interactions between the targets were evident in how they altered unisensory and multisensory responses of individual neurons. Responses elicited by a cross-modal stimulus (multisensory responses) proved to be substantially more resistant to competitor-induced depression than were unisensory responses (evoked by the component modality-specific stimuli). Similarly, when a cross-modal stimulus served as the competitor, it exerted considerably more depression than did its individual component stimuli, in some cases producing more depression than predicted by their linear sum. These findings suggest that multisensory integration can help resolve competition among multiple targets by enhancing orientation to the location of cross-modal events while simultaneously suppressing orientation to events at alternate locations. PMID:21957224
Massé, Ian O; Guillemette, Sonia; Laramée, Marie-Eve; Bronchti, Gilles; Boire, Denis
2014-11-07
Anophthalmia is a condition in which the eye does not develop from the early embryonic period. Early blindness induces cross-modal plastic modifications in the brain such as auditory and haptic activations of the visual cortex and also leads to a greater solicitation of the somatosensory and auditory cortices. The visual cortex is activated by auditory stimuli in anophthalmic mice and activity is known to alter the growth pattern of the cerebral cortex. The size of the primary visual, auditory and somatosensory cortices and of the corresponding specific sensory thalamic nuclei were measured in intact and enucleated C57Bl/6J mice and in ZRDCT anophthalmic mice (ZRDCT/An) to evaluate the contribution of cross-modal activity to the growth of the cerebral cortex. In addition, the sizes of these structures were compared in intact, enucleated and anophthalmic fourth generation backcrossed hybrid C57Bl/6J×ZRDCT/An mice to parse out the effects of mouse strains and of the different visual deprivations. The visual cortex was smaller in the anophthalmic ZRDCT/An than in the intact and enucleated C57Bl/6J mice. Also the auditory cortex was larger and the somatosensory cortex smaller in the ZRDCT/An than in the intact and enucleated C57Bl/6J mice. The size differences of sensory cortices between the enucleated and anophthalmic mice were no longer present in the hybrid mice, showing specific genetic differences between C57Bl/6J and ZRDCT mice. The postnatal size increase of the visual cortex was smaller in the enucleated than in the anophthalmic and intact hybrid mice. This suggests differences in the activity of the visual cortex between enucleated and anophthalmic mice and that early in-utero spontaneous neural activity in the visual system contributes to the shaping of functional properties of cortical networks. Copyright © 2014 Elsevier B.V. All rights reserved.
Braille character discrimination in blindfolded human subjects.
Kauffman, Thomas; Théoret, Hugo; Pascual-Leone, Alvaro
2002-04-16
Visual deprivation may lead to enhanced performance in other sensory modalities. Whether this is the case in the tactile modality is controversial and may depend upon specific training and experience. We compared the performance of sighted subjects on a Braille character discrimination task to that of normal individuals blindfolded for a period of five days. Some participants in each group (blindfolded and sighted) received intensive Braille training to offset the effects of experience. Blindfolded subjects performed better than sighted subjects in the Braille discrimination task, irrespective of tactile training. For the left index finger, which had not been used in the formal Braille classes, blindfolding had no effect on performance while subjects who underwent tactile training outperformed non-stimulated participants. These results suggest that visual deprivation speeds up Braille learning and may be associated with behaviorally relevant neuroplastic changes.
Acid-sensing ion channels and transient-receptor potential ion channels in zebrafish taste buds.
Levanti, M; Randazzo, B; Viña, E; Montalbano, G; Garcia-Suarez, O; Germanà, A; Vega, J A; Abbate, F
2016-09-01
Sensory information from the environment is required for life and survival, and it is detected by specialized cells which together make up the sensory system. The fish sensory system includes specialized organs that are able to detect mechanical and chemical stimuli. In particular, taste buds are small organs located on the tongue in terrestrial vertebrates that function in the perception of taste. In fish, taste buds occur on the lips, the flanks, and the caudal (tail) fins of some species and on the barbels of others. In fish taste receptor cells, different classes of ion channels have been detected which, like in mammals, presumably participate in the detection and/or transduction of chemical gustatory signals. However, since some of these ion channels are involved in the detection of additional sensory modalities, it can be hypothesized that taste cells sense stimuli other than those specific for taste. This mini-review summarizes current knowledge on the presence of transient-receptor potential (TRP) and acid-sensing (ASIC) ion channels in the taste buds of teleosts, especially adult zebrafish. Up to now ASIC4, TRPC2, TRPA1, TRPV1 and TRPV4 ion channels have been found in the sensory cells, while ASIC2 was detected in the nerves supplying the taste buds. Copyright © 2016 Elsevier GmbH. All rights reserved.
Flexibility in Embodied Language Understanding
Willems, Roel M.; Casasanto, Daniel
2011-01-01
Do people use sensori-motor cortices to understand language? Here we review neurocognitive studies of language comprehension in healthy adults and evaluate their possible contributions to theories of language in the brain. We start by sketching the minimal predictions that an embodied theory of language understanding makes for empirical research, and then survey studies that have been offered as evidence for embodied semantic representations. We explore four debated issues: first, does activation of sensori-motor cortices during action language understanding imply that action semantics relies on mirror neurons? Second, what is the evidence that activity in sensori-motor cortices plays a functional role in understanding language? Third, to what extent do responses in perceptual and motor areas depend on the linguistic and extra-linguistic context? And finally, can embodied theories accommodate language about abstract concepts? Based on the available evidence, we conclude that sensori-motor cortices are activated during a variety of language comprehension tasks, for both concrete and abstract language. Yet, this activity depends on the context in which perception and action words are encountered. Although modality-specific cortical activity is not a sine qua non of language processing even for language about perception and action, sensori-motor regions of the brain appear to make functional contributions to the construction of meaning, and should therefore be incorporated into models of the neurocognitive architecture of language. PMID:21779264
Implantable Neural Interfaces for Sharks
2007-05-01
technology for recording and stimulating from the auditory and olfactory sensory nervous systems of the awake, swimming nurse shark, G. cirratum ... and awake animals. Finally, evidence exists that microstimulation of the olfactory system could lead to patterned behavioral responses in the ... auditory-evoked local field potentials (multi-modal sensory responses) from both anesthetized and awake animals.
Fotowat, Haleh; Harvey-Girard, Erik; Cheer, Joseph F; Krahe, Rüdiger; Maler, Leonard
2016-01-01
Serotonergic neurons of the raphe nuclei of vertebrates project to most regions of the brain and are known to significantly affect sensory processing. The subsecond dynamics of sensory modulation of serotonin levels and its relation to behavior, however, remain unknown. We used fast-scan cyclic voltammetry to measure serotonin release in the electrosensory system of weakly electric fish, Apteronotus leptorhynchus. These fish use an electric organ to generate a quasi-sinusoidal electric field for communicating with conspecifics. In response to conspecific signals, they frequently produce signal modulations called chirps. We measured changes in serotonin concentration in the hindbrain electrosensory lobe (ELL) with a resolution of 0.1 s concurrently with chirping behavior evoked by mimics of conspecific electric signals. We show that serotonin release can occur phase locked to stimulus onset as well as spontaneously in the ELL region responsible for processing these signals. Intense auditory stimuli, on the other hand, do not modulate serotonin levels in this region, suggesting modality specificity. We found no significant correlation between serotonin release and chirp production on a trial-by-trial basis. However, on average, in the trials where the fish chirped, there was a reduction in serotonin release in response to stimuli mimicking similar-sized same-sex conspecifics. We hypothesize that the serotonergic system is part of an intricate sensory-motor loop: serotonin release in a sensory area is triggered by sensory input, giving rise to motor output, which can in turn affect serotonin release at the timescale of the ongoing sensory experience and in a context-dependent manner.
Cortical plasticity as a mechanism for storing Bayesian priors in sensory perception.
Köver, Hania; Bao, Shaowen
2010-05-05
Human perception of ambiguous sensory signals is biased by prior experiences. It is not known how such prior information is encoded, retrieved and combined with sensory information by neurons. Previous authors have suggested dynamic encoding mechanisms for prior information, whereby top-down modulation of firing patterns on a trial-by-trial basis creates short-term representations of priors. Although such a mechanism may well account for perceptual bias arising in the short-term, it does not account for the often irreversible and robust changes in perception that result from long-term, developmental experience. Based on the finding that more frequently experienced stimuli gain greater representations in sensory cortices during development, we reasoned that prior information could be stored in the size of cortical sensory representations. For the case of auditory perception, we use a computational model to show that prior information about sound frequency distributions may be stored in the size of primary auditory cortex frequency representations, read-out by elevated baseline activity in all neurons and combined with sensory-evoked activity to generate a perception that conforms to Bayesian integration theory. Our results suggest an alternative neural mechanism for experience-induced long-term perceptual bias in the context of auditory perception. They make the testable prediction that the extent of such perceptual prior bias is modulated by both the degree of cortical reorganization and the magnitude of spontaneous activity in primary auditory cortex. Given that cortical over-representation of frequently experienced stimuli, as well as perceptual bias towards such stimuli is a common phenomenon across sensory modalities, our model may generalize to sensory perception, rather than being specific to auditory perception.
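The Bayesian read-out this abstract proposes can be illustrated with a toy model. Everything below is hypothetical (the frequency bins, neuron counts, and noise level are invented for illustration and are not taken from the authors' model): the prior is stored as the number of neurons per frequency bin, and the perception of an ambiguous tone is biased toward the over-represented frequency.

```python
import math

# Illustrative sketch (not the authors' code): prior information about
# sound frequency is stored as the *size* of the cortical representation,
# i.e., the number of neurons tuned to each frequency bin. That count,
# normalized, acts as the prior; a noisy sensory likelihood is combined
# with it multiplicatively, per Bayes' rule.

freqs = [1.0, 2.0, 4.0, 8.0, 16.0]        # kHz, hypothetical bins
neurons_per_bin = [10, 40, 120, 40, 10]   # over-representation of 4 kHz

prior = [n / sum(neurons_per_bin) for n in neurons_per_bin]

def likelihood(observed, f, sigma=2.0):
    """Gaussian likelihood of a noisy frequency observation."""
    return math.exp(-((observed - f) ** 2) / (2 * sigma ** 2))

def posterior(observed):
    """Normalized posterior over the frequency bins."""
    unnorm = [p * likelihood(observed, f) for p, f in zip(prior, freqs)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# A tone midway between 4 and 8 kHz is equally likely under either bin,
# so the perceived (MAP) frequency is pulled to the over-represented bin.
post = posterior(6.0)
map_freq = freqs[post.index(max(post))]   # 4.0
```

Here the bias arises purely from the size of the representation, mirroring the abstract's claim that developmental over-representation can implement a long-term perceptual prior.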
Bensafi, Moustafa; Fournel, Arnaud; Joussain, Pauline; Poncelet, Johan; Przybylski, Lauranne; Rouby, Catherine; Tillmann, Barbara
2017-06-01
Mental imagery in experts has been documented in visual arts, music and dance. Here, we examined this issue in an understudied art domain, namely, culinary arts. Previous research investigating mental imagery in experts has reported either a stronger involvement of the right hemisphere or bilateral brain activation. The first aim of our study was to examine whether culinary arts also recruit such a hemispheric pattern specifically during odor mental imagery. In a second aim, we investigated whether expertise effects observed in a given sensory domain transfer to another modality. We combined psychophysics and neurophysiology to study mental imagery in cooks, musicians and controls. We collected response times and event-related potentials (ERP) while participants mentally compared the odor of fruits, the timbre of musical instruments and the size of fruits, musical instruments and manufactured objects. Cooks were faster in imagining fruit odors, and musicians were faster in imagining the timbre of musical instruments. These differences were not observed in control participants. This expertise effect was reflected in the ERP late positive complex (LPC): only experts showed symmetric bilateral activation, specifically when cooks imagined odors and when musicians imagined timbres. In contrast, the LPC was significantly greater in the left hemisphere than in the right hemisphere for non-expert participants in all conditions. These findings suggest that sensory expertise does not involve transfer of mental imagery ability across modalities and highlight for the first time that olfactory expertise in cooks induces a balance of activations between hemispheres as does musical expertise in musicians. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T
2016-01-01
Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of different cognitive control operations in visual attention paradigms. However, the synchronization source of these signals as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention, remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Negative correlation was localized to cortical regions associated with the default mode network (DMN), and positive correlation to ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in fronto-parietal areas that was not observed in the auditory condition. During auditory attention, a positive correlation of theta and BOLD activity was observed in auditory cortex, while a negative correlation was observed in visual cortex. The data support a supramodal interaction of theta activity with DMN function, and modality-associated processes within fronto-parietal networks related to top-down, theta-related cognitive control in cross-modal visual attention. In the sensory cortices, by contrast, theta activity has opposing effects during cross-modal auditory attention.
Ågmo, Anders
2017-01-01
Intrasexual competition is an important element of natural selection in which the most attractive conspecific has a considerable reproductive advantage over the others. The conspecifics that are approached first often become the preferred mate partners, and could thus, from a biological perspective, have a reproductive advantage. This underlines the importance of the initial approach and raises the question of what induces this approach, or what makes a conspecific attractive. Identification of the sensory modalities crucial for the activation of approach is necessary for elucidating the central nervous processes involved in the activation of sexual motivation and eventually copulatory behavior. The initial approach to a potential mate depends on distant stimuli in the modalities of audition, olfaction, vision, and other undefined characteristics. This study investigated the role of the different modalities, and of their combination, in the sexual incentive value of a female rat. This study provides evidence that the presence of a single sensory stimulus of one modality (olfaction, vision, or 'others', but not audition) is sufficient to attenuate the preference for social contact with a male rat. However, a multisensory stimulus of multiple modalities is necessary to induce preference for the stimulus over social contact to the level of an intact receptive female. The initial approach behavior, therefore, seems to be induced by the combination of at least two modalities, among which olfaction is crucial. This suggests a cooperative function for the different modalities in the induction of approach behavior toward a potential mate. PMID:28306729
NASA Technical Reports Server (NTRS)
Hart, S. G.; Shively, R. J.; Vidulich, M. A.; Miller, R. C.
1986-01-01
The influence of stimulus modality and task difficulty on workload and performance was investigated. The goal was to quantify the cost (in terms of response time and experienced workload) incurred when essentially serial task components shared common elements (e.g., the response to one initiated the other) which could be accomplished in parallel. The experimental tasks were based on the Fittsberg paradigm, in which the solution to a Sternberg-type memory task determines which of two identical Fitts targets is acquired. Previous research suggested that such functionally integrated dual tasks are performed with substantially less workload and faster response times than would be predicted by summing single-task components when both are presented in the same stimulus modality (visual). The physical integration of task elements was varied (although their functional relationship remained the same) to determine whether dual-task facilitation would persist if task components were presented in different sensory modalities. Again, it was found that the cost of performing the two-stage task was considerably less than the sum of component single-task levels when both were presented visually. Less facilitation was found when task elements were presented in different sensory modalities. These results suggest the importance of distinguishing between concurrent tasks that compete for limited resources and those that beneficially share common resources when selecting the stimulus modalities for information displays.
Jacklin, Derek L; Cloke, Jacob M; Potvin, Alphonse; Garrett, Inara; Winters, Boyer D
2016-01-27
Rats, humans, and monkeys demonstrate robust crossmodal object recognition (CMOR), identifying objects across sensory modalities. We have shown that rats' performance of a spontaneous tactile-to-visual CMOR task requires functional integration of perirhinal (PRh) and posterior parietal (PPC) cortices, which seemingly provide visual and tactile object feature processing, respectively. However, research with primates has suggested that PRh is sufficient for multisensory object representation. We tested this hypothesis in rats using a modification of the CMOR task in which multimodal preexposure to the to-be-remembered objects significantly facilitates performance. In the original CMOR task, with no preexposure, reversible lesions of PRh or PPC produced patterns of impairment consistent with modality-specific contributions. Conversely, in the CMOR task with preexposure, PPC lesions had no effect, whereas PRh involvement was robust, proving necessary even for phases of the task that had not required PRh activity when rats lacked preexposure; this pattern was supported by results from c-fos imaging. We suggest that multimodal preexposure alters the circuitry responsible for object recognition, in this case obviating the need for PPC contributions and expanding PRh involvement, consistent with the polymodal nature of PRh connections and results from primates indicating a key role for PRh in multisensory object representation. These findings have significant implications for our understanding of multisensory information processing, suggesting that the nature of an individual's past experience with an object strongly determines the brain circuitry involved in representing that object's multisensory features in memory. The ability to integrate information from multiple sensory modalities is crucial to the survival of organisms living in complex environments. Appropriate responses to behaviorally relevant objects are informed by integration of multisensory object features.
We used crossmodal object recognition tasks in rats to study the neurobiological basis of multisensory object representation. When rats had no prior exposure to the to-be-remembered objects, the spontaneous ability to recognize objects across sensory modalities relied on functional interaction between multiple cortical regions. However, prior multisensory exploration of the task-relevant objects remapped cortical contributions, negating the involvement of one region and significantly expanding the role of another. This finding emphasizes the dynamic nature of cortical representation of objects in relation to past experience. Copyright © 2016 the authors 0270-6474/16/361273-17$15.00/0.
Haigh, Sarah M; Gupta, Akshat; Barb, Scott M; Glass, Summer A F; Minshew, Nancy J; Dinstein, Ilan; Heeger, David J; Eack, Shaun M; Behrmann, Marlene
2016-08-01
Autism and schizophrenia share multiple phenotypic and genotypic markers, and there is ongoing debate regarding the relationship of these two disorders. To examine whether cortical dynamics are similar across these disorders, we directly compared fMRI responses to visual, somatosensory and auditory stimuli in adults with autism (N=15), with schizophrenia (N=15), and matched controls (N=15). All participants completed a one-back letter detection task presented at fixation (to control attention) while task-irrelevant sensory stimulation was delivered to the different modalities. We focused specifically on the response amplitudes and the variability in sensory fMRI responses of the two groups, given the evidence of greater trial-to-trial variability in adults with autism. Both autism and schizophrenia individuals showed weaker signal-to-noise ratios (SNR) in sensory-evoked responses compared to controls (d>0.42), but for different reasons. For the autism group, the fMRI response amplitudes were indistinguishable from controls but were more variable trial-to-trial (d=0.47). For the schizophrenia group, response amplitudes were smaller compared to autism (d=0.44) and control groups (d=0.74), but were not significantly more variable (d<0.29). These differential group profiles suggest (1) that greater trial-to-trial variability in cortical responses may be specific to autism and is not a defining characteristic of schizophrenia, and (2) that blunted response amplitudes may be characteristic of schizophrenia. The relationship between the amplitude and the variability of cortical activity might serve as a specific signature differentiating these neurodevelopmental disorders. Identifying the neural basis of these responses and their relationship to the underlying genetic bases may substantially enlighten the understanding of both disorders. Copyright © 2016 Elsevier B.V. All rights reserved.
Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration.
Stropahl, Maren; Debener, Stefan
2017-01-01
There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensorineural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users were observed to show differences in multisensory integration, the question arises whether cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mild to moderately hearing impaired individuals (n = 18) and normal hearing controls (n = 17). Cross-modal activation of the auditory cortex by means of EEG source localization in response to human faces, and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that were numerically between the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss.
This further supports the notion that auditory deprivation evokes a reorganization of the auditory system even at early stages of hearing loss.
Neuronal Correlates of Cross-Modal Transfer in the Cerebellum and Pontine Nuclei
Campolattaro, Matthew M.; Kashef, Alireza; Lee, Inah; Freeman, John H.
2011-01-01
Cross-modal transfer occurs when learning established with a stimulus from one sensory modality facilitates subsequent learning with a new stimulus from a different sensory modality. The current study examined neuronal correlates of cross-modal transfer of Pavlovian eyeblink conditioning in rats. Neuronal activity was recorded from tetrodes within the anterior interpositus nucleus (IPN) of the cerebellum and basilar pontine nucleus (PN) during different phases of training. After stimulus pre-exposure and unpaired training sessions with a tone conditioned stimulus (CS), light CS, and periorbital stimulation unconditioned stimulus (US), rats received associative training with one of the CSs and the US (CS1-US). Training then continued on the same day with the other CS to assess cross-modal transfer (CS2-US). The final training session included associative training with both CSs on separate trials to establish stronger cross-modal transfer (CS1/CS2). Neurons in the IPN and PN showed primarily unimodal responses during pre-training sessions. Learning-related facilitation of activity correlated with the conditioned response (CR) developed in the IPN and PN during CS1-US training. Subsequent CS2-US training resulted in acquisition of CRs and learning-related neuronal activity in the IPN but substantially less learning-related activity in the PN. Additional CS1/CS2 training increased CRs and learning-related activity in the IPN and PN during CS2-US trials. The findings suggest that cross-modal neuronal plasticity in the PN is driven by excitatory feedback from the IPN to the PN. Interacting plasticity mechanisms in the IPN and PN may underlie behavioral cross-modal transfer in eyeblink conditioning. PMID:21411647
O'Donnell, Sean; Clifford, Marie R; DeLeon, Sara; Papa, Christopher; Zahedi, Nazaneen; Bulova, Susan J
2013-01-01
The mosaic brain evolution hypothesis predicts that the relative volumes of functionally distinct brain regions will vary independently and correlate with species' ecology. Paper wasp species (Hymenoptera: Vespidae, Polistinae) differ in light exposure: they construct open versus enclosed nests and one genus (Apoica) is nocturnal. We asked whether light environments were related to species differences in the size of antennal and optic processing brain tissues. Paper wasp brains have anatomically distinct peripheral and central regions that process antennal and optic sensory inputs. We measured the volumes of 4 sensory processing brain regions in paper wasp species from 13 Neotropical genera including open and enclosed nesters, and diurnal and nocturnal species. Species differed in sensory region volumes, but there was no evidence for trade-offs among sensory modalities. All sensory region volumes correlated with brain size. However, peripheral optic processing investment increased with brain size at a higher rate than peripheral antennal processing investment. Our data suggest that mosaic and concerted (size-constrained) brain evolution are not exclusive alternatives. When brain regions increase with brain size at different rates, these distinct allometries can allow for differential investment among sensory modalities. As predicted by mosaic evolution, species ecology was associated with some aspects of brain region investment. Nest architecture variation was not associated with brain investment differences, but the nocturnal genus Apoica had the largest antennal:optic volume ratio in its peripheral sensory lobes. Investment in central processing tissues was not related to nocturnality, a pattern also noted in mammals. The plasticity of neural connections in central regions may accommodate evolutionary shifts in input from the periphery with relatively minor changes in volume. © 2013 S. Karger AG, Basel.
Pharmacologic attenuation of cross-modal sensory augmentation within the chronic pain insula
Harte, Steven E.; Ichesco, Eric; Hampson, Johnson P.; Peltier, Scott J.; Schmidt-Wilcke, Tobias; Clauw, Daniel J.; Harris, Richard E.
2016-01-01
Pain can be elicited through all mammalian sensory pathways yet cross-modal sensory integration, and its relationship to clinical pain, is largely unexplored. Centralized chronic pain conditions such as fibromyalgia are often associated with symptoms of multisensory hypersensitivity. In this study, female patients with fibromyalgia demonstrated cross-modal hypersensitivity to visual and pressure stimuli compared with age- and sex-matched healthy controls. Functional magnetic resonance imaging revealed that insular activity evoked by an aversive level of visual stimulation was associated with the intensity of fibromyalgia pain. Moreover, attenuation of this insular activity by the analgesic pregabalin was accompanied by concomitant reductions in clinical pain. A multivariate classification method using support vector machines (SVM) applied to visual-evoked brain activity distinguished patients with fibromyalgia from healthy controls with 82% accuracy. A separate SVM classification of treatment effects on visual-evoked activity reliably identified when patients were administered pregabalin as compared with placebo. Both SVM analyses identified significant weights within the insular cortex during aversive visual stimulation. These data suggest that abnormal integration of multisensory and pain pathways within the insula may represent a pathophysiological mechanism in some chronic pain conditions and that insular response to aversive visual stimulation may have utility as a marker for analgesic drug development. PMID:27101425
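The patient-versus-control classification described in this abstract can be sketched in miniature. The data below are invented, and a nearest-centroid linear classifier stands in for the support vector machine so the example stays self-contained; the point is the leave-one-out scoring of group discrimination from evoked-activity features, not the authors' actual pipeline.

```python
# Hedged sketch of an MVPA-style group classification (hypothetical data;
# a nearest-centroid linear classifier substitutes for the SVM).
# Each row of X is one subject's visual-evoked activity features;
# y labels controls (0) and patients (1).

def centroid(rows):
    """Mean feature vector of a set of subjects."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def dist2(a, b):
    """Squared Euclidean distance between feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def loo_accuracy(X, y):
    """Leave-one-out cross-validated classification accuracy."""
    correct = 0
    for i in range(len(X)):
        train = [(x, lab) for j, (x, lab) in enumerate(zip(X, y)) if j != i]
        c0 = centroid([x for x, lab in train if lab == 0])
        c1 = centroid([x for x, lab in train if lab == 1])
        pred = 0 if dist2(X[i], c0) < dist2(X[i], c1) else 1
        correct += pred == y[i]
    return correct / len(X)

# Toy insular-activity features: patients (1) show higher
# visual-evoked responses than controls (0).
X = [[0.9, 1.1], [1.0, 0.8], [1.1, 0.9],
     [2.0, 2.1], [1.9, 2.2], [2.1, 1.8]]
y = [0, 0, 0, 1, 1, 1]
acc = loo_accuracy(X, y)
```

With well-separated toy clusters the held-out subjects are all classified correctly; real evoked-activity features overlap, which is why the reported accuracy is 82% rather than 100%.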
Music to My Eyes: Cross-Modal Interactions in the Perception of Emotions in Musical Performance
ERIC Educational Resources Information Center
Vines, Bradley W.; Krumhansl, Carol L.; Wanderley, Marcelo M.; Dalca, Ioana M.; Levitin, Daniel J.
2011-01-01
We investigate non-verbal communication through expressive body movement and musical sound, to reveal higher cognitive processes involved in the integration of emotion from multiple sensory modalities. Participants heard, saw, or both heard and saw recordings of a Stravinsky solo clarinet piece, performed with three distinct expressive styles:…
ERIC Educational Resources Information Center
Curtindale, Lori; Laurie-Rose, Cynthia; Bennett-Murphy, Laura; Hull, Sarah
2007-01-01
Applying optimal stimulation theory, the present study explored the development of sustained attention as a dynamic process. It examined the interaction of modality and temperament over time in children and adults. Second-grade children and college-aged adults performed auditory and visual vigilance tasks. Using the Carey temperament…
Exploring Modality Compatibility in the Response-Effect Compatibility Paradigm.
Földes, Noémi; Philipp, Andrea M; Badets, Arnaud; Koch, Iring
2017-01-01
According to ideomotor theory, action planning is based on anticipatory perceptual representations of action-effects. This aspect of action control has been investigated in studies using the response-effect compatibility (REC) paradigm, in which responses have been shown to be facilitated if ensuing perceptual effects share codes with the response based on dimensional overlap (i.e., REC). Additionally, according to the notion of ideomotor compatibility, certain response-effect (R-E) mappings will be stronger than others because some response features resemble the anticipated sensory response effects more strongly than others (e.g., since vocal responses usually produce auditory effects, an auditory stimulus should be anticipated in a stronger manner following vocal responses rather than following manual responses). Yet, systematic research on this matter is lacking. In the present study, two REC experiments aimed to explore the influence of R-E modality mappings. In Experiment 1, vocal number word responses produced visual effects on the screen (digits vs. number words; i.e., visual-symbolic vs. visual-verbal effect codes). The REC effect was only marginally larger for visual-verbal than for visual-symbolic effects. Using verbal effect codes in Experiment 2, we found that the REC effect was larger with auditory-verbal R-E mapping than with visual-verbal R-E mapping. Overall, the findings support the hypothesis of a role of R-E modality mappings in REC effects, suggesting both further evidence for ideomotor accounts as well as code-specific and modality-specific contributions to effect anticipation.
Shukla, Garima; Bhatia, Manvir; Behari, Madhuri
2005-10-01
Small fiber neuropathy is a common neurological disorder, often missed or ignored by physicians, since examination and routine nerve conduction studies are usually normal in this condition. Many methods including quantitative thermal sensory testing are currently being used for early detection of this condition, so as to enable timely investigation and treatment. This study was conducted to assess the yield of quantitative thermal sensory testing in diagnosis of small fiber neuropathy. We included patients presenting with history suggestive of positive and/or negative sensory symptoms, with normal examination findings, clinically suggestive of small fiber neuropathy, with normal or minimally abnormal routine nerve conduction studies. These patients were subjected to quantitative thermal sensory testing using a Medoc TSA-II Neurosensory analyser at two sites and for two modalities. QST data were compared with those in 120 normal healthy controls. Twenty-five patients (16 males, 9 females) with mean age 46.8 ± 16.6 years (range: 21-75 years) were included in the study. The mean duration of symptoms was 1.6 ± 1.6 years (range: 3 months to 6 years). Eighteen patients (72%) had abnormal thresholds in at least one modality. Thermal thresholds were normal in 7 of the 25 patients. This study demonstrates that quantitative thermal sensory testing is a fairly sensitive method for detection of small fiber neuropathy, especially in patients with normal routine nerve conduction studies.
When concepts lose their color: A case of object color knowledge impairment
Stasenko, Alena; Garcea, Frank E.; Dombovy, Mary; Mahon, Bradford Z.
2014-01-01
Color is important in our daily interactions with objects, and plays a role in both low- and high-level visual processing. Previous neuropsychological studies have shown that color perception and object-color knowledge can doubly dissociate, and that both can dissociate from processing of object form. We present a case study of an individual who displayed an impairment for knowledge of the typical colors of objects, with preserved color perception and color naming. Our case also presented with a pattern of, if anything, worse performance for naming living items compared to nonliving things. The findings of the experimental investigation are evaluated in light of two theories of conceptual organization in the brain: the Sensory Functional Theory and the Domain-Specific Hypothesis. The dissociations observed in this case compel a model in which sensory/motor modality and semantic domain jointly constrain the organization of object knowledge. PMID:25058612
Innate recognition of water bodies in echolocating bats.
Greif, Stefan; Siemers, Björn M
2010-11-02
In the course of their lives, most animals must find different specific habitat and microhabitat types for survival and reproduction. Yet, in vertebrates, little is known about the sensory cues that mediate habitat recognition. In free-flying bats, the echolocation of insect-sized point targets is well understood, whereas how they recognize and classify spatially extended echo targets is currently unknown. In this study, we show how echolocating bats recognize ponds or other water bodies that are crucial for foraging, drinking and orientation. With wild bats of 15 different species (seven genera from three phylogenetically distant, large bat families), we found that bats perceived any extended, echo-acoustically smooth surface to be water, even in the presence of conflicting information from other sensory modalities. In addition, naive juvenile bats that had never before encountered a water body showed spontaneous drinking responses from smooth plates. This provides the first evidence for innate recognition of a habitat cue in a mammal.
Parallel Processing Strategies of the Primate Visual System
Nassi, Jonathan J.; Callaway, Edward M.
2009-01-01
Incoming sensory information is sent to the brain along modality-specific channels corresponding to the five senses. Each of these channels further parses the incoming signals into parallel streams to provide a compact, efficient input to the brain. Ultimately, these parallel input signals must be elaborated upon and integrated within the cortex to provide a unified and coherent percept. Recent studies in the primate visual cortex have greatly contributed to our understanding of how this goal is accomplished. Multiple strategies including retinal tiling, hierarchical and parallel processing and modularity, defined spatially and by cell type-specific connectivity, are all used by the visual system to recover the rich detail of our visual surroundings. PMID:19352403
The development and use of SPIO Lycra compression bracing in children with neuromotor deficits.
Hylton, N; Allen, C
1997-01-01
The use of flexible compression bracing in persons with neuromotor deficits offers improved possibilities for stability and movement control without severely limiting joint movement options. At the Children's Therapy Center in Kent, Washington, this treatment modality has been explored with increasing application in children with moderate to severe cerebral palsy and other neuromotor deficits over the past 6 years, with good success. Significant functional improvements using Neoprene shoulder/trunk/hip Bracing led us to experiment with much lighter compression materials. The stabilizing pressure input orthosis or SPIO bracing system (developed by Cheryl Allen, parent and Chief Designer, and Nancy Hylton, PT) is custom-fitted to the stability, movement control and sensory deficit needs of a specific individual. SPIO bracing developed for a specific child has often become part of a rapidly increasing group of flexible bracing options which appear to provide an improved base of support for functional gains in balance, dynamic stability, general and specific movement control with improved postural and muscle readiness. Both deep sensory and subtle biomechanical factors may account for the functional changes observed. This article discusses the development and current use of flexible compression SPIO bracing in this area.
Williams, Michael D.; Ray, Christopher T.; Griffith, Jennifer; De l'Aune, William
2011-01-01
The promise of novel technological strategies and solutions to assist persons with visual impairments (that is, those who are blind or have low vision) is frequently discussed and held to be widely beneficial in countless applications and daily activities. One such approach involving a tactile-vision sensory substitution modality as a mechanism to…
Nascent body ego: metapsychological and neurophysiological aspects.
Lehtonen, Johannes; Partanen, Juhani; Purhonen, Maija; Valkonen-Korhonen, Minna; Kononen, Mervi; Saarikoski, Seppo; Launiala, Kari
2006-10-01
For Freud, body ego was the organizing basis of the structural theory. He defined it as a psychic projection of the body surface. Isakower's and Lewin's classical findings suggest that the body surface experiences of nursing provide the infant with sensory-affective stimulation that initiates a projection of sensory processes towards the psychic realm. During nursing, somato-sensory, gustatory and olfactory modalities merge with a primitive somatic affect of satiation, whereas auditory modality is involved more indirectly and visual contact more gradually. Repeated regularly, such nascent experiences are likely to play a part in the organization of the primitive protosymbolic mental experience. In support of this hypothesis, the authors review findings from a neurophysiological study of infants before, during and after nursing. Nursing is associated with a significant amplitude change in the newborn electroencephalogram (EEG), which wanes before the age of 3 months, and is transformed at the age of 6 months into rhythmic 3-5 Hz hedonic theta-activity. Sucking requires active physiological work, which is shown in a regular rise in heart rate. The hypothesis of a sensory-affective organization of the nascent body ego, enhanced by nursing and active sucking, seems concordant with neurophysiological phenomena related to nursing.
Manfredi, Mirella; Cohn, Neil; Kutas, Marta
2017-06-01
Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual narrative sequence (comics). We compared Onomatopoeic words, which phonetically imitate action sounds (Pow!), with Descriptive words, which describe an action (Punch!), that were (in)congruent within their sequence contexts. Across two experiments, larger N400s appeared to Anomalous Onomatopoeic or Descriptive critical panels than to their congruent counterparts, reflecting a difficulty in semantic access/retrieval. Also, Descriptive words evinced a greater late frontal positivity compared to Onomatopoeic words, suggesting that, though plausible, they may be less predictable/expected in visual narratives. Our results indicate that uni-sensory cross-modal integration of word/letter-symbol strings within visual narratives elicits ERP patterns typically observed for written sentence processing, thereby suggesting the engagement of similar domain-independent integration/interpretation mechanisms. Copyright © 2017 Elsevier Inc. All rights reserved.
[Sensory system development and the physical environment of infants born very preterm].
Kuhn, P; Zores, C; Astruc, D; Dufour, A; Casper, Ch
2011-07-01
The sensory systems develop in several sequences, each with a system-specific process and with a transnatal continuum. This development rests partly on interactions between the fetus or newborn and its physical and human environments; these interactions are key drivers of child development. Adapting the newborn's environment is crucial for its survival, well-being and development, especially after premature birth. The physical hospital environment in which immature infants are immersed differs greatly from the uterine environment from which they were extracted prematurely. There are discrepancies between their sensory expectations, formed in the antenatal period, and the atypical stimuli they encounter in their postnatal nosocomial environment. These assertions hold for all sensory modalities. Many studies have shown that very preterm infants are highly sensitive to this environment, which can affect their physiological and behavioural well-being. Moreover, it can alter their perception of important human sensory signals, particularly those coming from their mother. The long-term impacts of this environment are more difficult to identify because of the multisensory nature of these stimuli and the multifactorial origin of the neurological disorders these children may develop. Nevertheless, adaptation of the physical environment is one of the cornerstones of specific developmental care programs, such as the NIDCAP program, which has been shown to improve short- and medium-term outcomes. The architectural design, the technical equipment and health-care products used, and the strategies and organization of care are the main determinants of these children's physical environment. Recommendations for the hospital environment that integrate a newborn's developmental perspective are available; they should be applied more widely and further developed. Advances in technological equipment are also expected to allow better compliance with them. All these evolutions are fully in line with the concept of humane neonatal care. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
Alais, David; Cass, John
2010-06-23
An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism or by multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning, so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally, the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.
Eytan, Danny; Pang, Elizabeth W; Doesburg, Sam M; Nenadovic, Vera; Gavrilovic, Bojan; Laussen, Peter; Guerguerian, Anne-Marie
2016-01-01
Acute brain injury is a common cause of death and critical illness in children and young adults. Fundamental management focuses on early characterization of the extent of injury and optimizing recovery by preventing secondary damage during the days following the primary injury. Currently, bedside technology for measuring neurological function is mainly limited to using electroencephalography (EEG) for detection of seizures and encephalopathic features, and evoked potentials. We present a proof of concept study in patients with acute brain injury in the intensive care setting, featuring a bedside functional imaging set-up designed to map cortical brain activation patterns by combining high density EEG recordings, multi-modal sensory stimulation (auditory, visual, and somatosensory), and EEG source modeling. Use of source-modeling allows for examination of spatiotemporal activation patterns at the cortical region level as opposed to the traditional scalp potential maps. The application of this system in both healthy and brain-injured participants is demonstrated with modality-specific source-reconstructed cortical activation patterns. By combining stimulation obtained with different modalities, most of the cortical surface can be monitored for changes in functional activation without having to physically transport the subject to an imaging suite. The results in patients in an intensive care setting with anatomically well-defined brain lesions suggest a topographic association between their injuries and activation patterns. Moreover, we report the reproducible application of a protocol examining higher-level cortical processing with an auditory oddball paradigm involving presentation of the patient's own name. This study reports the first successful application of a bedside functional brain mapping tool in the intensive care setting. This application has the potential to provide clinicians with an additional dimension of information to manage critically-ill children and adults, and potentially patients not suited for magnetic resonance imaging technologies.
What is the link between synaesthesia and sound symbolism?
Bankieris, Kaitlyn; Simner, Julia
2015-01-01
Sound symbolism is a property of certain words which have a direct link between their phonological form and their semantic meaning. In certain instances, sound symbolism can allow non-native speakers to understand the meanings of etymologically unfamiliar foreign words, although the mechanisms driving this are not well understood. We examined whether sound symbolism might be mediated by the same types of cross-modal processes that typify synaesthetic experiences. Synaesthesia is an inherited condition in which sensory or cognitive stimuli (e.g., sounds, words) cause additional, unusual cross-modal percepts (e.g., sounds trigger colours, words trigger tastes). Synaesthesia may be an exaggeration of normal cross-modal processing, and if so, there may be a link between synaesthesia and the type of cross-modality inherent in sound symbolism. To test this we predicted that synaesthetes would have superior understanding of unfamiliar (sound symbolic) foreign words. In our study, 19 grapheme-colour synaesthetes and 57 non-synaesthete controls were presented with 400 adjectives from 10 unfamiliar languages and were asked to guess the meaning of each word in a two-alternative forced-choice task. Both groups showed superior understanding compared to chance levels, but synaesthetes significantly outperformed controls. This heightened ability suggests that sound symbolism may rely on the types of cross-modal integration that drive synaesthetes’ unusual experiences. It also suggests that synaesthesia endows or co-occurs with heightened multi-modal skills, and that this can arise in domains unrelated to the specific form of synaesthesia. PMID:25498744
Prenatal thalamic waves regulate cortical area size prior to sensory processing.
Moreno-Juan, Verónica; Filipchuk, Anton; Antón-Bolaños, Noelia; Mezzera, Cecilia; Gezelius, Henrik; Andrés, Belen; Rodríguez-Malmierca, Luis; Susín, Rafael; Schaad, Olivier; Iwasato, Takuji; Schüle, Roland; Rutlin, Michael; Nelson, Sacha; Ducret, Sebastien; Valdeolmillos, Miguel; Rijli, Filippo M; López-Bendito, Guillermina
2017-02-03
The cerebral cortex is organized into specialized sensory areas, whose initial territory is determined by intracortical molecular determinants. Yet sensory cortical area size appears to be fine-tuned during development to respond to functional adaptations. Here we demonstrate the existence of a prenatal sub-cortical mechanism that regulates cortical area size in mice. This mechanism is mediated by spontaneous thalamic calcium waves that propagate among sensory-modality thalamic nuclei up to the cortex and that provide a means of communication among sensory systems. Wave pattern alterations in one nucleus lead to changes in the pattern of the remaining ones, triggering changes in thalamic gene expression and cortical area size. Thus, silencing calcium waves in the auditory thalamus induces Rorβ upregulation in a neighbouring somatosensory nucleus, preceding the enlargement of the barrel-field. These findings reveal that embryonic thalamic calcium waves coordinate cortical sensory area patterning and plasticity prior to sensory information processing.
Kramer, Ina; Sigrist, Markus; de Nooij, Joriene C; Taniuchi, Ichiro; Jessell, Thomas M; Arber, Silvia
2006-02-02
Subpopulations of sensory neurons in the dorsal root ganglion (DRG) can be characterized on the basis of sensory modalities that convey distinct peripheral stimuli, but the molecular mechanisms that underlie sensory neuronal diversification remain unclear. Here, we have used genetic manipulations in the mouse embryo to examine how Runx transcription factor signaling controls the acquisition of distinct DRG neuronal subtype identities. Runx3 acts to diversify an Ngn1-independent neuronal cohort by promoting the differentiation of proprioceptive sensory neurons through erosion of TrkB expression in prospective TrkC+ sensory neurons. In contrast, Runx1 controls neuronal diversification within Ngn1-dependent TrkA+ neurons by repression of neuropeptide CGRP expression and controlling the fine pattern of laminar termination in the dorsal spinal cord. Together, our findings suggest that Runx transcription factor signaling plays a key role in sensory neuron diversification.
A measure for assessing the effects of audiovisual speech integration.
Altieri, Nicholas; Townsend, James T; Wenger, Michael J
2014-06-01
We propose a measure of audiovisual speech integration that takes into account both accuracy and response times. This measure should prove beneficial for researchers investigating multisensory speech recognition, particularly as it relates to normal-hearing and aging populations. As an example, age-related sensory decline influences both the rate at which one processes information and the ability to utilize cues from different sensory modalities. Our function assesses integration when both auditory and visual information are available by comparing performance on these audiovisual trials with theoretical predictions for performance under the assumptions of parallel, independent self-terminating processing of single-modality inputs. We provide example data from an audiovisual identification experiment and discuss applications for measuring audiovisual integration skills across the life span.
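The independence benchmark this abstract alludes to can be illustrated with a minimal accuracy-only sketch: under parallel, independent processing of the two channels, an audiovisual trial succeeds if either single-modality channel alone would succeed. This is a generic race-model computation on hypothetical accuracy values, not the authors' published measure (which additionally incorporates response times); the function names are illustrative.

```python
def race_prediction(p_a: float, p_v: float) -> float:
    """Predicted audiovisual accuracy under parallel, independent
    channels: the trial succeeds unless both channels fail."""
    return p_a + p_v - p_a * p_v


def integration_gain(p_av_observed: float, p_a: float, p_v: float) -> float:
    """Observed audiovisual accuracy minus the independence benchmark.
    Positive values suggest integration beyond independent channels;
    negative values suggest a cost."""
    return p_av_observed - race_prediction(p_a, p_v)


if __name__ == "__main__":
    # Hypothetical unimodal accuracies for one listener
    p_a, p_v = 0.60, 0.70
    print(round(race_prediction(p_a, p_v), 2))        # 0.88
    print(round(integration_gain(0.95, p_a, p_v), 2)) # 0.07
```

A measured audiovisual accuracy above the prediction (here 0.95 vs. 0.88) would indicate a benefit beyond what independent unimodal processing can explain.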
Short-term memory stores organized by information domain.
Noyce, Abigail L; Cestero, Nishmar; Shinn-Cunningham, Barbara G; Somers, David C
2016-04-01
Vision and audition have complementary affinities, with vision excelling in spatial resolution and audition excelling in temporal resolution. Here, we investigated the relationships among the visual and auditory modalities and spatial and temporal short-term memory (STM) using change detection tasks. We created short sequences of visual or auditory items, such that each item within a sequence arose at a unique spatial location at a unique time. On each trial, two successive sequences were presented; subjects attended to either space (the sequence of locations) or time (the sequence of inter-item intervals) and reported whether the patterns of locations or intervals were identical. Each subject completed blocks of unimodal trials (both sequences presented in the same modality) and crossmodal trials (Sequence 1 visual, Sequence 2 auditory, or vice versa) for both spatial and temporal tasks. We found a strong interaction between modality and task: Spatial performance was best on unimodal visual trials, whereas temporal performance was best on unimodal auditory trials. The order of modalities on crossmodal trials also mattered, suggesting that perceptual fidelity at encoding is critical to STM. Critically, no cost was attributable to crossmodal comparison: In both tasks, performance on crossmodal trials was as good as or better than on the weaker unimodal trials. STM representations of space and time can guide change detection in either the visual or the auditory modality, suggesting that the temporal or spatial organization of STM may supersede sensory-specific organization.
No evidence for visual context-dependency of olfactory learning in Drosophila
Yarali, Ayse; Mayerle, Moritz; Nawroth, Christian; Gerber, Bertram
2008-08-01
How is behaviour organised across sensory modalities? Specifically, we ask, concerning the fruit fly Drosophila melanogaster, how visual context affects olfactory learning and recall, and whether information about visual context is integrated into olfactory memory. We find that changing visual context between training and test does not deteriorate olfactory memory scores, suggesting that these olfactory memories can drive behaviour despite a mismatch of visual context between training and test. Rather, both the establishment and the recall of olfactory memory are generally facilitated by light. In a follow-up experiment, we find no evidence for learning about combinations of odours and visual context as predictors for reinforcement, even after explicit training in a so-called biconditional discrimination task. Thus, a ‘true’ interaction between visual and olfactory modalities is not evident; instead, light seems to influence olfactory learning and recall unspecifically, for example by altering motor activity, alertness or olfactory acuity.
Krahe, Rüdiger; Maler, Leonard
2016-01-01
Abstract Serotonergic neurons of the raphe nuclei of vertebrates project to most regions of the brain and are known to significantly affect sensory processing. The subsecond dynamics of sensory modulation of serotonin levels and its relation to behavior, however, remain unknown. We used fast-scan cyclic voltammetry to measure serotonin release in the electrosensory system of weakly electric fish, Apteronotus leptorhynchus. These fish use an electric organ to generate a quasi-sinusoidal electric field for communicating with conspecifics. In response to conspecific signals, they frequently produce signal modulations called chirps. We measured changes in serotonin concentration in the hindbrain electrosensory lobe (ELL) with a resolution of 0.1 s concurrently with chirping behavior evoked by mimics of conspecific electric signals. We show that serotonin release can occur phase locked to stimulus onset as well as spontaneously in the ELL region responsible for processing these signals. Intense auditory stimuli, on the other hand, do not modulate serotonin levels in this region, suggesting modality specificity. We found no significant correlation between serotonin release and chirp production on a trial-by-trial basis. However, on average, in the trials where the fish chirped, there was a reduction in serotonin release in response to stimuli mimicking similar-sized same-sex conspecifics. We hypothesize that the serotonergic system is part of an intricate sensory–motor loop: serotonin release in a sensory area is triggered by sensory input, giving rise to motor output, which can in turn affect serotonin release at the timescale of the ongoing sensory experience and in a context-dependent manner. PMID:27844054
The importance of sensory integration processes for action cascading
Gohil, Krutika; Stock, Ann-Kathrin; Beste, Christian
2015-01-01
Dual tasking or action cascading is essential in everyday life and often investigated using tasks presenting stimuli in different sensory modalities. Findings obtained with multimodal tasks are often broadly generalized, but until today, it has remained unclear whether multimodal integration affects performance in action cascading or the underlying neurophysiology. To bridge this gap, we asked healthy young adults to complete a stop-change paradigm which presented different stimuli in either one or two modalities while recording behavioral and neurophysiological data. Bimodal stimulus presentation prolonged response times and affected bottom-up and top-down guided attentional processes as reflected by the P1 and N1, respectively. However, the most important effect was the modulation of response selection processes reflected by the P3 suggesting that a potentially different way of forming task goals operates during action cascading in bimodal vs. unimodal tasks. When two modalities are involved, separate task goals need to be formed while a conjoint task goal may be generated when all stimuli are presented in the same modality. On a systems level, these processes seem to be related to the modulation of activity in fronto-polar regions (BA10) as well as Broca's area (BA44). PMID:25820681
Implicit Multisensory Associations Influence Voice Recognition
von Kriegstein, Katharina; Giraud, Anne-Lise
2006-01-01
Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, while, and after participants learned to associate either sensory redundant stimuli, i.e. voices and faces, or arbitrary multimodal combinations, i.e. voices and written names, ring tones, and cell phones or brand names of these cell phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and association of voices with faces resulted in an increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, associative representations become thereafter available for unimodal perception and facilitate object recognition. These data suggest that for natural objects effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules. PMID:17002519
Flight motor networks modulate primary olfactory processing in the moth Manduca sexta.
Chapman, Phillip D; Burkland, Rex; Bradley, Samual P; Houot, Benjamin; Bullman, Victoria; Dacks, Andrew M; Daly, Kevin C
2018-05-22
Nervous systems must distinguish sensory signals derived from an animal's own movements (reafference) from environmentally derived sources (exafference). To accomplish this, motor networks producing reafference transmit motor information, via a corollary discharge circuit (CDC), to affected sensory networks, modulating sensory function during behavior. While CDCs have been described in most sensory modalities, none have been observed projecting to an olfactory pathway. In moths, two mesothoracic to deutocerebral histaminergic neurons (MDHns) project from flight sensorimotor centers in the mesothoracic neuromere to the antennal lobe (AL), where they provide the sole source of histamine (HA), but whether they represent a CDC is unknown. We demonstrate that MDHn spiking activity is positively correlated with wing-motor output and increased before bouts of motor activity, suggesting that MDHns communicate global locomotor state, rather than providing a precisely timed motor copy. Within the AL, HA application sharpened entrainment of projection neuron responses to odor stimuli embedded within simulated wing-beat-induced flows, whereas MDHn axotomy or AL HA receptor (HA-r) blockade reduced entrainment. This finding is consistent with higher-order CDCs, as the MDHns enhanced rather than filtered entrainment of AL projection neurons. Finally, HA-r blockade increased odor detection and discrimination thresholds in behavior assays. These results establish MDHns as a CDC that modulates AL temporal resolution, enhancing odor-guided behavior. MDHns thus appear to represent a higher-order CDC to an insect olfactory pathway; this CDC's unique nature highlights the importance of motor-to-sensory signaling as a context-specific mechanism that fine-tunes sensory function. Copyright © 2018 the Author(s). Published by PNAS.
Behavioral, Modeling, and Electrophysiological Evidence for Supramodality in Human Metacognition.
Faivre, Nathan; Filevich, Elisa; Solovey, Guillermo; Kühn, Simone; Blanke, Olaf
2018-01-10
Human metacognition, or the capacity to introspect on one's own mental states, has been mostly characterized through confidence reports in visual tasks. A pressing question is to what extent results from visual studies generalize to other domains. Answering this question allows determining whether metacognition operates through shared, supramodal mechanisms or through idiosyncratic, modality-specific mechanisms. Here, we report three new lines of evidence for decisional and postdecisional mechanisms arguing for the supramodality of metacognition. First, metacognitive efficiency correlated among auditory, tactile, visual, and audiovisual tasks. Second, confidence in an audiovisual task was best modeled using supramodal formats based on integrated representations of auditory and visual signals. Third, confidence in correct responses involved similar electrophysiological markers for visual and audiovisual tasks that are associated with motor preparation preceding the perceptual judgment. We conclude that the supramodality of metacognition relies on supramodal confidence estimates and decisional signals that are shared across sensory modalities. SIGNIFICANCE STATEMENT Metacognitive monitoring is the capacity to access, report, and regulate one's own mental states. In perception, this allows rating our confidence in what we have seen, heard, or touched. Although metacognitive monitoring can operate on different cognitive domains, it remains unknown whether it involves a single supramodal mechanism common to multiple cognitive domains or modality-specific mechanisms idiosyncratic to each domain. Here, we bring evidence in favor of the supramodality hypothesis by showing that participants with high metacognitive performance in one modality are likely to perform well in other modalities. Based on computational modeling and electrophysiology, we propose that supramodality can be explained by the existence of supramodal confidence estimates and by the influence of decisional cues on confidence estimates. Copyright © 2018 the authors 0270-6474/18/380263-15$15.00/0.
Electrophysiological CNS-processes related to associative learning in humans.
Christoffersen, Gert R J; Schachtman, Todd R
2016-01-01
The neurophysiology of human associative memory has been studied with electroencephalographic techniques since the 1930s. This research has revealed that different types of electrophysiological processes in the human brain can be modified by conditioning: sensory evoked potentials, sensory induced gamma-band activity, periods of frequency-specific waves (alpha and beta waves, the sensorimotor rhythm and the mu-rhythm) and slow cortical potentials. Conditioning of these processes has been studied in experiments that either use operant conditioning or repeated contingent pairings of conditioned and unconditioned stimuli (classical conditioning). In operant conditioning, the appearance of a specific brain process is paired with an external stimulus (neurofeedback) and the feedback enables subjects to obtain varying degrees of control of the CNS-process. Such acquired self-regulation of brain activity has found practical uses, for instance in the amelioration of epileptic seizures, Autism Spectrum Disorders (ASD) and Attention Deficit Hyperactivity Disorder (ADHD). It has also provided communicative means of assistance for tetraplegic patients through the use of brain-computer interfaces. Both extra- and intracortically recorded signals have been coupled with contingent external feedback. It is the aim of this review to summarize essential results on all types of electromagnetic brain processes that have been modified by classical or operant conditioning. The results are organized according to type of conditioned EEG-process, type of conditioning, and sensory modalities of the conditioning stimuli. Copyright © 2015 Elsevier B.V. All rights reserved.
Sensation, mechanoreceptor, and nerve fiber function after nerve regeneration.
Krarup, Christian; Rosén, Birgitta; Boeckstyns, Michel; Ibsen Sørensen, Allan; Lundborg, Göran; Moldovan, Mihai; Archibald, Simon J
2017-12-01
Sensation is essential for recovery after peripheral nerve injury. However, the relationship between sensory modalities and function of regenerated fibers is uncertain. We have investigated the relationships between touch threshold, tactile gnosis, and mechanoreceptor and sensory fiber function after nerve regeneration. Twenty-one median or ulnar nerve lesions were repaired by a collagen nerve conduit or direct suture. Quantitative sensory hand function and sensory conduction studies by near-nerve technique, including tactile stimulation of mechanoreceptors, were followed for 2 years, and results were compared to noninjured hands. With both repair methods, touch thresholds at the fingertips recovered to 81 ± 3% of control and tactile gnosis to only 20 ± 4% (p < 0.001). The sensory nerve action potentials (SNAPs) remained dispersed; areas recovered to 23 ± 2% and amplitudes to only 7 ± 1% (p < 0.001). The areas of SNAPs after tactile stimulation recovered to 61 ± 11% and remained slowed. Touch sensation correlated with SNAP areas (p < 0.005) and was negatively related to the prolongation of tactile latencies (p < 0.01); tactile gnosis was not related to electrophysiological parameters. The recovered function of regenerated peripheral nerve fibers and reinnervated mechanoreceptors may differentially influence recovery of sensory modalities. Touch was affected by the number and function of regenerated fibers and mechanoreceptors. In contrast, tactile gnosis depends on the input and plasticity of the central nervous system (CNS), which may explain the absence of a direct relation between electrophysiological parameters and poor recovery. Dispersed maturation of sensory nerve fibers with desynchronized inputs to the CNS also contributes to the poor recovery of tactile gnosis. Ann Neurol 2017;82:940-950. © 2017 American Neurological Association.
Nagy, J I; Lynn, B D; Senecal, J M M; Stecina, K
2018-05-07
Electrical coupling mediated by connexin36-containing gap junctions that form electrical synapses is known to be prevalent in the central nervous system, but such coupling was long ago reported also to occur between cutaneous sensory fibers. Here, we provide evidence supporting the capability of primary afferent fibers to engage in electrical coupling. In transgenic mice with enhanced green fluorescent protein (eGFP) serving as a reporter for connexin36 expression, immunofluorescence labeling of eGFP was found in subpopulations of neurons in lumbar dorsal root and trigeminal sensory ganglia, and in fibers within peripheral nerves and tissues. Immunolabeling of connexin36 was robust in the sciatic nerve, weaker in sensory ganglia than in peripheral nerve, and absent in these tissues from Cx36 null mice. Connexin36 mRNA was detected in ganglia from wild-type mice, but not in those from Cx36 null mice. Labeling of eGFP was localized within a subpopulation of ganglion cells containing substance P and calcitonin gene-related peptide, and in peripheral fibers containing these peptides. Expression of eGFP was also found in various proportions of sensory ganglion neurons containing transient receptor potential (TRP) channels, including TRPV1 and TRPM8. Ganglion cells labeled for isolectin B4 and tyrosine hydroxylase displayed very little co-localization with eGFP. Our results suggest that previously observed electrical coupling between peripheral sensory fibers occurs via electrical synapses formed by Cx36-containing gap junctions, and that some degree of selectivity in the extent of electrical coupling may occur between fibers belonging to subpopulations of sensory neurons identified according to their sensory modality responsiveness. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
3D hierarchical spatial representation and memory of multimodal sensory data
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Dow, Paul A.; Huber, David J.
2009-04-01
This paper describes an efficient method and system for representing, processing and understanding multimodal sensory data. More specifically, it describes a computational method and system for how to process and remember multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and memory is based on a biologically-inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) a simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinates, body-centered coordinates, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, location of auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of it from a spatial perspective (e.g., where is the sensory information coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory, etc.) can map onto the hierarchy at different levels. 
When controlling various machine/robot degrees of freedom, the desired movements and actions can be computed from these different levels in the hierarchy. The most basic embodiment of this machine could be a pan-tilt camera system, an array of microphones, a machine with an arm/hand-like structure, and/or a robot with some or all of the above capabilities. We describe the approach and system and present preliminary results on a real robotic platform.
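In the simplest case, the conversions between the hierarchy's reference frames described above (head-centered to body-centered, etc.) reduce to rigid transforms. A minimal sketch, with an illustrative rotation and offset that are assumptions for the example, not values from the paper:

```python
import numpy as np

def head_to_body(p_head, head_rotation, head_offset):
    """Convert a 3D point from head-centered to body-centered
    coordinates via a rigid transform (rotation + translation).

    head_rotation: 3x3 rotation matrix of the head relative to the body.
    head_offset:   position of the head origin in body coordinates.
    """
    return head_rotation @ p_head + head_offset

# Example: head turned 90 degrees left (about the vertical z-axis),
# head origin 0.3 m above the body origin.
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0,              0,             1]])
offset = np.array([0.0, 0.0, 0.3])

p_head = np.array([1.0, 0.0, 0.0])   # target 1 m straight ahead of the head
p_body = head_to_body(p_head, Rz, offset)
```

Chaining such transforms (eye-to-head, head-to-body, body-to-world) yields the multi-level hierarchy onto which heterogeneous sensory inputs can be mapped.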
Haptic guidance of overt visual attention.
List, Alexandra; Iordanescu, Lucica; Grabowecky, Marcia; Suzuki, Satoru
2014-11-01
Research has shown that information accessed from one sensory modality can influence perceptual and attentional processes in another modality. Here, we demonstrated a novel crossmodal influence of haptic-shape information on visual attention. Participants visually searched for a target object (e.g., an orange) presented among distractor objects, fixating the target as quickly as possible. While searching for the target, participants held (never viewed and out of sight) an item of a specific shape in their hands. In two experiments, we demonstrated that the time for the eyes to reach a target, a measure of overt visual attention, was reduced when the shape of the held item (e.g., a sphere) was consistent with the shape of the visual target (e.g., an orange), relative to when the held shape was unrelated to the target (e.g., a hockey puck) or when no shape was held. This haptic-to-visual facilitation occurred despite the fact that the held shapes were not predictive of the visual targets' shapes, suggesting that the crossmodal influence occurred automatically, reflecting shape-specific haptic guidance of overt visual attention.
Neuropsychology of prefrontal cortex
Siddiqui, Shazia Veqar; Chatterjee, Ushri; Kumar, Devvarta; Siddiqui, Aleem; Goyal, Nishant
2008-01-01
The history of clinical frontal lobe study is long and rich, providing valuable insights into the neuropsychologic determinants of the functions of the prefrontal cortex (PFC). The PFC is often classified as multimodal association cortex, as highly processed information from various sensory modalities is integrated here in a precise fashion to form the physiologic constructs of memory, perception, and diverse cognitive processes. Human neuropsychologic studies also support the notion of different functional operations within the PFC. The specification of the component 'executive' processes and their localization to particular regions of the PFC have been implicated in a wide variety of psychiatric disorders. PMID:19742233
Vision-mediated exploitation of a novel host plant by a tephritid fruit fly.
Piñero, Jaime C; Souder, Steven K; Vargas, Roger I
2017-01-01
Shortly after its introduction into the Hawaiian Islands around 1895, the polyphagous, invasive fruit fly Bactrocera (Zeugodacus) cucurbitae (Coquillett) (Diptera: Tephritidae) was provided the opportunity to expand its host range to include a novel host, papaya (Carica papaya). It has been documented that female B. cucurbitae rely strongly on vision to locate host fruit. Given that the papaya fruit is visually conspicuous in the papaya agro-ecosystem, we hypothesized that female B. cucurbitae used vision as the main sensory modality to find and exploit the novel host fruit. Using a comparative approach that involved a series of studies under natural and semi-natural conditions in Hawaii, we assessed the ability of female B. cucurbitae to locate and oviposit in papaya fruit using the sensory modalities of olfaction and vision alone and also in combination. The results of these studies demonstrate that, under a variety of conditions, volatiles emitted by the novel host do not positively stimulate the behavior of the herbivore. Rather, vision seems to be the main mechanism driving the exploitation of the novel host. Volatiles emitted by the novel host papaya fruit did not contribute in any way to the visual response of females. Our findings highlight the remarkable role of vision in the host-location process of B. cucurbitae and provide empirical evidence for this sensory modality as a potential mechanism involved in host range expansion.
Stochastic correlative firing for figure-ground segregation.
Chen, Zhe
2005-03-01
Segregation of sensory inputs into separate objects is a central aspect of perception and arises in all sensory modalities. The figure-ground segregation problem requires identifying an object of interest in a complex scene, in many cases given binaural auditory or binocular visual observations. The computations required for visual and auditory figure-ground segregation share many common features and can be cast within a unified framework. Sensory perception can be viewed as a problem of optimizing information transmission. Here we suggest a stochastic correlative firing mechanism and an associative learning rule for figure-ground segregation in several classic sensory perception tasks, including the cocktail party problem in binaural hearing, binocular fusion of stereo images, and Gestalt grouping in motion perception.
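The associative learning rule proposed above is not specified in this abstract; a generic correlation-based (Hebbian) update illustrates the underlying idea that inputs firing together with a unit become bound into one "figure". All variables and parameters below are illustrative assumptions, not the author's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_update(w, pre, post, eta=1e-4):
    """Correlation-based (Hebbian) update: the weight from each input
    grows in proportion to its co-activity with the output unit."""
    return w + eta * np.outer(post, pre)

# Inputs 0 and 1 are driven by a shared source and fire together;
# input 2 fires independently (e.g., background vs. figure).
w = np.zeros((1, 3))
for _ in range(5000):
    s = rng.random()                       # shared source
    pre = np.array([s, s, rng.random()])   # inputs 0-1 correlated, 2 independent
    post = np.array([pre[:2].mean()])      # unit driven by the correlated pair
    w = hebbian_update(w, pre, post)

# After training, the weights onto the correlated inputs exceed the
# weight onto the independent input, grouping the pair as one object.
```

The same correlation principle generalizes to binaural signals (cocktail party) and binocular disparities (stereo fusion), where co-varying channels belong to the same source.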
Meijer, Guido T; Montijn, Jorrit S; Pennartz, Cyriel M A; Lansink, Carien S
2017-09-06
The sensory neocortex is a highly connected associative network that integrates information from multiple senses, even at the level of the primary sensory areas. Although a growing body of empirical evidence supports this view, the neural mechanisms of cross-modal integration in primary sensory areas, such as the primary visual cortex (V1), are still largely unknown. Using two-photon calcium imaging in awake mice, we show that the encoding of audiovisual stimuli in V1 neuronal populations is highly dependent on the features of the stimulus constituents. When the visual and auditory stimulus features were modulated at the same rate (i.e., temporally congruent), neurons responded with either an enhancement or suppression compared with unisensory visual stimuli, and their prevalence was balanced. Temporally incongruent tones or white-noise bursts included in audiovisual stimulus pairs resulted in predominant response suppression across the neuronal population. Visual contrast did not influence multisensory processing when the audiovisual stimulus pairs were congruent; however, when white-noise bursts were used, neurons generally showed response suppression when the visual stimulus contrast was high whereas this effect was absent when the visual contrast was low. Furthermore, a small fraction of V1 neurons, predominantly those located near the lateral border of V1, responded to sound alone. These results show that V1 is involved in the encoding of cross-modal interactions in a more versatile way than previously thought. SIGNIFICANCE STATEMENT The neural substrate of cross-modal integration is not limited to specialized cortical association areas but extends to primary sensory areas. Using two-photon imaging of large groups of neurons, we show that multisensory modulation of V1 populations is strongly determined by the individual and shared features of cross-modal stimulus constituents, such as contrast, frequency, congruency, and temporal structure. 
Congruent audiovisual stimulation resulted in a balanced pattern of response enhancement and suppression compared with unisensory visual stimuli, whereas incongruent or dissimilar stimuli at full contrast gave rise to a population dominated by response-suppressing neurons. Our results indicate that V1 dynamically integrates nonvisual sources of information while still attributing most of its resources to coding visual information. Copyright © 2017 the authors 0270-6474/17/378783-14$15.00/0.
Cross-modal illusory conjunctions between vision and touch.
Cinel, Caterina; Humphreys, Glyn W; Poli, Riccardo
2002-10-01
Cross-modal illusory conjunctions (ICs) happen when, under conditions of divided attention, felt textures are reported as being seen or vice versa. Experiments provided evidence for these errors, demonstrated that ICs are more frequent if tactile and visual stimuli are in the same hemispace, and showed that ICs still occur under forced-choice conditions but do not occur when attention to the felt texture is increased. Cross-modal ICs were also found in a patient with parietal damage even with relatively long presentations of visual stimuli. The data are consistent with there being cross-modal integration of sensory information, with the modality of origin sometimes being misattributed when attention is constrained. The empirical conclusions from the experiments are supported by formal models.
Cortical plasticity and preserved function in early blindness
Renier, Laurent; De Volder, Anne G.; Rauschecker, Josef P.
2013-01-01
The “neural Darwinism” theory predicts that when one sensory modality is lacking, as in congenital blindness, the target structures are taken over by the afferent inputs from other senses that will promote and control their functional maturation (Edelman, 1993). This view receives support from both cross-modal plasticity experiments in animal models and functional imaging studies in man, which are presented here. PMID:23453908
Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias.
Invitto, Sara; Calcagnì, Antonio; Mignozzi, Arianna; Scardino, Rosanna; Piraino, Giulia; Turchi, Daniele; De Feudis, Irio; Brunetti, Antonio; Bevilacqua, Vitoantonio; de Tommaso, Marina
2017-01-01
Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition and how potential modulation of this processing induced by music could be influenced by the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 Event-Related Potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings of the study were that musicians' behavioral responses and N170 were more affected by the emotional value of the music administered in the emotional go/no-go task, and that this bias was also apparent in responses to the non-target emotional face. This suggests that emotional information, coming from multiple sensory channels, activates a crossmodal integration process that depends upon the stimuli's emotional salience and the listener's appraisal.
David, Nicole; Skoruppa, Stefan; Gulberti, Alessandro
2016-01-01
The sense of agency describes the ability to experience oneself as the agent of one's own actions. Previous studies of the sense of agency manipulated the predicted sensory feedback related either to movement execution or to the movement’s outcome, for example by delaying the movement of a virtual hand or the onset of a tone that resulted from a button press. Such temporal sensorimotor discrepancies reduce the sense of agency. It remains unclear whether movement-related feedback is processed differently than outcome-related feedback in terms of agency experience, especially if these types of feedback differ with respect to sensory modality. We employed a mixed-reality setup, in which participants tracked their finger movements by means of a virtual hand. They performed a single tap, which elicited a sound. The temporal contingency between the participants’ finger movements and (i) the movement of the virtual hand or (ii) the expected auditory outcome was systematically varied. In a visual control experiment, the tap elicited a visual outcome. For each feedback type and participant, changes in the sense of agency were quantified using a forced-choice paradigm and the Method of Constant Stimuli. Participants were more sensitive to delays of outcome than to delays of movement execution. This effect was very similar for visual or auditory outcome delays. Our results indicate different contributions of movement- versus outcome-related sensory feedback to the sense of agency, irrespective of the modality of the outcome. We propose that this differential sensitivity reflects the behavioral importance of assessing authorship of the outcome of an action. PMID:27536948
Higher order visual input to the mushroom bodies in the bee, Bombus impatiens.
Paulk, Angelique C; Gronenberg, Wulfila
2008-11-01
To produce appropriate behaviors based on biologically relevant associations, sensory pathways conveying different modalities are integrated by higher-order central brain structures, such as insect mushroom bodies. To address this function of sensory integration, we characterized the structure and response of optic lobe (OL) neurons projecting to the calyces of the mushroom bodies in bees. Bees are well known for their visual learning and memory capabilities and their brains possess major direct visual input from the optic lobes to the mushroom bodies. To functionally characterize these visual inputs to the mushroom bodies, we recorded intracellularly from neurons in bumblebees (Apidae: Bombus impatiens) and a single neuron in a honeybee (Apidae: Apis mellifera) while presenting color and motion stimuli. All of the mushroom body input neurons were color sensitive while a subset was motion sensitive. Additionally, most of the mushroom body input neurons would respond to the first, but not to subsequent, presentations of repeated stimuli. In general, the medulla or lobula neurons projecting to the calyx signaled specific chromatic, temporal, and motion features of the visual world to the mushroom bodies, which included sensory information required for the biologically relevant associations bees form during foraging tasks.
Dazeley, Paul; Houston-Price, Carmel; Hill, Claire
2012-09-01
Commercial interventions seeking to promote fruit and vegetable consumption by encouraging preschool- and school-aged children to engage with foods with 'all their senses' are increasing in number. We review the efficacy of such sensory interaction programmes and consider the components of these that are likely to encourage food acceptance. Repeated exposure to a food's flavour has robust empirical support in terms of its potential to increase food intake. However, children are naturally reluctant to taste new or disliked foods, and parents often struggle to provide sufficient taste opportunities for these foods to be adopted into the child's diet. We therefore explore whether prior exposure to a new food's non-taste sensory properties, such as its smell, sound, appearance or texture, might facilitate the food's introduction into the child's diet, by providing the child with an opportunity to become partially familiar with the food without invoking the distress associated with tasting it. We review the literature pertaining to the benefits associated with exposure to foods through each of the five sensory modalities in turn. We conclude by calling for further research into the potential for familiarisation with the visual, olfactory, somaesthetic and auditory properties of foods to enhance children's willingness to consume a variety of fruits and vegetables.
Memorable Audiovisual Narratives Synchronize Sensory and Supramodal Neural Responses
2016-01-01
Abstract Our brains integrate information across sensory modalities to generate perceptual experiences and form memories. However, it is difficult to determine the conditions under which multisensory stimulation will benefit or hinder the retrieval of everyday experiences. We hypothesized that the determining factor is the reliability of information processing during stimulus presentation, which can be measured through intersubject correlation of stimulus-evoked activity. We therefore presented biographical auditory narratives and visual animations to 72 human subjects visually, auditorily, or combined, while neural activity was recorded using electroencephalography. Memory for the narrated information, contained in the auditory stream, was tested 3 weeks later. While the visual stimulus alone led to no meaningful retrieval, this related stimulus improved memory when it was combined with the story, even when it was temporally incongruent with the audio. Further, individuals with better subsequent memory elicited neural responses during encoding that were more correlated with their peers. Surprisingly, portions of this predictive synchronized activity were present regardless of the sensory modality of the stimulus. These data suggest that the strength of sensory and supramodal activity is predictive of memory performance after 3 weeks, and that neural synchrony may explain the mnemonic benefit of the functionally uninformative visual context observed for these real-world stimuli. PMID:27844062
A modular theory of multisensory integration for motor control
Tagliabue, Michele; McIntyre, Joseph
2014-01-01
To control targeted movements, such as reaching to grasp an object or hammering a nail, the brain can use diverse sources of sensory information, such as vision and proprioception. Although a variety of studies have shown that sensory signals are optimally combined according to principles of maximum likelihood, increasing evidence indicates that the CNS does not compute a single, optimal estimation of the target's position to be compared with a single optimal estimation of the hand. Rather, it employs a more modular approach in which the overall behavior is built by computing multiple concurrent comparisons carried out simultaneously in a number of different reference frames. The results of these individual comparisons are then optimally combined in order to drive the hand. In this article we examine at a computational level two formulations of concurrent models for sensory integration and compare this to the more conventional model of converging multi-sensory signals. Through a review of published studies, both our own and those performed by others, we produce evidence favoring the concurrent formulations. We then examine in detail the effects of additive signal noise as information flows through the sensorimotor system. By taking into account the noise added by sensorimotor transformations, one can explain why the CNS may shift its reliance on one sensory modality toward a greater reliance on another and investigate under what conditions those sensory transformations occur. Careful consideration of how transformed signals will co-vary with the original source also provides insight into how the CNS chooses one sensory modality over another. These concepts can be used to explain why the CNS might, for instance, create a visual representation of a task that is otherwise limited to the kinesthetic domain (e.g., pointing with one hand to a finger on the other) and why the CNS might choose to recode sensory information in an external reference frame. PMID:24550816
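The maximum-likelihood combination referred to above has a standard closed form for independent Gaussian cues: each cue is weighted by its inverse variance (its reliability), and the fused estimate is more precise than either cue alone. A minimal sketch with illustrative numbers:

```python
import numpy as np

def ml_combine(estimates, variances):
    """Maximum-likelihood fusion of independent Gaussian cues:
    each cue is weighted by its inverse variance (its reliability)."""
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    fused = np.dot(weights, estimates)
    fused_variance = 1.0 / reliabilities.sum()
    return fused, fused_variance

# Vision places the target at 10.0 (variance 1.0); proprioception says
# 12.0 but is noisier (variance 4.0). The fused estimate lies closer to
# the more reliable cue, and its variance is smaller than either cue's.
fused, var = ml_combine([10.0, 12.0], [1.0, 4.0])
# fused = 10.4, var = 0.8
```

In the concurrent formulation the article favors, several such fusions run in parallel, one per reference frame, and their outputs are themselves combined by the same inverse-variance rule.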
Role of orientation reference selection in motion sickness
NASA Technical Reports Server (NTRS)
Peterka, Robert J.; Black, F. Owen
1992-01-01
The overall objective of this proposal is to understand the relationship between human orientation control and motion sickness susceptibility. Three areas related to orientation control will be investigated. These three areas are (1) reflexes associated with the control of eye movements and posture, (2) the perception of body rotation and position with respect to gravity, and (3) the strategies used to resolve sensory conflict situations which arise when different sensory systems provide orientation cues which are not consistent with one another or with previous experience. Of particular interest is the possibility that a subject may be able to ignore an inaccurate sensory modality in favor of one or more other sensory modalities which do provide accurate orientation reference information. We refer to this process as sensory selection. This proposal will attempt to quantify subjects' sensory selection abilities and determine if this ability confers some immunity to the development of motion sickness symptoms. Measurements of reflexes, motion perception, sensory selection abilities, and motion sickness susceptibility will concentrate on pitch and roll motions since these seem most relevant to the space motion sickness problem. Vestibulo-ocular (VOR) and oculomotor reflexes will be measured using a unique two-axis rotation device developed in our laboratory over the last seven years. Posture control reflexes will be measured using a movable posture platform capable of independently altering proprioceptive and visual orientation cues. Motion perception will be quantified using closed loop feedback technique developed by Zacharias and Young (Exp Brain Res, 1981). This technique requires a subject to null out motions induced by the experimenter while being exposed to various confounding sensory orientation cues. A subject's sensory selection abilities will be measured by the magnitude and timing of his reactions to changes in sensory environments. 
Motion sickness susceptibility will be measured by the time required to induce characteristic changes in the pattern of electrogastrogram recordings while exposed to various sensory environments during posture and motion perception tests. The results of this work are relevant to NASA's interest in understanding the etiology of space motion sickness. If any of the reflex, perceptual, or sensory selection abilities of subjects are found to correlate with motion sickness susceptibility, this work may be an important step in suggesting a method of predicting motion sickness susceptibility. If sensory selection can provide a means to avoid sensory conflict, then further work may lead to training programs which could enhance a subject's sensory selection ability and therefore minimize motion sickness susceptibility.
The 'sensory tolerance limit': A hypothetical construct determining exercise performance?
Hureau, Thomas J; Romer, Lee M; Amann, Markus
2018-02-01
Neuromuscular fatigue compromises exercise performance and is determined by central and peripheral mechanisms. Interactions between the two components of fatigue can occur via neural pathways, including feedback and feedforward processes. This brief review discusses the influence of feedback and feedforward mechanisms on exercise limitation. In terms of feedback mechanisms, particular attention is given to group III/IV sensory neurons which link limb muscle with the central nervous system. Central corollary discharge, a copy of the neural drive from the brain to the working muscles, provides a signal from the motor system to sensory systems and is considered a feedforward mechanism that might influence fatigue and consequently exercise performance. We highlight findings from studies supporting the existence of a 'critical threshold of peripheral fatigue', a previously proposed hypothesis based on the idea that a negative feedback loop operates to protect the exercising limb muscle from severe threats to homeostasis during whole-body exercise. While the threshold theory remains to be disproven within a given task, it is not generalisable across different exercise modalities. The 'sensory tolerance limit', a more theoretical concept, may address this issue and explain exercise tolerance in more global terms and across exercise modalities. The 'sensory tolerance limit' can be viewed as a negative feedback loop which accounts for the sum of all feedback (locomotor muscles, respiratory muscles, organs, and muscles not directly involved in exercise) and feedforward signals processed within the central nervous system with the purpose of regulating the intensity of exercise to ensure that voluntary activity remains tolerable.
Yang, Chao-Yang; Wu, Cheng-Tse
2017-03-01
This research investigated the risks involved in bicycle riding while using various sensory modalities to deliver training information. To understand the risks associated with using bike computers, this study evaluated hazard perception performance through lab-based simulations of authentic riding conditions. Analyses of hazard sensitivity (d', from signal detection theory), rider response times, and eye glances provided insight into the risks of using bike computers. In this study, 30 participants were tested with eight hazard perception tasks while they maintained a cadence of 60 ± 5 RPM and used bike computers with different sensory displays, namely visual, auditory, and tactile feedback signals. The results indicated that synchronously using different sense organs to receive cadence feedback significantly affects hazard perception performance; direct visual information leads to the worst rider distraction, with a mean sensitivity to hazards (d') of -1.03. For systems with multiple interacting sensory aids, auditory aids were found to result in the greatest reduction in sensitivity to hazards (d' mean = -0.57), whereas tactile sensory aids reduced the degree of rider distraction (d' mean = -0.23). Our work complements existing work in this domain by advancing the understanding of how to design devices that deliver information subtly, thereby preventing disruption of a rider's perception of road hazards. Copyright © 2016 Elsevier Ltd. All rights reserved.
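The hazard sensitivity measure d' used in this study comes from signal detection theory: d' = z(hit rate) − z(false-alarm rate). A minimal sketch of the computation is below; the log-linear correction for extreme rates is a common convention, not necessarily the one the authors used.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps the z-scores
    finite when a rate would otherwise be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

A rider who detects most hazards while rarely responding to non-hazards yields a large positive d'; the negative means reported above reflect d' differences relative to a baseline condition.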
Atkinson, Joanna R.
2006-01-01
The study of voice-hallucinations in deaf individuals, who exploit the visuomotor rather than auditory modality for communication, provides rare insight into the relationship between sensory experience and how “voices” are perceived. Relatively little is known about the perceptual characteristics of voice-hallucinations in congenitally deaf people who use lip-reading or sign language as their preferred means of communication. The existing literature on hallucinations in deaf people is reviewed, alongside consideration of how such phenomena may fit into explanatory subvocal articulation hypotheses proposed for auditory verbal hallucinations in hearing people. It is suggested that a failure in subvocal articulation processes may account for voice-hallucinations in both hearing and deaf people but that the distinct way in which hallucinations are experienced may be due to differences in a sensory feedback component, which is influenced by both auditory deprivation and language modality. This article highlights how the study of deaf people may inform wider understanding of auditory verbal hallucinations and subvocal processes generally. PMID:16510696
Beyond sensory images: Object-based representation in the human ventral pathway
Pietrini, Pietro; Furey, Maura L.; Ricciardi, Emiliano; Gobbini, M. Ida; Wu, W.-H. Carolyn; Cohen, Leonardo; Guazzelli, Mario; Haxby, James V.
2004-01-01
We investigated whether the topographically organized, category-related patterns of neural response in the ventral visual pathway are a representation of sensory images or a more abstract representation of object form that is not dependent on sensory modality. We used functional MRI to measure patterns of response evoked during visual and tactile recognition of faces and manmade objects in sighted subjects and during tactile recognition in blind subjects. Results showed that visual and tactile recognition evoked category-related patterns of response in a ventral extrastriate visual area in the inferior temporal gyrus that were correlated across modality for manmade objects. Blind subjects also demonstrated category-related patterns of response in this “visual” area, and in more ventral cortical regions in the fusiform gyrus, indicating that these patterns are not due to visual imagery and, furthermore, that visual experience is not necessary for category-related representations to develop in these cortices. These results demonstrate that the representation of objects in the ventral visual pathway is not simply a representation of visual images but, rather, is a representation of more abstract features of object form. PMID:15064396
Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation.
Brito, Carlos S N; Gerstner, Wulfram
2016-09-01
The development of sensory receptive fields has been modeled in the past by a variety of models including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely nonlinear Hebbian learning. When nonlinear Hebbian learning is applied to natural images, receptive field shapes were strongly constrained by the input statistics and preprocessing, but exhibited only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity are necessary for the development of localized receptive fields. The analysis of alternative sensory modalities such as auditory models or V2 development leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
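The unifying rule described in this abstract has the general form Δw ∝ x·f(wᵀx) together with a norm constraint on the weights. A minimal sketch of one update step is below; the cubic nonlinearity and explicit renormalization are illustrative choices, not the paper's specific parameterization.

```python
import numpy as np

def nonlinear_hebbian_step(w, x, eta=0.01, f=lambda y: y ** 3):
    """One nonlinear Hebbian update: dw = eta * x * f(w.x), then renormalize.

    f is an arbitrary nonlinearity (cubic here, as an assumption).
    Under a norm constraint, this family of rules aligns w with
    statistically dominant directions of the input, e.g. localized
    receptive fields when x is drawn from natural-image patches.
    """
    y = w @ x                     # postsynaptic activity
    w = w + eta * x * f(y)        # Hebbian update gated by the nonlinearity
    return w / np.linalg.norm(w)  # enforce the norm constraint
```

Iterating this step over an input ensemble is what the abstract calls "applying nonlinear Hebbian learning to natural images"; sparse coding, ICA, and BCM correspond to particular choices of f.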
Michalka, Samantha W; Kong, Lingqiang; Rosen, Maya L; Shinn-Cunningham, Barbara G; Somers, David C
2015-08-19
The frontal lobes control wide-ranging cognitive functions; however, functional subdivisions of human frontal cortex are only coarsely mapped. Here, functional magnetic resonance imaging reveals two distinct visual-biased attention regions in lateral frontal cortex, superior precentral sulcus (sPCS) and inferior precentral sulcus (iPCS), anatomically interdigitated with two auditory-biased attention regions, transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). Intrinsic functional connectivity analysis demonstrates that sPCS and iPCS fall within a broad visual-attention network, while tgPCS and cIFS fall within a broad auditory-attention network. Interestingly, we observe that spatial and temporal short-term memory (STM), respectively, recruit visual and auditory attention networks in the frontal lobe, independent of sensory modality. These findings not only demonstrate that both sensory modality and information domain influence frontal lobe functional organization, they also demonstrate that spatial processing co-localizes with visual processing and that temporal processing co-localizes with auditory processing in lateral frontal cortex. Copyright © 2015 Elsevier Inc. All rights reserved.
Borisoff, Jaimie F; Elliott, Stacy L; Hocaloski, Shea; Birch, Gary E
2010-11-01
Sexual health is often severely impacted after spinal cord injury (SCI). Current research has primarily addressed male erection and fertility, when in fact pleasure and orgasm are top priorities for functional recovery. Sensory substitution technology operates by communicating input from a lost sensory pathway to another intact sensory modality. It was hypothesized that through training and neuroplasticity, mapped tongue sensations would be interpreted as sensory perceptions arising from insensate genitalia, and improve the sexual experience. To report the development of a sensory substitution system for the sexual rehabilitation of men with chronic SCI. Subjects performed sexual self-stimulation while using a novel sensory substitution device that mapped the stroking motion of the hand to a congruous flow of electrocutaneous sensations on the tongue. Three questionnaires, along with structured interviews, were used to rate the perceived sexual sensations following each training session. Subjects completed 20 sessions over approximately 8 weeks of training. Each subject reported an increased level of sexual pleasure soon after training with the device. Each subject also reported specific perceptions of cutaneous-like sensations below their lesion that matched their hand motion. Later sessions, while remaining pleasurable and interesting, were inconsistent, and no subject reported an orgasmic feeling during a session. The subjects were all interested in continuing training with the device at home, if possible, in the future. This study is the first to show that sensory substitution is a possible therapeutic avenue for sexual rehabilitation in people lacking normal genital sexual sensations. However more research, for instance on frequency and duration of training, is needed in order to induce functional lasting neuroplasticity. 
In the near term, SCI rehabilitation should more fully address sexuality and the role of neuroplasticity for promoting the maximal potential for sexual pleasure and orgasm. © 2010 International Society for Sexual Medicine.
Sanz-Cervera, Pilar; Pastor-Cerezuela, Gemma; González-Sala, Francisco; Tárraga-Mínguez, Raúl; Fernández-Andrés, Maria-Inmaculada
2017-01-01
Children with neurodevelopmental disorders often show impairments in sensory processing (SP) and higher functions. The main objective of this study was to compare SP, praxis and social participation (SOC) in four groups of children: ASD Group (n = 21), ADHD Group (n = 21), ASD+ADHD Group (n = 21), and Comparison Group (n = 27). Participants were the parents and teachers of these children who were 5–8 years old (M = 6.32). They completed the Sensory Processing Measure (SPM) to evaluate the sensory profile, praxis and SOC of the children in both the home and classroom contexts. In the home context, the most affected was the ASD+ADHD group. The ADHD group obtained higher scores than the ASD group on the Body Awareness (BOD) subscale, indicating a higher level of dysfunction. The ASD group, however, did not obtain higher scores than the ADHD group on any subscale. In the classroom context, the most affected were the two ASD groups: the ASD+ADHD group obtained higher scores than the ADHD group on the Hearing (HEA) and Social Participation (SOC) subscales, and the ASD group obtained higher scores than the ADHD group on the SOC subscale. Regarding sensory modalities, difficulties in proprioception seem to be more characteristic of the ADHD condition. As for higher-level functioning, social difficulties seem to be more characteristic of the ASD condition. Differences between the two contexts were only found in the ASD group, which could be related to contextual hyperselectivity, an inherent autistic feature. Despite possible individual differences, specific intervention programs should be developed to improve the sensory challenges faced by children with different diagnoses. PMID:29075217
Experience with a Talker Can Transfer Across Modalities to Facilitate Lipreading
Sanchez, Kauyumari; Dias, James W.; Rosenblum, Lawrence D.
2013-01-01
Rosenblum, Miller, and Sanchez (2007) found that participants first trained to lipread a particular talker were then better able to perceive the auditory speech of that same talker, compared to that of a novel talker. This suggests that the talker experience a perceiver gains in one sensory modality can be transferred to another modality to make that speech easier to perceive. An experiment was conducted to examine whether this cross-sensory transfer of talker experience could occur: 1) from auditory to lipread speech; 2) with subjects not screened for adequate lipreading skill; 3) when both a familiar and unfamiliar talker are presented during lipreading; and 4) for both old (presentation set) and new words. Subjects were first asked to identify a set of words from a talker. They were then asked to perform a lipreading task from two faces, one of which was of the same talker they heard in the first phase of the experiment. Results revealed that subjects who lipread from the same talker they had heard performed better than those who lipread a different talker, regardless of whether the words were old or new. These results add further evidence that learning of amodal talker information can facilitate speech perception across modalities and also suggest that this information is not restricted to previously heard words. PMID:23955059
Neural Signature of Value-Based Sensorimotor Prioritization in Humans
Blangero, Annabelle
2017-01-01
In situations in which impending sensory events demand fast action choices, we must be ready to prioritize higher-value courses of action to avoid missed opportunities. When such a situation first presents itself, stimulus–action contingencies and their relative value must be encoded to establish a value-biased state of preparation for an impending sensorimotor decision. Here, we sought to identify neurophysiological signatures of such processes in the human brain (both female and male). We devised a task requiring fast action choices based on the discrimination of a simple visual cue in which the differently valued sensory alternatives were presented 750–800 ms before as peripheral “targets” that specified the stimulus–action mapping for the upcoming decision. In response to the targets, we identified a discrete, transient, spatially selective signal in the event-related potential (ERP), which scaled with relative value and strongly predicted the degree of behavioral bias in the upcoming decision both across and within subjects. This signal is not compatible with any hitherto known ERP signature of spatial selection and also bears novel distinctions with respect to characterizations of value-sensitive, spatially selective activity found in sensorimotor areas of nonhuman primates. Specifically, a series of follow-up experiments revealed that the signal was reliably invoked regardless of response laterality, response modality, sensory feature, and reward valence. It was absent, however, when the response deadline was relaxed and the strategic need for biasing removed. Therefore, more than passively representing value or salience, the signal appears to play a versatile and active role in adaptive sensorimotor prioritization. SIGNIFICANCE STATEMENT In many situations such as fast-moving sports, we must be ready to act fast in response to sensory events and, in our preparation, prioritize courses of action that lead to greater rewards. 
Although behavioral effects of value biases in sensorimotor decision making have been widely studied, little is known about the neural processes that set these biases in place beforehand. Here, we report the discovery of a transient, spatially selective neural signal in humans that encodes the relative value of competing decision alternatives and strongly predicts behavioral value biases in decisions made ∼500 ms later. Follow-up manipulations of value differential, reward valence, response modality, sensory features, and time constraints establish that the signal reflects an active, feature- and effector-general preparatory mechanism for value-based prioritization. PMID:28982706
Object discrimination using optimized multi-frequency auditory cross-modal haptic feedback.
Gibson, Alison; Artemiadis, Panagiotis
2014-01-01
As the field of brain-machine interfaces and neuro-prosthetics continues to grow, there is a high need for sensor and actuation mechanisms that can provide haptic feedback to the user. Current technologies employ expensive, invasive and often inefficient force feedback methods, resulting in an unrealistic solution for individuals who rely on these devices. This paper responds through the development, integration and analysis of a novel feedback architecture where haptic information during the neural control of a prosthetic hand is perceived through multi-frequency auditory signals. Through representing force magnitude with volume and force location with frequency, the feedback architecture can translate the haptic experiences of a robotic end effector into the alternative sensory modality of sound. Previous research with the proposed cross-modal feedback method confirmed its learnability, so the current work aimed to investigate which frequency map (i.e. frequency-specific locations on the hand) is optimal in helping users distinguish between hand-held objects and tasks associated with them. After short use with the cross-modal feedback during the electromyographic (EMG) control of a prosthetic hand, testing results show that users are able to use auditory feedback alone to discriminate between everyday objects. While users showed adaptation to three different frequency maps, the simplest map containing only two frequencies was found to be the most useful in discriminating between objects. This outcome provides support for the feasibility and practicality of the cross-modal feedback method during the neural control of prosthetics.
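The encoding described here maps force location to tone frequency and force magnitude to volume. A minimal sketch of such a mapping is below; the two-frequency map mirrors the simplest map the study found most useful, but the specific locations and frequency values are assumptions.

```python
def force_to_tone(force, location, max_force=10.0):
    """Cross-modal haptic-to-audio encoding: location -> frequency,
    force magnitude -> volume.

    The two-entry frequency map and the 10 N full-scale force are
    hypothetical values chosen for illustration.
    """
    freq_map = {"thumb": 440.0, "fingers": 880.0}   # Hz, assumed mapping
    volume = max(0.0, min(force / max_force, 1.0))  # clamp to [0, 1]
    return freq_map[location], volume
```

For example, a moderate grip at the thumb would play the lower tone at half volume, letting the user infer both where and how hard the prosthesis is pressing.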
The mere exposure effect in the domain of haptics.
Jakesch, Martina; Carbon, Claus-Christian
2012-01-01
Zajonc showed that the attitude towards stimuli that one had been previously exposed to is more positive than towards novel stimuli. This mere exposure effect (MEE) has been tested extensively using various visual stimuli. Research on the MEE is sparse, however, for other sensory modalities. We used objects of two material categories (stone and wood) and two complexity levels (simple and complex) to test the influence of exposure frequency (F0 = novel stimuli, F2 = stimuli exposed twice, F10 = stimuli exposed ten times) under two sensory modalities (haptics only and haptics & vision). Effects of exposure frequency were found for high complex stimuli with significantly increasing liking from F0 to F2 and F10, but only for the stone category. Analysis of "Need for Touch" data showed the MEE in participants with high need for touch, which suggests different sensitivity or saturation levels of MEE. These different sensitivity or saturation levels might also reflect the effects of expertise on the haptic evaluation of objects. It seems that haptic and cross-modal MEEs are influenced by factors similar to those in the visual domain indicating a common cognitive basis.
Cornell Kärnekull, Stina; Arshamian, Artin; Nilsson, Mats E.; Larsson, Maria
2016-01-01
Although evidence is mixed, studies have shown that blind individuals perform better than sighted at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests in absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition, which was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of an overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity. PMID:27729884
A number-form area in the blind
Abboud, Sami; Maidenbaum, Shachar; Dehaene, Stanislas; Amedi, Amir
2015-01-01
Distinct preference for visual number symbols was recently discovered in the human right inferior temporal gyrus (rITG). It remains unclear how this preference emerges, what shape biases contribute to its formation, and whether visual processing underlies it. Here we use congenital blindness as a model for brain development without visual experience. During fMRI, we present blind subjects with shapes encoded using a novel visual-to-music sensory-substitution device (The EyeMusic). Greater activation is observed in the rITG when subjects process symbols as numbers compared with control tasks on the same symbols. Using resting-state fMRI in the blind and sighted, we further show that the areas with preference for numerals and letters exhibit distinct patterns of functional connectivity with quantity and language-processing areas, respectively. Our findings suggest that specificity in the ventral ‘visual’ stream can emerge independently of sensory modality and visual experience, under the influence of distinct connectivity patterns. PMID:25613599
Sörqvist, Patrik; Stenfelt, Stefan; Rönnberg, Jerker
2012-11-01
Two fundamental research questions have driven attention research in the past: One concerns whether selection of relevant information among competing, irrelevant, information takes place at an early or at a late processing stage; the other concerns whether the capacity of attention is limited by a central, domain-general pool of resources or by independent, modality-specific pools. In this article, we contribute to these debates by showing that the auditory-evoked brainstem response (an early stage of auditory processing) to task-irrelevant sound decreases as a function of central working memory load (manipulated with a visual-verbal version of the n-back task). Furthermore, individual differences in central/domain-general working memory capacity modulated the magnitude of the auditory-evoked brainstem response, but only in the high working memory load condition. The results support a unified view of attention whereby the capacity of a late/central mechanism (working memory) modulates early precortical sensory processing.
Smith, Cody J.; O’Brien, Timothy; Chatzigeorgiou, Marios; Spencer, W. Clay; Feingold-Link, Elana; Husson, Steven J.; Hori, Sayaka; Mitani, Shohei; Gottschalk, Alexander; Schafer, William R.; Miller, David M.
2013-01-01
Sensory neurons adopt distinct morphologies and functional modalities to mediate responses to specific stimuli. Transcription factors and their downstream effectors orchestrate this outcome but are incompletely defined. Here, we show that different classes of mechanosensory neurons in C. elegans are distinguished by the combined action of the transcription factors MEC-3, AHR-1, and ZAG-1. Low levels of MEC-3 specify the elaborate branching pattern of PVD nociceptors, whereas high MEC-3 is correlated with the simple morphology of AVM and PVM touch neurons. AHR-1 specifies AVM touch neuron fate by elevating MEC-3 while simultaneously blocking expression of nociceptive genes such as the MEC-3 target, the claudin-like membrane protein HPO-30, that promotes the complex dendritic branching pattern of PVD. ZAG-1 exercises a parallel role to prevent PVM from adopting the PVD fate. The conserved dendritic branching function of the Drosophila AHR-1 homolog, Spineless, argues for similar pathways in mammals. PMID:23889932
Eliciting naturalistic cortical responses with a sensory prosthesis via optimized microstimulation
NASA Astrophysics Data System (ADS)
Choi, John S.; Brockmeier, Austin J.; McNiel, David B.; von Kraus, Lee M.; Príncipe, José C.; Francis, Joseph T.
2016-10-01
Objective. Lost sensations, such as touch, could one day be restored by electrical stimulation along the sensory neural pathways. Such stimulation, when informed by electronic sensors, could provide naturalistic cutaneous and proprioceptive feedback to the user. Perceptually, microstimulation of somatosensory brain regions produces localized, modality-specific sensations, and several spatiotemporal parameters have been studied for their discernibility. However, systematic methods for encoding a wide array of naturally occurring stimuli into biomimetic percepts via multi-channel microstimulation are lacking. More specifically, generating spatiotemporal patterns for explicitly evoking naturalistic neural activation has not yet been explored. Approach. We address this problem by first modeling the dynamical input-output relationship between multichannel microstimulation and downstream neural responses, and then optimizing the input pattern to reproduce naturally occurring touch responses as closely as possible. Main results. Here we show that such optimization produces responses in the S1 cortex of the anesthetized rat that are highly similar to natural, tactile-stimulus-evoked counterparts. Furthermore, information on both pressure and location of the touch stimulus was found to be highly preserved. Significance. Our results suggest that the currently presented stimulus optimization approach holds great promise for restoring naturalistic levels of sensation.
Schmidt, Timo Torsten; Blankenburg, Felix
2018-05-31
Working memory (WM) studies have been essential for ascertaining how the brain flexibly handles mentally represented information in the absence of sensory stimulation. Most studies on the memory of sensory stimulus features have focused, however, on the visual domain. Here, we report a human WM study in the tactile modality where participants had to memorize the spatial layout of patterned Braille-like stimuli presented to the index finger. We used a whole-brain searchlight approach in combination with multi-voxel pattern analysis (MVPA) to investigate tactile WM representations without a priori assumptions about which brain regions code tactospatial information. Our analysis revealed that posterior and parietal cortices, as well as premotor regions, retained information across the twelve-second delay phase. Interestingly, parts of this brain network were previously shown to also contain information of visuospatial WM. Also, by specifically testing somatosensory regions for WM representations, we observed content-specific activation patterns in primary somatosensory cortex (SI). Our findings demonstrate that tactile WM depends on a distributed network of brain regions in analogy to the representation of visuospatial information. Copyright © 2018. Published by Elsevier Inc.
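The whole-brain searchlight MVPA described in this abstract asks, for every voxel, whether the local activity pattern in its neighborhood can decode the memorized condition. A schematic sketch of the core step is below; it uses a simple leave-one-out nearest-centroid decoder rather than the authors' actual classifier, which the abstract does not specify.

```python
import numpy as np

def nearest_centroid_cv(patterns, labels):
    """Leave-one-out decoding accuracy with a nearest-centroid classifier.

    patterns: (n_trials, n_features) activity within one searchlight sphere
    labels:   (n_trials,) condition label per trial
    In a searchlight analysis this score would be computed for the sphere
    centred on every voxel, mapping where patterns carry stimulus information.
    """
    correct = 0
    for i in range(len(labels)):
        train = np.ones(len(labels), dtype=bool)
        train[i] = False  # hold out trial i
        centroids = {c: patterns[train & (labels == c)].mean(axis=0)
                     for c in np.unique(labels[train])}
        pred = min(centroids,
                   key=lambda c: np.linalg.norm(patterns[i] - centroids[c]))
        correct += pred == labels[i]
    return correct / len(labels)
```

Spheres whose cross-validated accuracy exceeds chance (after correction across voxels) are the regions reported as retaining delay-phase information.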
Keeping time in the brain: Autism spectrum disorder and audiovisual temporal processing.
Stevenson, Ryan A; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Camarata, Stephen; Wallace, Mark T
2016-07-01
A growing area of interest and relevance in the study of autism spectrum disorder (ASD) focuses on the relationship between multisensory temporal function and the behavioral, perceptual, and cognitive impairments observed in ASD. Atypical sensory processing is becoming increasingly recognized as a core component of autism, with evidence of atypical processing across a number of sensory modalities. These deviations from typical processing underscore the value of interpreting ASD within a multisensory framework. Furthermore, converging evidence illustrates that these differences in audiovisual processing may be specifically related to temporal processing. This review seeks to bridge the connection between temporal processing and audiovisual perception, and to elaborate on emerging data showing differences in audiovisual temporal function in autism. We also discuss the consequence of such changes, the specific impact on the processing of different classes of audiovisual stimuli (e.g. speech vs. nonspeech, etc.), and the presumptive brain processes and networks underlying audiovisual temporal integration. Finally, possible downstream behavioral implications, and possible remediation strategies are outlined. Autism Res 2016, 9: 720-738. © 2015 International Society for Autism Research, Wiley Periodicals, Inc. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
Assessment of feedback modalities for wearable visual aids in blind mobility
Sorrentino, Paige; Bohlool, Shadi; Zhang, Carey; Arditti, Mort; Goodrich, Gregory; Weiland, James D.
2017-01-01
Sensory substitution devices engage sensory modalities other than vision to communicate information typically obtained through the sense of sight. In this paper, we examine the ability of subjects who are blind to follow simple verbal and vibrotactile commands that allow them to navigate a complex path. A total of eleven visually impaired subjects were enrolled in the study. Prototype systems were developed to deliver verbal and vibrotactile commands to allow an investigator to guide a subject through a course. Using these feedback modes, subjects could follow commands easily and navigate significantly faster than with their cane alone (p < 0.05). The feedback modes were similar with respect to the increased speed for course completion. Subjects rated usability of the feedback systems as “above average” with scores of 76.3 and 90.9 on the system usability scale. PMID:28182731
Alcaire, Florencia; Antúnez, Lucía; Vidal, Leticia; Giménez, Ana; Ares, Gastón
2017-07-01
Reformulation of industrialized products has been regarded as one of the most cost-effective strategies to reduce sugar intake. Although non-nutritive sweeteners have been extensively used to reduce the added sugar content of these products, increasing evidence about the existence of compensatory energy intake mechanisms makes it necessary to develop alternative strategies to achieve rapid sugar reductions. In this context, the aim of the present work was to evaluate aroma-related cross-modal interactions for sugar reduction in vanilla milk desserts. In particular, the influence of increasing vanilla concentration and the joint increase of vanilla and starch concentration on consumer sensory and hedonic perception was assessed. Two studies with 100 consumers each were conducted, in which a total of 15 samples were evaluated. For each sample, consumers rated their overall liking and answered a check-all-that-apply (CATA) question comprising 12 flavour and texture terms. Sugar reduction caused significant changes in the flavour and texture characteristics of the desserts. An increase in vanilla concentration had a minor effect on their sensory characteristics. However, increasing both vanilla and starch concentration led to an increase in vanilla flavour and sweetness perception and reduced changes in consumer hedonic perception. These results showed the potential of aroma-related cross-modal interactions for minimizing the sensory changes caused by sugar reduction. These strategies could contribute to product reformulation without the need to use non-nutritive sweeteners. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ehrenfeld, Stephan; Herbort, Oliver; Butz, Martin V.
2013-01-01
This paper addresses the question of how the brain maintains a probabilistic body state estimate over time from a modeling perspective. The neural Modular Modality Frame (nMMF) model simulates such a body state estimation process by continuously integrating redundant, multimodal body state information sources. The body state estimate itself is distributed over separate, but bidirectionally interacting modules. nMMF compares the incoming sensory and present body state information across the interacting modules and fuses the information sources accordingly. At the same time, nMMF enforces body state estimation consistency across the modules. nMMF is able to detect conflicting sensory information and to consequently decrease the influence of implausible sensor sources on the fly. In contrast to the previously published Modular Modality Frame (MMF) model, nMMF offers a biologically plausible neural implementation based on distributed, probabilistic population codes. Besides its neural plausibility, the neural encoding has the advantage of enabling (a) additional probabilistic information flow across the separate body state estimation modules and (b) the representation of arbitrary probability distributions of a body state. The results show that the neural estimates can detect and decrease the impact of false sensory information, can propagate conflicting information across modules, and can improve overall estimation accuracy due to additional module interactions. Even bodily illusions, such as the rubber hand illusion, can be simulated with nMMF. We conclude with an outlook on the potential of modeling human data and of invoking goal-directed behavioral control. PMID:24191151
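The multimodal fusion principle the nMMF abstract describes can be illustrated with the standard precision-weighted (product-of-Gaussians) rule for combining redundant cues. This is a minimal sketch of that general rule, not the published nMMF model; the function name and example values are invented for illustration.

```python
def fuse_gaussian_estimates(means, variances):
    """Precision-weighted fusion of redundant 1-D Gaussian estimates.

    Each cue contributes in proportion to its precision (inverse variance),
    so unreliable sources are automatically down-weighted; the fused
    estimate is always more precise than any single cue.
    """
    precisions = [1.0 / v for v in variances]
    fused_var = 1.0 / sum(precisions)
    fused_mean = fused_var * sum(p * m for p, m in zip(precisions, means))
    return fused_mean, fused_var

# Vision localizes the hand at 0.0 (reliable, var 0.1); proprioception
# says 1.0 (noisy, var 0.4). The fused estimate sits near the reliable cue.
mean, var = fuse_gaussian_estimates([0.0, 1.0], [0.1, 0.4])
print(mean, var)  # 0.2 0.08
```

Models like nMMF extend this idea with population codes, cross-module consistency constraints, and on-the-fly inflation of the variance assigned to implausible sensors.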
Pregnenolone blocks cannabinoid-induced acute psychotic-like states in mice
Busquets-Garcia, Arnau; Soria-Gómez, Edgar; Redon, Bastien; Mackenbach, Yarmo; Chaouloff, Francis; Varilh, Marjorie; Ferreira, Guillaume; Piazza, Pier-Vincenzo; Marsicano, Giovanni
2017-01-01
Cannabis-induced acute psychotic-like states (CIAPS) represent a growing health issue, but their underlying neurobiological mechanisms are poorly understood. The use of antipsychotics and benzodiazepines against CIAPS is limited by side effects and/or by their ability to tackle only certain aspects of psychosis. Thus, safer wide-spectrum treatments are currently needed. Although the blockade of cannabinoid type-1 receptor (CB1) has been suggested as a therapeutic means against CIAPS, the use of orthosteric CB1 receptor full antagonists is strongly limited by undesired side effects and low efficacy. The neurosteroid pregnenolone has been recently shown to act as a potent endogenous allosteric signal-specific inhibitor of CB1 receptors. Thus, we tested in mice the potential therapeutic use of pregnenolone against acute psychotic-like effects of Δ9-tetrahydrocannabinol (THC), the main psychoactive component of cannabis. We found that pregnenolone blocks a wide spectrum of THC-induced endophenotypes typically associated with psychotic-like states, including impairments in cognitive functions, somatosensory gating and social interaction. In order to capture THC-induced positive psychotic-like symptoms (e.g. perceptual delusions), we adapted a behavioral paradigm based on associations between different sensory modalities and selective devaluation, allowing the measurement of mental sensory representations in mice. Acting at hippocampal CB1 receptors, THC impaired the correct processing of mental sensory representations (reality testing) in an antipsychotic- and pregnenolone-sensitive manner. Overall, this work reveals that signal-specific inhibitors mimicking pregnenolone effects can be considered as promising new therapeutic tools to treat CIAPS. PMID:28220044
Adaptation of Physiological and Cognitive Workload via Interactive Multi-modal Displays
2014-05-28
Received papers reported under this award: James Merlo, Joseph E. Mercado, Jan B. F. Van Erp, Peter A. Hancock, "Improving..."; Joseph Mercado, Timothy White, Peter Hancock, "Effects of Cross-Modal Sensory Cueing Automation Failure in a Target Detection Task". Supported personnel: Joseph Mercado (0.50 FTE), Timothy White (0.50 FTE).
Saliency Detection as a Reactive Process: Unexpected Sensory Events Evoke Corticomuscular Coupling
Kilintari, Marina; Srinivasan, Mandayam; Haggard, Patrick
2018-01-01
Survival in a fast-changing environment requires animals not only to detect unexpected sensory events, but also to react. In humans, these salient sensory events generate large electrocortical responses, which have been traditionally interpreted within the sensory domain. Here we describe a basic physiological mechanism coupling saliency-related cortical responses with motor output. In four experiments conducted on 70 healthy participants, we show that salient substartle sensory stimuli modulate isometric force exertion by human participants, and that this modulation is tightly coupled with electrocortical activity elicited by the same stimuli. We obtained four main results. First, the force modulation follows a complex triphasic pattern consisting of alternating decreases and increases of force, time-locked to stimulus onset. Second, this modulation occurs regardless of the sensory modality of the eliciting stimulus. Third, the magnitude of the force modulation is predicted by the amplitude of the electrocortical activity elicited by the same stimuli. Fourth, both neural and motor effects are not reflexive but depend on contextual factors. Together, these results indicate that sudden environmental stimuli have an immediate effect on motor processing, through a tight corticomuscular coupling. These observations suggest that saliency detection is not merely perceptive but reactive, preparing the animal for subsequent appropriate actions. SIGNIFICANCE STATEMENT Salient events occurring in the environment, regardless of their modalities, elicit large electrical brain responses, dominated by a widespread “vertex” negative-positive potential. This response is the largest synchronization of neural activity that can be recorded from a healthy human being. Current interpretations assume that this vertex potential reflects sensory processes. 
Contrary to this general assumption, we show that the vertex potential is strongly coupled with a modulation of muscular activity that follows the same pattern. Both the vertex potential and its motor effects are not reflexive but strongly depend on contextual factors. These results reconceptualize the significance of these evoked electrocortical responses, suggesting that saliency detection is not merely perceptive but reactive, preparing the animal for subsequent appropriate actions. PMID:29378865
Septo-hippocampal GABAergic signaling across multiple modalities in awake mice.
Kaifosh, Patrick; Lovett-Barron, Matthew; Turi, Gergely F; Reardon, Thomas R; Losonczy, Attila
2013-09-01
Hippocampal interneurons receive GABAergic input from the medial septum. Using two-photon Ca(2+) imaging of axonal boutons in hippocampal CA1 of behaving mice, we found that populations of septo-hippocampal GABAergic boutons were activated during locomotion and salient sensory events; sensory responses scaled with stimulus intensity and were abolished by anesthesia. We found similar activity patterns among boutons with common putative postsynaptic targets, with low-dimensional bouton population dynamics being driven primarily by presynaptic spiking.
Danger detection and escape behaviour in wood crickets.
Dupuy, Fabienne; Casas, Jérôme; Body, Mélanie; Lazzari, Claudio R
2011-07-01
The wind-sensitive cercal system of orthopteroid insects, which mediates detection of an approaching predator, is a highly sensitive sensory system. It has been intensively analysed from behavioural and neurobiological points of view, and constitutes a classical model system in neuroethology. In orthopteroids, escape behaviour is triggered by the detection of air currents produced by approaching objects, allowing these insects to keep away from potential dangers. Nevertheless, escape behaviour has not been studied in terms of success. Moreover, an attacking predator is more than "air movement"; it is also a visible moving entity. The sensory basis of predator detection is thus probably more complex than the perception of air movement by the cerci. We used a piston mimicking an attacking running predator for a quantitative evaluation of the escape behaviour of wood crickets Nemobius sylvestris. The movement of the piston not only generates air movement; it can also be seen by the insect and can touch it, as a natural predator would. This procedure allowed us to study the escape behaviour in terms of detection and also in terms of success. Our results showed that 5-52% of crickets that detected the piston thrust were indeed touched. Crickets escaped stimulation from behind better than stimulation from the front, even though they detected the approaching object similarly in both cases. After cerci ablation, 48% of crickets were still able to detect a piston approaching from behind (compared with 79% detection in intact insects) and 24% of crickets escaped successfully (compared with 62% of intact insects). Thus, the cerci play a major role in the detection of an approaching object, but other mechanoreceptors or sensory modalities are implicated in this detection.
We cannot conclude that other sensory modalities participate in the behaviour of intact animals; rather, in the absence of cerci, other sensory modalities can partially mediate the behaviour. Nevertheless, neither the antennae nor the eyes seem to be used for detecting approaching objects, as their inactivation did not reduce detection and escape abilities in the presence of cerci. Copyright © 2011 Elsevier Ltd. All rights reserved.
Cross-Modal Correspondences Enhance Performance on a Colour-to-Sound Sensory Substitution Device.
Hamilton-Fletcher, Giles; Wright, Thomas D; Ward, Jamie
Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have avoided colour or, when they do encode colour, have assigned sounds to colours in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users whose device was coded either in line with or opposite to sound–colour correspondences. Users with the correspondence-based mappings showed improved colour memory and made fewer colour errors. Interestingly, the colour–sound mappings that provided the highest improvements during the associative memory task also saw the greatest gains for recognising realistic objects featuring those colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance for both colour and correspondences for sensory substitution use.
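A correspondence-based colour-to-sound mapping of the kind described above can be sketched in a few lines. The mapping below (lighter colours to higher pitch, more saturated colours to louder tones) follows commonly reported cross-modal correspondences, but it is a hypothetical illustration, not the Creole's actual algorithm; the function name and parameter ranges are invented.

```python
import colorsys

def colour_to_sound(r, g, b, f_min=110.0, f_max=880.0):
    """Map an RGB colour (components in [0, 1]) to (frequency_hz, amplitude).

    Lighter colours map to higher pitch, reflecting the well-documented
    luminance-pitch correspondence; more saturated colours map to louder
    tones. Pitch is spaced logarithmically, matching how pitch is perceived.
    """
    _h, lightness, sat = colorsys.rgb_to_hls(r, g, b)
    freq = f_min * (f_max / f_min) ** lightness  # log-spaced pitch scale
    amplitude = 0.2 + 0.8 * sat                  # saturation -> loudness
    return freq, amplitude

print(colour_to_sound(1.0, 1.0, 1.0))  # white -> highest pitch (880 Hz)
print(colour_to_sound(0.0, 0.0, 0.0))  # black -> lowest pitch (110 Hz)
```

An inverted ("opposite") coding condition, as used in the study's control group, would simply reverse one of these dimensions, e.g. mapping lighter colours to lower pitch.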
Auditory and motion metaphors have different scalp distributions: an ERP study
Schmidt-Snoek, Gwenda L.; Drew, Ashley R.; Barile, Elizabeth C.; Agauas, Stephen J.
2015-01-01
While many links have been established between sensory-motor words used literally (kick the ball) and sensory-motor regions of the brain, it is less clear whether metaphorically used words (kick the habit) also show such signs of “embodiment.” Additionally, not much is known about the timing or nature of the connection between language and sensory-motor neural processing. We used stimuli divided into three figurativeness conditions—literal, metaphor, and anomalous—and two modality conditions—auditory (Her limousine was a privileged snort) and motion (The editorial was a brass-knuckle punch). The conditions were matched on a large number of potentially confounding factors including cloze probability. The electroencephalographic response to the final word of each sentence was measured at 64 electrode sites on the scalp of 22 participants and event-related potentials (ERPs) calculated. Analysis revealed greater amplitudes for metaphorical than literal sentences in both 350–500 ms and 500–650 ms timeframes. Results supported the possibility of different neural substrates for motion and auditory sentences. Greater differences for motion sentences were seen in the left posterior and left central electrode sites than elsewhere on the scalp. These findings are consistent with a sensory-motor neural categorization of language and with the integration of modal and amodal information during the N400 and P600 timeframes. PMID:25821433
Dissociation of motor and sensory inhibition processes in normal aging.
Anguera, Joaquin A; Gazzaley, Adam
2012-04-01
Age-related cognitive impairments have been attributed to deficits in inhibitory processes that mediate both motor restraint and sensory filtering. However, behavioral studies have failed to show an association between tasks that measure these distinct types of inhibition. In the present study, we hypothesized that neural markers reflecting each type of inhibition may reveal a relationship across inhibitory domains in older adults. Electroencephalography (EEG) and behavioral measures were used to explore whether there was an across-participant correlation between sensory suppression and motor inhibition. Sixteen healthy older adult participants (65-80 years) engaged in two separate experimental paradigms: a selective attention, delayed-recognition task and a stop-signal task. Findings revealed that no significant relationship existed between neural markers of sensory suppression (P1 amplitude; N170 latency) and markers of motor inhibition (N2 and P3 amplitude and latency) in older adults. These distinct inhibitory domains are differentially impacted in normal aging, as evidenced by previous behavioral work and the current neural findings. Thus, a generalized inhibitory deficit may not be a common impairment in cognitive aging. Given that some theories of cognitive aging suggest age-related failure of inhibitory mechanisms may span different modalities, the present findings contribute to an alternative view in which age-related declines within each inhibitory modality are unrelated. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Olfactory short-term memory encoding and maintenance - an event-related potential study.
Lenk, Steffen; Bluschke, Annet; Beste, Christian; Iannilli, Emilia; Rößner, Veit; Hummel, Thomas; Bender, Stephan
2014-09-01
This study examined whether the memory encoding and short-term maintenance of olfactory stimuli are associated with neurophysiological activation patterns that parallel those described for sensory modalities such as vision and audition. We examined olfactory event-related potentials in an olfactory change detection task in twenty-four healthy adults and compared the measured activation to that found during passive olfactory stimulation. During the early olfactory post-processing phase, we found a sustained negativity over bilateral frontotemporal areas in the passive perception condition which was enhanced in the active memory task. There was no significant lateralization in either experimental condition. During the maintenance interval at the end of the delay period, we still found sustained activation over bilateral frontotemporal areas, which was more negative in trials with correct (as compared to incorrect) behavioural responses. This was complemented by a significantly stronger frontocentral activation overall. In summary, we were able to show that olfactory short-term memory involves a parallel sequence of activation to that found in other sensory modalities. In addition to olfactory-specific frontotemporal activations in the memory encoding phase, we found slow cortical potentials over frontocentral areas during the memory maintenance phase, indicating the activation of a supramodal memory maintenance system. These findings could represent the neurophysiological underpinning of the 'olfactory flacon', the olfactory counterpart to the visual sketchpad and phonological loop embedded in Baddeley's working memory model. Copyright © 2014 Elsevier Inc. All rights reserved.
Rothen, Nicolas; Meier, Beat
2010-04-01
In synaesthesia, the input of one sensory modality automatically triggers an additional experience, not normally triggered by the input of that modality. Therefore, compared to non-synaesthetes, additional experiences exist and these may be used as retrieval cues when memory is tested. Previous case studies have suggested that synaesthesia may yield even extraordinary memory abilities. However, group studies found either a task-specific memory advantage or no performance advantage at all. The aim of the present study was to test whether grapheme-colour synaesthesia gives rise to a general memory benefit using a standardised memory test (Wechsler Memory Scale). The synaesthetes showed a performance advantage in episodic memory tests, but not in short-term memory tests. However, performance was still within the ordinary range. The results support the hypothesis that synaesthesia provides for a richer world of experience and as a consequence additional retrieval cues may be available and beneficial but not to the point of extraordinary memory ability.
Sex differences in the ability to recognise non-verbal displays of emotion: a meta-analysis.
Thompson, Ashley E; Voyer, Daniel
2014-01-01
The present study aimed to quantify the magnitude of sex differences in humans' ability to accurately recognise non-verbal emotional displays. Studies of relevance were those that required explicit labelling of discrete emotions presented in the visual and/or auditory modality. A final set of 551 effect sizes from 215 samples was included in a multilevel meta-analysis. The results showed a small overall advantage in favour of females on emotion recognition tasks (d=0.19). However, the magnitude of that sex difference was moderated by several factors, namely specific emotion, emotion type (negative, positive), sex of the actor, sensory modality (visual, audio, audio-visual) and age of the participants. Method of presentation (computer, slides, print, etc.), type of measurement (response time, accuracy) and year of publication did not significantly contribute to variance in effect sizes. These findings are discussed in the context of social and biological explanations of sex differences in emotion recognition.
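The effect-size machinery behind a meta-analysis like the one above can be sketched briefly. The snippet shows a standard Cohen's d with pooled standard deviation and a simple inverse-variance (fixed-effect) pooling of per-study effects; the study itself fitted a multilevel model to handle dependent effect sizes, and all numbers below are invented for illustration.

```python
import math

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Cohen's d between two groups, using a pooled standard deviation."""
    pooled_sd = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2)
                          / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

def fixed_effect_mean(effects, variances):
    """Inverse-variance weighted mean effect size (fixed-effect model).

    More precise studies (smaller variance) receive larger weights.
    """
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical accuracy scores: female vs. male group in one study.
d = cohens_d(0.82, 0.78, 0.11, 0.12, 40, 40)
# Hypothetical per-study effects pooled across three studies.
pooled = fixed_effect_mean([0.25, 0.10, 0.30], [0.02, 0.01, 0.04])
print(d, pooled)
```

A multilevel meta-analysis generalizes this by adding random effects for samples and for multiple effect sizes nested within the same sample, which is why the paper reports 551 effects from 215 samples rather than one effect per study.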
The interactions of multisensory integration with endogenous and exogenous attention
Tang, Xiaoyu; Wu, Jinglong; Shen, Yong
2016-01-01
Stimuli from multiple sensory organs can be integrated into a coherent representation through multiple phases of multisensory processing; this phenomenon is called multisensory integration. Multisensory integration can interact with attention. Here, we propose a framework in which attention modulates multisensory processing in both endogenous (goal-driven) and exogenous (stimulus-driven) ways. Moreover, multisensory integration exerts not only bottom-up but also top-down control over attention. Specifically, we propose the following: (1) endogenous attentional selectivity acts on multiple levels of multisensory processing to determine the extent to which simultaneous stimuli from different modalities can be integrated; (2) integrated multisensory events exert top-down control on attentional capture via multisensory search templates that are stored in the brain; (3) integrated multisensory events can capture attention efficiently, even in quite complex circumstances, due to their increased salience compared to unimodal events and can thus improve search accuracy; and (4) within a multisensory object, endogenous attention can spread from one modality to another in an exogenous manner. PMID:26546734
Keil, Julian; Pomper, Ulrich; Feuerbach, Nele; Senkowski, Daniel
2017-03-01
Intersensory attention (IA) describes the process of directing attention to a specific modality. Temporal orienting (TO) characterizes directing attention to a specific moment in time. Previously, studies indicated that these two processes could have opposite effects on early evoked brain activity. The exact time-course and processing stages of both processes are still unknown. In this human electroencephalography study, we investigated the effects of IA and TO on visuo-tactile stimulus processing within one paradigm. IA was manipulated by presenting auditory cues to indicate whether participants should detect visual or tactile targets in visuo-tactile stimuli. TO was manipulated by presenting stimuli block-wise at fixed or variable inter-stimulus intervals. We observed that TO affects evoked activity to visuo-tactile stimuli prior to IA. Moreover, we found that TO reduces the amplitude of early evoked brain activity, whereas IA enhances it. Using beamformer source-localization, we observed that IA increases neural responses in sensory areas of the attended modality whereas TO reduces brain activity in widespread cortical areas. Based on these findings we derive an updated working model for the effects of temporal and intersensory attention on early evoked brain activity. Copyright © 2017 Elsevier Inc. All rights reserved.
Lee, Sang-Soo; Lee, Sung-Hyun; Han, Seol-Heui
2003-07-01
We describe terminal changes in a long-term follow-up of a 51-year-old man with sporadic hereditary sensory and autonomic neuropathy (HSAN). From the age of 15 years onwards, he suffered from multiple painless ulcers of his feet and fingers, necessitating amputation. Neurological studies revealed almost complete sensory loss affecting all modalities in the upper and lower limbs, minimal involvement of motor fibers, and areflexia. The neurophysiological abnormality was an absence of sensory action potentials with relatively normal motor nerve conduction velocities. Biopsy of the sural nerve showed almost total loss of myelinated fibers with a mild decrease in unmyelinated fibers. Despite the late onset of the disease, the progressive course, and the lancinating pain, the terminal features of this patient, which involved a selective loss of myelinated fibers and widespread sensory loss, seem to be symptomatic of HSAN II, the progressive form of autosomal recessive sensory neuropathy, and emphasize the clinical heterogeneity of HSAN.
Sensory feedback add-on for upper-limb prostheses.
Fallahian, Nader; Saeedi, Hassan; Mokhtarinia, Hamidreza; Tabatabai Ghomshe, Farhad
2017-06-01
Sensory feedback systems have been of great interest in upper-limb prosthetics. Despite tremendous research, there are no commercial modality-matched feedback systems. This article aims to introduce the first detachable feedback add-on option that can be attached to in-use prostheses. A sensory feedback system was tested on a below-elbow myoelectric prosthesis. The aim was to have the amputee grasp fragile objects without crushing them while other accidental feedback sources were blocked. A total of 8 successful trials (out of 10) showed that the sensory feedback system decreased the amputee's visual dependency by improving awareness of his prosthesis. The sensory feedback system can be used either post-fabrication (as a prosthetic add-on option) or para-fabrication (incorporated into the prosthetic design). The use of these direct feedback systems can be explored with a current prosthesis before ordering a new high-tech prosthesis. Clinical relevance: This technical note introduces the first attachable/detachable sensory feedback system that can simply be added to an in-use (myo)electric prosthesis, with no obligation to change the prosthesis design or components.
Thompson, Hannah E; Jefferies, Elizabeth
2013-08-01
Research suggests that semantic memory deficits can occur in at least three ways. Patients can (1) show amodal degradation of concepts within the semantic store itself, such as in semantic dementia (SD), (2) have difficulty in controlling activation within the semantic system and accessing appropriate knowledge in line with current goals or context, as in semantic aphasia (SA) and (3) experience a semantic deficit in only one modality following degraded input from sensory cortex. Patients with SA show deficits of semantic control and access across word and picture tasks, consistent with the view that their problems arise from impaired modality-general control processes. However, there are a few reports in the literature of patients with semantic access problems restricted to auditory-verbal materials, who show decreasing ability to retrieve concepts from words when they are presented repeatedly with closely related distractors. These patients challenge the notion that semantic control processes are modality-general and suggest instead a separation of 'access' to auditory-verbal and non-verbal semantic systems. We had the rare opportunity to study such a case in detail. Our aims were to examine the effect of manipulations of control demands in auditory-verbal semantic, non-verbal semantic and non-semantic tasks, allowing us to assess whether such cases always show semantic control/access impairments that follow a modality-specific pattern, or whether there are alternative explanations. 
Our findings revealed: (1) deficits on executive tasks, unrelated to semantic demands, which were more evident in the auditory modality than the visual modality; (2) deficits in executively-demanding semantic tasks which were accentuated in the auditory-verbal domain compared with the visual modality, but still present on non-verbal tasks, and (3) a coupling between comprehension and executive control requirements, in that mild impairment on single word comprehension was greatly increased on more demanding, associative judgements across modalities. This pattern of results suggests that mild executive-semantic impairment, paired with disrupted connectivity from auditory input, may give rise to semantic 'access' deficits affecting only the auditory modality. Copyright © 2013 Elsevier Ltd. All rights reserved.
Investigating Deviance Distraction and the Impact of the Modality of the To-Be-Ignored Stimuli.
Marsja, Erik; Neely, Gregory; Ljungberg, Jessica K
2018-03-01
It has been suggested that deviance distraction is caused by unexpected sensory events in the to-be-ignored stimuli violating the cognitive system's predictions of incoming stimuli. The majority of research has used methods in which the to-be-ignored expected (standard) and unexpected (deviant) stimuli are presented within the same modality. Less is known about the behavioral impact of deviance distraction when the to-be-ignored stimuli are presented in different modalities (e.g., standards and deviants presented in different modalities). In three experiments using cross-modal oddball tasks with mixed-modality to-be-ignored stimuli, we examined the distractive role of unexpected auditory deviants presented in a continuous stream of expected standard vibrations. The results showed that deviance distraction seems to depend upon the to-be-ignored stimuli being presented within the same modality, and that the omission of something expected (in this case, a standard vibration) may by itself be enough to capture attention and distract performance.
Vogel, Bastian D; Brück, Carolin; Jacob, Heike; Eberle, Mark; Wildgruber, Dirk
2016-07-07
Impaired interpretation of nonverbal emotional cues in patients with schizophrenia has been reported in several studies and a clinical relevance of these deficits for social functioning has been assumed. However, it is unclear to what extent the impairments depend on specific emotions or specific channels of nonverbal communication. Here, the effect of cue modality and emotional categories on accuracy of emotion recognition was evaluated in 21 patients with schizophrenia and compared to a healthy control group (n = 21). To this end, dynamic stimuli comprising speakers of both genders in three different sensory modalities (auditory, visual and audiovisual) and five emotional categories (happy, alluring, neutral, angry and disgusted) were used. Patients with schizophrenia were found to be impaired in emotion recognition in comparison to the control group across all stimuli. Considering specific emotions more severe deficits were revealed in the recognition of alluring stimuli and less severe deficits in the recognition of disgusted stimuli as compared to all other emotions. Regarding cue modality the extent of the impairment in emotional recognition did not significantly differ between auditory and visual cues across all emotional categories. However, patients with schizophrenia showed significantly more severe disturbances for vocal as compared to facial cues when sexual interest is expressed (alluring stimuli), whereas more severe disturbances for facial as compared to vocal cues were observed when happiness or anger is expressed. Our results confirmed that perceptual impairments can be observed for vocal as well as facial cues conveying various social and emotional connotations. The observed differences in severity of impairments with most severe deficits for alluring expressions might be related to specific difficulties in recognizing the complex social emotional information of interpersonal intentions as compared to "basic" emotional states. 
Therefore, future studies evaluating perception of nonverbal cues should consider a broader range of social and emotional signals beyond basic emotions including attitudes and interpersonal intentions. Identifying specific domains of social perception particularly prone for misunderstandings in patients with schizophrenia might allow for a refinement of interventions aiming at improving social functioning.
Implications of differences of echoic and iconic memory for the design of multimodal displays
NASA Astrophysics Data System (ADS)
Glaser, Daniel Shields
It has been well documented that dual-task performance is more accurate when each task is based on a different sensory modality. It is also well documented that the memory for each sense has an unequal duration, particularly visual (iconic) and auditory (echoic) sensory memory. In this dissertation I address whether differences in sensory memory duration (e.g. iconic vs. echoic) have implications for the design of a multimodal display. Since echoic memory persists for seconds, in contrast to iconic memory which persists only for milliseconds, one of my hypotheses was that in a visual-auditory dual task condition, performance would be better if the visual task were completed before the auditory task than vice versa. In Experiment 1, I investigated whether the ability to recall multi-modal stimuli is affected by recall order, with each mode being responded to separately. In Experiment 2, I investigated the effects of stimulus order and recall order on the ability to recall information from a multi-modal presentation. In Experiment 3, I investigated the effect of presentation order using a more realistic task. In Experiment 4, I investigated whether manipulating the presentation order of stimuli of different modalities improves humans' ability to combine the information from the two modalities in order to make decisions based on pre-learned rules. As hypothesized, accuracy was greater when visual stimuli were responded to first and auditory stimuli second. Also as hypothesized, performance was improved by not presenting both sequences at the same time, limiting the perceptual load. Contrary to my expectations, overall performance was better when a visual sequence was presented before the audio sequence. Though presenting a visual sequence prior to an auditory sequence lengthens the visual retention interval, it also provides time for visual information to be recoded to a more robust form without disruption.
Experiment 4 demonstrated that decision making requiring the integration of visual and auditory information is enhanced by reducing workload and promoting a strategic use of echoic memory. A framework for predicting Experiment 1-4 results is proposed and evaluated.
Auracher, Jan
2017-01-01
The concept of sound iconicity implies that phonemes are intrinsically associated with non-acoustic phenomena, such as emotional expression, object size or shape, or other perceptual features. In this respect, sound iconicity is related to other forms of cross-modal associations in which stimuli from different sensory modalities are associated with each other due to the implicitly perceived correspondence of their primal features. One prominent example is the association between vowels, categorized according to their place of articulation, and size, with back vowels being associated with bigness and front vowels with smallness. However, to date the relative influence of perceptual and conceptual cognitive processing on this association is not clear. To bridge this gap, three experiments were conducted in which associations between nonsense words and pictures of animals or emotional body postures were tested. In these experiments participants had to infer the relation between visual stimuli and the notion of size from the content of the pictures, while directly perceivable features did not support, or even contradicted, the predicted association. Results show that implicit associations between articulatory-acoustic characteristics of phonemes and pictures are mainly influenced by semantic features, i.e., the content of a picture, whereas the influence of perceivable features, i.e., size or shape, is overridden. This suggests that abstract semantic concepts can function as an interface between different sensory modalities, facilitating cross-modal associations.
Experience with a talker can transfer across modalities to facilitate lipreading.
Sanchez, Kauyumari; Dias, James W; Rosenblum, Lawrence D
2013-10-01
Rosenblum, Miller, and Sanchez (Psychological Science, 18, 392-396, 2007) found that subjects first trained to lip-read a particular talker were then better able to perceive the auditory speech of that same talker, as compared with that of a novel talker. This suggests that the talker experience a perceiver gains in one sensory modality can be transferred to another modality to make that speech easier to perceive. An experiment was conducted to examine whether this cross-sensory transfer of talker experience could occur (1) from auditory to lip-read speech, (2) with subjects not screened for adequate lipreading skill, (3) when both a familiar and an unfamiliar talker are presented during lipreading, and (4) for both old (presentation set) and new words. Subjects were first asked to identify a set of words from a talker. They were then asked to perform a lipreading task from two faces, one of which was of the same talker they heard in the first phase of the experiment. Results revealed that subjects who lip-read from the same talker they had heard performed better than those who lip-read a different talker, regardless of whether the words were old or new. These results add further evidence that learning of amodal talker information can facilitate speech perception across modalities and also suggest that this information is not restricted to previously heard words.
The sense of agency is action-effect causality perception based on cross-modal grouping.
Kawabe, Takahiro; Roseboom, Warrick; Nishida, Shin'ya
2013-07-22
Sense of agency, the experience of controlling external events through one's actions, stems from contiguity between action- and effect-related signals. Here we show that human observers link their action- and effect-related signals using a computational principle common to cross-modal sensory grouping. We first report that the detection of a delay between tactile and visual stimuli is enhanced when both stimuli are synchronized with separate auditory stimuli (experiment 1). This occurs because the synchronized auditory stimuli hinder the potential grouping between tactile and visual stimuli. We subsequently demonstrate an analogous effect on observers' key press as an action and a sensory event. This change is associated with a modulation in sense of agency; namely, sense of agency, as evaluated by apparent compressions of action-effect intervals (intentional binding) or subjective causality ratings, is impaired when both participant's action and its putative visual effect events are synchronized with auditory tones (experiments 2 and 3). Moreover, a similar role of action-effect grouping in determining sense of agency is demonstrated when the additional signal is presented in the modality identical to an effect event (experiment 4). These results are consistent with the view that sense of agency is the result of general processes of causal perception and that cross-modal grouping plays a central role in these processes.
Can quantitative sensory testing predict responses to analgesic treatment?
Grosen, K; Fischer, I W D; Olesen, A E; Drewes, A M
2013-10-01
The role of quantitative sensory testing (QST) in prediction of analgesic effect in humans is scarcely investigated. This updated review assesses its effectiveness in predicting analgesic effects in healthy volunteers, surgical patients and patients with chronic pain. A systematic review of English-written, peer-reviewed articles was conducted using PubMed and Embase (1980-2013). Additional studies were identified by chain searching. Search terms included 'quantitative sensory testing', 'sensory testing' and 'analgesics'. Studies on the relationship between QST and response to analgesic treatment in human adults were included. Appraisal of the methodological quality of the included studies was based on evaluative criteria for prognostic studies. Fourteen studies (including 720 individuals) met the inclusion criteria. Significant correlations were observed between responses to analgesics and several QST parameters, including (1) heat pain threshold in experimental human pain, (2) electrical and heat pain thresholds, pressure pain tolerance and suprathreshold heat pain in surgical patients, and (3) electrical and heat pain threshold and conditioned pain modulation in patients with chronic pain. Heterogeneity among studies was observed, especially with regard to application of QST and type and use of analgesics. Although promising, the current evidence is not sufficiently robust to recommend the use of any specific QST parameter in predicting analgesic response. Future studies should focus on a range of different experimental pain modalities rather than a single static pain stimulation paradigm. © 2013 European Federation of International Association for the Study of Pain Chapters.
Teratogenic Effects of Pyridoxine on the Spinal Cord and Dorsal Root Ganglia of Embryonic Chickens
Sharp, Andrew A.; Fedorovich, Yuri
2015-01-01
Our understanding of the role of somatosensory feedback in regulating motility during chicken embryogenesis, and fetal development in general, has been hampered by the lack of an approach to selectively alter specific sensory modalities. In adult mammals, pyridoxine overdose has been shown to cause a peripheral sensory neuropathy characterized by a loss of both muscle and cutaneous afferents, but predominated by a loss of proprioception. We have begun to explore the sensitivity of the nervous system in chicken embryos to the application of pyridoxine on embryonic days 7 and 8, after sensory neurons in the lumbosacral region become post-mitotic. Upon examination of the spinal cord, DRG and peripheral nerves, we find that pyridoxine causes a loss of TrkC-positive neurons, a decrease in the diameter of the muscle-innervating nerve tibialis, and a reduction in the number of large-diameter axons in this nerve. However, we found no change in the number of Substance P- or CGRP-positive neurons, the number of motor neurons, or the diameter or axonal composition of the femoral cutaneous nerve. Therefore, pyridoxine causes a peripheral sensory neuropathy in embryonic chickens largely consistent with its effects in adult mammals. However, the lesion may be more restricted to proprioception in the chicken embryo. Therefore, a pyridoxine lesion induced during embryogenesis in the chicken embryo can be used to assess how the loss of sensation, largely proprioception, alters spontaneous embryonic motility and subsequent motor development. PMID:25592428
Touch to see: neuropsychological evidence of a sensory mirror system for touch.
Bolognini, Nadia; Olgiati, Elena; Xaiz, Annalisa; Posteraro, Lucio; Ferraro, Francesco; Maravita, Angelo
2012-09-01
The observation of touch can be grounded in the activation of brain areas underpinning direct tactile experience, namely the somatosensory cortices. What is the behavioral impact of such mirror sensory activity on visual perception? To address this issue, we investigated the causal interplay between observed and felt touch in right brain-damaged patients, as a function of their underlying damaged visual and/or tactile modalities. Patients and healthy controls underwent a detection task comprising visual stimuli either depicting touches or lacking any tactile component. Touch and No-touch stimuli were presented in egocentric or allocentric perspectives. Seeing touches, regardless of the viewing perspective, differently affects visual perception depending on which sensory modality is damaged: In patients with a selective visual deficit, but without any tactile defect, the sight of touch improves the visual impairment; this effect is associated with a lesion to the supramarginal gyrus. In patients with a tactile deficit, but intact visual perception, the sight of touch disrupts visual processing, inducing a visual extinction-like phenomenon. This disruptive effect is associated with damage to the postcentral gyrus. Hence, damage to the somatosensory system can lead to dysfunctional visual processing, and intact somatosensory processing can aid visual perception.
Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli
Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.
2009-01-01
The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778
Multisensory integration in Lepidoptera: Insights into flower-visitor interactions.
Kinoshita, Michiyo; Stewart, Finlay J; Ômura, Hisashi
2017-04-01
As most work on flower foraging focuses on bees, studying Lepidoptera can offer fresh perspectives on how sensory capabilities shape the interaction between flowers and insects. Through a combination of innate preferences and learning, many Lepidoptera persistently visit particular flower species. Butterflies tend to rely on their highly developed sense of colour to locate rewarding flowers, while moths have evolved sophisticated olfactory systems towards the same end. However, these modalities can interact in complex ways; for instance, butterflies' colour preference can shift depending on olfactory context. The mechanisms by which such cross-modal interaction occurs are poorly understood, but the mushroom bodies appear to play a central role. Because of the diversity seen within Lepidoptera in terms of their sensory capabilities and the nature of their relationships with flowers, they represent a fruitful avenue for comparative studies to shed light on the co-evolution of flowers and flower-visiting insects. © 2017 WILEY Periodicals, Inc.
Stepwise Connectivity of the Modal Cortex Reveals the Multimodal Organization of the Human Brain
Sepulcre, Jorge; Sabuncu, Mert R.; Yeo, Thomas B.; Liu, Hesheng; Johnson, Keith A.
2012-01-01
How human beings integrate information from external sources and internal cognition to produce a coherent experience is still not well understood. During the past decades, anatomical, neurophysiological and neuroimaging research on multimodal integration has stood out in the effort to understand the perceptual binding properties of the brain. Areas in the human lateral occipito-temporal, prefrontal and posterior parietal cortices have been associated with sensory multimodal processing. Although this rather patchy organization of brain regions gives us a glimpse of perceptual convergence, the articulation of the flow of information from modality-related to the more parallel cognitive processing systems remains elusive. Using a method called Stepwise Functional Connectivity analysis, the present study analyzes the functional connectome and transitions from primary sensory cortices to higher-order brain systems. We identify the large-scale multimodal integration network and essential connectivity axes for perceptual integration in the human brain. PMID:22855814
Asymmetries of the human social brain in the visual, auditory and chemical modalities.
Brancucci, Alfredo; Lucci, Giuliana; Mazzatenta, Andrea; Tommasi, Luca
2009-04-12
Structural and functional asymmetries are present in many regions of the human brain responsible for motor control, sensory and cognitive functions and communication. Here, we focus on hemispheric asymmetries underlying the domain of social perception, broadly conceived as the analysis of information about other individuals based on acoustic, visual and chemical signals. By means of these cues the brain establishes the border between 'self' and 'other', and interprets the surrounding social world in terms of the physical and behavioural characteristics of conspecifics essential for impression formation and for creating bonds and relationships. We show that, considered from the standpoint of single- and multi-modal sensory analysis, the neural substrates of the perception of voices, faces, gestures, smells and pheromones, as evidenced by modern neuroimaging techniques, are characterized by a general pattern of right-hemispheric functional asymmetry that might benefit from other aspects of hemispheric lateralization rather than constituting a true specialization for social information.
Perceptual congruency of audio-visual speech affects ventriloquism with bilateral visual stimuli.
Kanaya, Shoko; Yokosawa, Kazuhiko
2011-02-01
Many studies on multisensory processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. However, these results cannot necessarily be applied to explain our perceptual behavior in natural scenes where various signals exist within one sensory modality. We investigated the role of audio-visual syllable congruency on participants' auditory localization bias or the ventriloquism effect using spoken utterances and two videos of a talking face. Salience of facial movements was also manipulated. Results indicated that more salient visual utterances attracted participants' auditory localization. Congruent pairing of audio-visual utterances elicited greater localization bias than incongruent pairing, while previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference on auditory localization. Multisensory performance appears more flexible and adaptive in this complex environment than in previous studies.
Šabanović, Selma; Bennett, Casey C; Chang, Wan-Ling; Huber, Lesa
2013-06-01
We evaluated the seal-like robot PARO in the context of multi-sensory behavioral therapy in a local nursing home. Participants were 10 elderly nursing home residents with varying levels of dementia. We report three principal findings from our observations of interactions between the residents, PARO, and a therapist during seven weekly therapy sessions. Firstly, we show PARO provides indirect benefits for users by increasing their activity in particular modalities of social interaction, including visual, verbal, and physical interaction, which vary between primary and non-primary interactors. Secondly, PARO's positive effects on older adults' activity levels show steady growth over the duration of our study, suggesting they are not due to short-term "novelty effects." Finally, we show a variety of ways in which individual participants interacted with PARO and relate this to the "interpretive flexibility" of its design.
van Giesen, Lena; Garrity, Paul A
2017-01-01
The ionotropic receptors (IRs) are a branch of the ionotropic glutamate receptor family and serve as important mediators of sensory transduction in invertebrates. Recent work shows that, though initially studied as olfactory receptors, the IRs also mediate the detection of taste, temperature, and humidity. Here, we summarize recent insights into IR evolution and its potential ecological significance as well as recent advances in our understanding of how IRs contribute to diverse sensory modalities.
Working memory resources are shared across sensory modalities.
Salmela, V R; Moisala, M; Alho, K
2014-10-01
A common assumption in the working memory literature is that the visual and auditory modalities have separate and independent memory stores. Recent evidence on visual working memory has suggested that resources are shared between representations, and that the precision of representations sets the limit for memory performance. We tested whether memory resources are also shared across sensory modalities. Memory precision for two visual (spatial frequency and orientation) and two auditory (pitch and tone duration) features was measured separately for each feature and for all possible feature combinations. Thus, only the memory load was varied, from one to four features, while keeping the stimuli similar. In Experiment 1, two gratings and two tones, both containing two varying features, were presented simultaneously. In Experiment 2, two gratings and two tones, each containing only one varying feature, were presented sequentially. The memory precision (delayed discrimination threshold) for a single feature was close to the perceptual threshold. However, as the number of features to be remembered was increased, the discrimination thresholds increased more than twofold. Importantly, the decrease in memory precision did not depend on the modality of the other feature(s), or on whether the features were in the same or in separate objects. Hence, simultaneously storing one visual and one auditory feature had an effect on memory precision equal to those of simultaneously storing two visual or two auditory features. The results show that working memory is limited by the precision of the stored representations, and that working memory can be described as a resource pool that is shared across modalities.
Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B
2012-07-16
Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another are less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.
The Mere Exposure Effect in the Domain of Haptics
Jakesch, Martina; Carbon, Claus-Christian
2012-01-01
Background: Zajonc showed that the attitude towards stimuli that one had been previously exposed to is more positive than towards novel stimuli. This mere exposure effect (MEE) has been tested extensively using various visual stimuli. Research on the MEE is sparse, however, for other sensory modalities. Methodology/Principal Findings: We used objects of two material categories (stone and wood) and two complexity levels (simple and complex) to test the influence of exposure frequency (F0 = novel stimuli, F2 = stimuli exposed twice, F10 = stimuli exposed ten times) under two sensory modalities (haptics only and haptics & vision). Effects of exposure frequency were found for highly complex stimuli, with liking increasing significantly from F0 to F2 and F10, but only for the stone category. Analysis of "Need for Touch" data showed the MEE in participants with high need for touch, which suggests different sensitivity or saturation levels of the MEE. Conclusions/Significance: These different sensitivity or saturation levels might also reflect the effects of expertise on the haptic evaluation of objects. It seems that haptic and cross-modal MEEs are influenced by factors similar to those in the visual domain, indicating a common cognitive basis. PMID:22347451
Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers.
Hoppe, Christian; Splittstößer, Christoph; Fliessbach, Klaus; Trautner, Peter; Elger, Christian E; Weber, Bernd
2014-11-01
In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences were comprised of six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite identity of applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas which exceeded auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. 
The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information and sensory brain activation rather mirrored expectation than stimulation. Silent music reading probably relies on these basic neurocognitive mechanisms. Copyright © 2014 Elsevier Inc. All rights reserved.
The evaluation of sources of knowledge underlying different conceptual categories.
Gainotti, Guido; Spinelli, Pietro; Scaricamazza, Eugenia; Marra, Camillo
2013-01-01
According to the "embodied cognition" theory and the "sensory-motor model of semantic knowledge": (a) concepts are represented in the brain in the same format in which they are constructed by the sensory-motor system and (b) various conceptual categories differ according to the weight of different kinds of information in their representation. In this study, we tried to check the second assumption by asking normal elderly subjects to subjectively evaluate the role of various perceptual, motor and language-mediated sources of knowledge in the construction of different semantic categories. Our first aim was to rate the influence of different sources of knowledge in the representation of animals, plant life and artifact categories, rather than in living and non-living beings, as many previous studies on this subject have done. We also tried to check the influence of age and stimulus modality on these evaluations of the "sources of knowledge" underlying different conceptual categories. The influence of age was checked by comparing results obtained in our group of elderly subjects with those obtained in a previous study, conducted with a similar methodology on a sample of young students. And the influence of stimulus modality was assessed by presenting the stimuli in the verbal modality to 50 subjects and in the pictorial modality to 50 other subjects. The distinction between "animals" and "plant life" in the "living" categories was confirmed by analyzing their prevalent sources of knowledge and by a cluster analysis, which allowed us to distinguish "plant life" items from animals. Furthermore, results of the study showed: (a) that our subjects considered the visual modality as the main source of knowledge for all categories taken into account; and (b) that in biological categories the next most important source of information was represented by other perceptual modalities, whereas in artifacts it was represented by the actions performed with them. 
Finally, age and stimulus modality did not significantly influence judgment of relevance of the sources of knowledge involved in the construction of different conceptual categories.
[Ventriloquism and audio-visual integration of voice and face].
Yokosawa, Kazuhiko; Kanaya, Shoko
2012-07-01
Presenting synchronous auditory and visual stimuli in separate locations creates the illusion that the sound originates from the direction of the visual stimulus. This auditory localization bias, called the ventriloquism effect, has revealed factors affecting the perceptual integration of audio-visual stimuli. However, many studies of audio-visual processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. These results cannot necessarily explain our perceptual behavior in natural scenes, where various signals exist within a single sensory modality. In the present study we report the contribution of a cognitive factor, the audio-visual congruency of speech, which has often been underestimated in previous ventriloquism research. We investigated the contribution of speech congruency to the ventriloquism effect using a spoken utterance and two videos of a talking face, and also manipulated the salience of facial movements. When bilateral visual stimuli were presented in synchrony with a single voice, cross-modal speech congruency was found to have a significant impact on the ventriloquism effect, and more salient visual utterances attracted participants' auditory localization. The congruent pairing of audio-visual utterances elicited greater localization bias than did incongruent pairing, whereas previous studies have reported little dependency on the realism of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference with auditory localization. This suggests a greater flexibility in responding to multi-sensory environments than has been previously considered.
Honeine, Jean-Louis; Crisafulli, Oscar; Sozzi, Stefania
2015-01-01
We investigated the integration time of haptic and visual input and their interaction during stance stabilization. Eleven subjects performed four tandem-stance conditions (60 trials each). Vision, touch, and both vision and touch were added and withdrawn. Furthermore, vision was replaced with touch and vice versa. Body sway, tibialis anterior, and peroneus longus activity were measured. Following addition or withdrawal of vision or touch, an integration time period elapsed before the earliest changes in sway were observed. Thereafter, sway varied exponentially to a new steady-state while reweighting occurred. Latencies of sway changes on sensory addition ranged from 0.6 to 1.5 s across subjects, consistently longer for touch than vision, and were regularly preceded by changes in muscle activity. Addition of vision and touch simultaneously shortened the latencies with respect to vision or touch separately, suggesting cooperation between sensory modalities. Latencies following withdrawal of vision or touch or both simultaneously were shorter than following addition. When vision was replaced with touch or vice versa, adding one modality did not interfere with the effect of withdrawal of the other, suggesting that integration of withdrawal and addition were performed in parallel. The time course of the reweighting process to reach the new steady-state was also shorter on withdrawal than addition. The effects of different sensory inputs on posture stabilization illustrate the operation of a time-consuming, possibly supraspinal process that integrates and fuses modalities for accurate balance control. This study also shows the facilitatory interaction of visual and haptic inputs in integration and reweighting of stance-stabilizing inputs. PMID:26334013
Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception
Bejjanki, Vikranth Rao; Clayards, Meghan; Knill, David C.; Aslin, Richard N.
2011-01-01
Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues, during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one and participants' performance is consistent with an optimal model in which environmental, within category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks. PMID:21637344
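The normative continuous-dimension model that this abstract takes as its starting point, in which each cue is weighted in proportion to its reliability (inverse variance), can be sketched as follows; the function name and the numbers in the test are illustrative, not from the study:

```python
import math

def fuse_estimates(x_a, sigma_a, x_v, sigma_v):
    """Reliability-weighted (inverse-variance) combination of an
    auditory estimate x_a and a visual estimate x_v.

    Each cue's weight is proportional to its reliability 1/sigma^2;
    the fused estimate has lower variance than either cue alone."""
    r_a = 1.0 / sigma_a ** 2
    r_v = 1.0 / sigma_v ** 2
    w_a = r_a / (r_a + r_v)
    x_hat = w_a * x_a + (1.0 - w_a) * x_v
    sigma_hat = math.sqrt(1.0 / (r_a + r_v))  # fused standard deviation
    return x_hat, sigma_hat
```

The abstract's point is that for categorical dimensions such as phonemes this model is incomplete: the effective variance entering each weight should also include the environmental, within-category variability, not just the sensory noise estimated from single-cue performance.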
Multisensory Integration in Non-Human Primates during a Sensory-Motor Task
Lanz, Florian; Moret, Véronique; Rouiller, Eric Michel; Loquet, Gérard
2013-01-01
Daily our central nervous system receives inputs via several sensory modalities, processes them and integrates the information in order to produce a suitable behavior. Remarkably, such multisensory integration brings all information into a unified percept. An approach to start investigating this property is to show that perception is better and faster when multimodal stimuli are used as compared to unimodal stimuli. This forms the first part of the present study, conducted in a non-human primate model (n = 2) engaged in a detection sensory-motor task where visual and auditory stimuli were displayed individually or simultaneously. The measured parameters were the reaction time (RT) between stimulus and onset of arm movement, the percentages of successes and errors, as well as the evolution of these parameters with training. As expected, RTs were shorter when the subjects were exposed to combined stimuli. The gains for both subjects were around 20 and 40 ms, as compared with the auditory and visual stimulus alone, respectively. Moreover, the number of correct responses increased in response to bimodal stimuli. We interpreted this multisensory advantage through the redundant-signal effect, which decreases perceptual ambiguity, increases the speed of stimulus detection, and improves performance accuracy. The second part of the study presents single-unit recordings derived from the premotor cortex (PM) of the same subjects during the sensory-motor task. Response patterns to sensory/multisensory stimulation are documented and the proportions of specific response types are reported. Characterization of bimodal neurons indicates a mechanism of audio-visual integration, possibly through a decrease of inhibition. Nevertheless, the neural processing leading to the faster motor response from PM, a polysensory association cortical area, remains unclear. PMID:24319421
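The redundant-signal effect invoked above has a purely statistical baseline, the race ("statistical facilitation") model: on bimodal trials the response is triggered by whichever unimodal detection process finishes first. A toy simulation, with invented RT distributions chosen only for illustration, makes the predicted bimodal gain concrete:

```python
import random

def simulate_detection(n_trials=20000, seed=7):
    """Race model of the redundant-signal effect: bimodal RT is the
    minimum of two independent unimodal detection times.

    The unimodal reaction-time distributions (in ms) are hypothetical
    and serve only to illustrate the statistical facilitation."""
    rng = random.Random(seed)
    audio = [rng.gauss(260, 40) for _ in range(n_trials)]
    visual = [rng.gauss(300, 40) for _ in range(n_trials)]
    bimodal = [min(a, v) for a, v in zip(audio, visual)]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(audio), mean(visual), mean(bimodal)
```

Because the minimum of two overlapping distributions is stochastically faster than either alone, a bimodal RT gain arises even without any neural interaction; single-unit recordings of the kind reported here are one way to ask whether integration goes beyond this baseline.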
Mroczko-Wąsowicz, Aleksandra; Werning, Markus
2012-01-01
Synesthesia is traditionally regarded as a phenomenon in which an additional non-standard phenomenal experience occurs consistently in response to ordinary stimulation applied to the same or another modality. Recent studies suggest an important role of semantic representations in the induction of synesthesia. In the present proposal we try to link the empirically grounded theory of sensory-motor contingency and mirror system based embodied simulation/emulation to newly discovered cases of swimming style-color synesthesia. In the latter color experiences are evoked only by showing the synesthetes a picture of a swimming person or asking them to think about a given swimming style. Neural mechanisms of mirror systems seem to be involved here. It has been shown that for mirror-sensory synesthesia, such as mirror-touch or mirror-pain synesthesia (when visually presented tactile or noxious stimulation of others results in the projection of the tactile or pain experience onto oneself), concurrent experiences are caused by overactivity in the mirror neuron system responding to the specific observation. The comparison of different forms of synesthesia has the potential of challenging conventional thinking on this phenomenon and providing a more general, sensory-motor account of synesthesia encompassing cases driven by semantic or emulational rather than pure sensory or motor representations. Such an interpretation could include top-down associations, questioning the explanation in terms of hard-wired structural connectivity. In the paper the hypothesis is developed that the wide-ranging phenomenon of synesthesia might result from a process of hyperbinding between “too many” semantic attribute domains. This hypothesis is supplemented by some suggestions for an underlying neural mechanism. PMID:22936919
Modal-Power-Based Haptic Motion Recognition
NASA Astrophysics Data System (ADS)
Kasahara, Yusuke; Shimono, Tomoyuki; Kuwahara, Hiroaki; Sato, Masataka; Ohnishi, Kouhei
Motion recognition based on sensory information is important when robots provide assistance to humans. Several studies have addressed motion recognition based on image information; however, human motions that involve contact with an object cannot be evaluated precisely from images alone, because force information is essential for describing contact motion. In this paper, modal-power-based haptic motion recognition is proposed; modal power captures information on both position and force and is treated as one of the defining features of human motion. A motion recognition algorithm based on linear discriminant analysis is proposed to distinguish between similar motions. Haptic information is extracted using a bilateral master-slave system, and the observed motion is then decomposed into primitive functions in a modal space. The experimental results show the effectiveness of the proposed method.
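A minimal two-class Fisher linear discriminant of the kind the recognition algorithm builds on can be sketched as follows; the 2-D feature vectors stand in for modal-power features, and all data and names are illustrative (the paper's actual feature extraction and class structure are not reproduced here):

```python
def fisher_lda_direction(class_a, class_b):
    """Two-class Fisher LDA on 2-D features: w = Sw^-1 (mu_a - mu_b),
    where Sw is the pooled within-class scatter matrix."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]

    def scatter(pts, mu):
        sxx = sum((p[0] - mu[0]) ** 2 for p in pts)
        syy = sum((p[1] - mu[1]) ** 2 for p in pts)
        sxy = sum((p[0] - mu[0]) * (p[1] - mu[1]) for p in pts)
        return sxx, sxy, syy

    mu_a, mu_b = mean(class_a), mean(class_b)
    axx, axy, ayy = scatter(class_a, mu_a)
    bxx, bxy, byy = scatter(class_b, mu_b)
    sxx, sxy, syy = axx + bxx, axy + bxy, ayy + byy  # within-class scatter
    det = sxx * syy - sxy * sxy
    dx, dy = mu_a[0] - mu_b[0], mu_a[1] - mu_b[1]
    # Apply the analytic 2x2 inverse of Sw to the mean difference.
    return ((syy * dx - sxy * dy) / det, (-sxy * dx + sxx * dy) / det)

def classify(x, w, threshold):
    """Label a sample by projecting onto the discriminant direction."""
    return 'A' if w[0] * x[0] + w[1] * x[1] > threshold else 'B'
```

The threshold is typically placed midway between the projected class means; projecting onto a single discriminant direction is what lets similar motions be separated by a one-dimensional decision rule.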
Evidence for auditory-visual processing specific to biological motion.
Wuerger, Sophie M; Crocker-Buque, Alexander; Meyer, Georg F
2012-01-01
Biological motion is usually associated with highly correlated sensory signals from more than one modality: an approaching human walker will not only have a visual representation, namely an increase in the retinal size of the walker's image, but also a synchronous auditory signal since the walker's footsteps will grow louder. We investigated whether the multisensorial processing of biological motion is subject to different constraints than ecologically invalid motion. Observers were presented with a visual point-light walker and/or synchronised auditory footsteps; the walker was either approaching the observer (looming motion) or walking away (receding motion). A scrambled point-light walker served as a control. Observers were asked to detect the walker's motion as quickly and as accurately as possible. In Experiment 1 we tested whether the reaction time advantage due to redundant information in the auditory and visual modality is specific for biological motion. We found no evidence for such an effect: the reaction time reduction was accounted for by statistical facilitation for both biological and scrambled motion. In Experiment 2, we dissociated the auditory and visual information and tested whether inconsistent motion directions across the auditory and visual modality yield longer reaction times in comparison to consistent motion directions. Here we find an effect specific to biological motion: motion incongruency leads to longer reaction times only when the visual walker is intact and recognisable as a human figure. If the figure of the walker is abolished by scrambling, motion incongruency has no effect on the speed of the observers' judgments. In conjunction with Experiment 1 this suggests that conflicting auditory-visual motion information of an intact human walker leads to interference and thereby delaying the response.
The neuroecology of cartilaginous fishes: sensory strategies for survival.
Collin, Shaun P
2012-01-01
As apex predators, chondrichthyans, or cartilaginous fishes, hold an important position within a range of aquatic ecosystems and influence the balance between species' abundance and biodiversity. Having been in existence for over 400 million years and representing the earliest stages of the evolution of jawed vertebrates, this group also covers a diverse range of eco-morphotypes, occupying both marine and freshwater habitats. The class Chondrichthyes is divided into two subclasses: the Elasmobranchii (sharks, skates, and rays) and the Holocephali (elephant sharks and chimaeras). However, many of their life history traits, such as low fecundity, the production of small numbers of highly precocious young, slow growth rates, and late maturity, make them highly susceptible to human exploitation. To mitigate the negative effects of human impacts, it is important that we understand the sensory strategies that elasmobranchs use for navigating within their environment, forming reproductive aggregations, feeding, and even communicating. One approach to investigate the sensory bases of their behavior is to examine the peripheral sense organs mediating vision, olfaction, gustation, lateral line, electroreception, and audition in a large range of species in order to identify specific adaptations, the range of sensitivity thresholds, and the compromise between sensory spatial resolution and sensitivity. In addition, we can quantitatively assess the convergence of sensory input to the central nervous system and the relative importance of different sensory modalities. 
Using a comparative approach and often a combination of anatomical, electrophysiological, and molecular techniques, significant variation has been identified in the spatial and chromatic sampling of the photoreceptors in the eye, the surface area and the number of olfactory lamellae within the nasal cavity, the level of gustatory sampling within the oral cavity, the type and innervation of neuromasts of the lateral line system, the distribution of electroreceptive pores over the head, and the morphology of the inner ear. These results are presented in the context of predictions of sensory capabilities for species living in a range of ecological niches, what further research is needed, and how this sensory input may be a predictor of behavior. Copyright © 2012 S. Karger AG, Basel.
Bainbridge, Chance; Rodriguez, Anjelica; Schuler, Andrew; Cisneros, Michael; Vidal-Gadea, Andrés G
2016-10-01
The magnetic field of the earth provides many organisms with sufficient information to successfully navigate through their environments. While evidence suggests the widespread use of this sensory modality across many taxa, it remains understudied. We recently showed that the nematode C. elegans orients to earth-strength magnetic fields using the first described pair of magnetosensory neurons, the AFDs. The AFD cells are a pair of ciliated sensory neurons crowned by fifty villi and known to be implicated in temperature sensation. We investigated the importance of these subcellular structures for magnetic orientation and show that ciliary integrity and villi number are essential: mutants with impaired AFD cilia or villi structure failed to orient to magnetic fields. Similarly, C. elegans larvae possessing immature AFD neurons with fewer villi were unable to orient to magnetic fields, although larvae of every stage retained the ability to orient to thermal gradients. To our knowledge, this is the first behavioral separation of magnetic and thermal orientation in C. elegans. We conclude that magnetic orientation relies on the function of both cilia and villi in the AFD neurons. The involvement of villi in the transduction of other vectorial stimuli further supports the villi of the AFD neurons as the likely site of magnetic field transduction. The genetic and behavioral tractability of C. elegans makes it a promising system for uncovering potentially conserved molecular mechanisms by which animals across taxa detect and orient to magnetic fields. Copyright © 2016 Elsevier Ltd. All rights reserved.
Intermodal Attention Shifts in Multimodal Working Memory.
Katus, Tobias; Grubert, Anna; Eimer, Martin
2017-04-01
Attention maintains task-relevant information in working memory (WM) in an active state. We investigated whether the attention-based maintenance of stimulus representations that were encoded through different modalities is flexibly controlled by top-down mechanisms that depend on behavioral goals. Distinct components of the ERP reflect the maintenance of tactile and visual information in WM. We concurrently measured tactile (tCDA) and visual contralateral delay activity (CDA) to track the attentional activation of tactile and visual information during multimodal WM. Participants simultaneously received tactile and visual sample stimuli on the left and right sides and memorized all stimuli on one task-relevant side. After 500 msec, an auditory retrocue indicated whether the sample set's tactile or visual content had to be compared with a subsequent test stimulus set. tCDA and CDA components that emerged simultaneously during the encoding phase were consistently reduced after retrocues that marked the corresponding (tactile or visual) modality as task-irrelevant. The absolute size of cue-dependent modulations was similar for the tCDA/CDA components and did not depend on the number of tactile/visual stimuli that were initially encoded into WM. Our results suggest that modality-specific maintenance processes in sensory brain regions are flexibly modulated by top-down influences that optimize multimodal WM representations for behavioral goals.
The role of visual deprivation and experience on the performance of sensory substitution devices.
Stronks, H Christiaan; Nau, Amy C; Ibbotson, Michael R; Barnes, Nick
2015-10-22
It is commonly accepted that the blind can partially compensate for their loss of vision by developing enhanced abilities with their remaining senses. This visual compensation may be related to the fact that blind people rely on their other senses in everyday life. Many studies have indeed shown that experience plays an important role in visual compensation. Numerous neuroimaging studies have shown that the visual cortices of the blind are recruited by other functional brain areas and can become responsive to tactile or auditory input instead. These cross-modal plastic changes are more pronounced in the early blind compared to late blind individuals. The functional consequences of cross-modal plasticity on visual compensation in the blind are debated, as are the influences of various etiologies of vision loss (i.e., blindness acquired early or late in life). Distinguishing between the influences of experience and visual deprivation on compensation is especially relevant for rehabilitation of the blind with sensory substitution devices. The BrainPort artificial vision device and The vOICe are assistive devices for the blind that redirect visual information to another intact sensory system. Establishing how experience and different etiologies of vision loss affect the performance of these devices may help to improve existing rehabilitation strategies, formulate effective selection criteria and develop prognostic measures. In this review we will discuss studies that investigated the influence of training and visual deprivation on the performance of various sensory substitution approaches. Copyright © 2015 Elsevier B.V. All rights reserved.
Neural Signature of Value-Based Sensorimotor Prioritization in Humans.
Blangero, Annabelle; Kelly, Simon P
2017-11-01
In situations in which impending sensory events demand fast action choices, we must be ready to prioritize higher-value courses of action to avoid missed opportunities. When such a situation first presents itself, stimulus-action contingencies and their relative value must be encoded to establish a value-biased state of preparation for an impending sensorimotor decision. Here, we sought to identify neurophysiological signatures of such processes in the human brain (both female and male). We devised a task requiring fast action choices based on the discrimination of a simple visual cue in which the differently valued sensory alternatives were presented 750-800 ms before as peripheral "targets" that specified the stimulus-action mapping for the upcoming decision. In response to the targets, we identified a discrete, transient, spatially selective signal in the event-related potential (ERP), which scaled with relative value and strongly predicted the degree of behavioral bias in the upcoming decision both across and within subjects. This signal is not compatible with any hitherto known ERP signature of spatial selection and also bears novel distinctions with respect to characterizations of value-sensitive, spatially selective activity found in sensorimotor areas of nonhuman primates. Specifically, a series of follow-up experiments revealed that the signal was reliably invoked regardless of response laterality, response modality, sensory feature, and reward valence. It was absent, however, when the response deadline was relaxed and the strategic need for biasing removed. Therefore, more than passively representing value or salience, the signal appears to play a versatile and active role in adaptive sensorimotor prioritization. SIGNIFICANCE STATEMENT In many situations such as fast-moving sports, we must be ready to act fast in response to sensory events and, in our preparation, prioritize courses of action that lead to greater rewards. 
Although behavioral effects of value biases in sensorimotor decision making have been widely studied, little is known about the neural processes that set these biases in place beforehand. Here, we report the discovery of a transient, spatially selective neural signal in humans that encodes the relative value of competing decision alternatives and strongly predicts behavioral value biases in decisions made ∼500 ms later. Follow-up manipulations of value differential, reward valence, response modality, sensory features, and time constraints establish that the signal reflects an active, feature- and effector-general preparatory mechanism for value-based prioritization. Copyright © 2017 the authors 0270-6474/17/3710725-13$15.00/0.
Olfactory-Induced Synesthesias: A Review and Model
ERIC Educational Resources Information Center
Stevenson, Richard J.; Tomiczek, Caroline
2007-01-01
Recent reviews of synesthesia concentrate upon rare neurodevelopmental examples and exclude common olfactory-induced experiences with which they may profitably be compared. Like the neurodevelopmental synesthesias, odor-induced experiences involve different sensory modalities; are reliable, asymmetric (concurrents cannot induce), and automatic;…
Sensory Load Incurs Conceptual Processing Costs
ERIC Educational Resources Information Center
Vermeulen, Nicolas; Corneille, Olivier; Niedenthal, Paula M.
2008-01-01
Theories of grounded cognition propose that modal simulations underlie cognitive representation of concepts [Barsalou, L. W. (1999). "Perceptual symbol systems." "Behavioral and Brain Sciences, 22"(4), 577-660; Barsalou, L. W. (2008). "Grounded cognition." "Annual Review of Psychology, 59", 617-645]. Based…
Electrophysiological evidence for Audio-visuo-lingual speech integration.
Treille, Avril; Vilain, Coriandre; Schwartz, Jean-Luc; Hueber, Thomas; Sato, Marc
2018-01-31
Recent neurophysiological studies demonstrate that audio-visual speech integration partly operates through temporal expectations and speech-specific predictions. From these results, one common view is that the binding of auditory and visual (lipread) speech cues relies on their joint probability and prior associative audio-visual experience. The present EEG study examined whether visual tongue movements integrate with relevant speech sounds, despite little associative audio-visual experience between the two modalities. A second objective was to determine possible similarities and differences in audio-visual speech integration between unusual audio-visuo-lingual and classical audio-visuo-labial modalities. To this aim, participants were presented with auditory, visual, and audio-visual isolated syllables, with the visual presentation related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, with lingual and facial movements previously recorded by an ultrasound imaging system and a video camera. In line with previous EEG studies, our results revealed an amplitude decrease and a latency facilitation of P2 auditory evoked potentials in both audio-visuo-lingual and audio-visuo-labial conditions compared to the sum of unimodal conditions. These results argue against the view that auditory and visual speech cues integrate solely on the basis of prior associative audio-visual perceptual experience. Rather, they suggest that dynamic and phonetic informational cues are sharable across sensory modalities, possibly through a cross-modal transfer of implicit articulatory motor knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gender difference in the theta/alpha ratio during the induction of peaceful audiovisual modalities.
Yang, Chia-Yen; Lin, Ching-Po
2015-09-01
Gender differences in emotional perception have been found in numerous psychological and psychophysiological studies. Because different sensory systems have distinct characteristics, it is of interest to determine how their cooperation and competition contribute to emotional experiences. We have previously estimated the bias arising from the matched attributes of auditory and visual modalities and revealed specific brain activity frequency patterns related to a peaceful mood. In that multimodality experiment, we focused on how inner-quiet information is processed in the human brain, and found evidence of auditory domination from the theta-band activity. However, a simple quantitative description of these frequency bands is lacking, and no studies have assessed the effects of peacefulness on the emotional state. Therefore, the aim of this study was to use magnetoencephalography to determine whether, when, and where gender differences exist in the frequency interactions underpinning the perception of peacefulness. This study provides evidence of auditory and visual domination in perceptual bias during multimodality processing of peaceful consciousness. The results of power ratio analyses suggest that the values of the theta/alpha ratio are associated with modality as well as with hemispheric asymmetries in the anterior-to-posterior direction, which shift from right to left as stimulation shifts from auditory to visual in a peaceful mood. This suggests that the theta/alpha ratio might be useful for evaluating emotion. Moreover, the difference was most pronounced for auditory domination and visual sensitivity in the female group.
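The theta/alpha power ratio used in this study can be sketched with a direct DFT over the bins falling in each band; the band edges (theta 4-8 Hz, alpha 8-13 Hz) are conventional EEG/MEG assumptions rather than values taken from the paper, and the test signal is synthetic:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Mean squared DFT magnitude over the bins with f_lo <= f < f_hi.

    A direct O(n^2) DFT restricted to the band of interest; adequate
    for short illustrative signals."""
    n = len(signal)
    total, count = 0.0, 0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f < f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += (re * re + im * im) / (n * n)
            count += 1
    return total / count if count else 0.0

def theta_alpha_ratio(signal, fs):
    # Assumed conventional band edges: theta 4-8 Hz, alpha 8-13 Hz.
    return band_power(signal, fs, 4.0, 8.0) / band_power(signal, fs, 8.0, 13.0)
```

A ratio above 1 indicates theta-dominated activity in the analyzed window, which is the kind of summary statistic the power ratio analyses above rely on.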
Pathways to Seeing Music: Enhanced Structural Connectivity in Colored-Music Synesthesia
Zamm, Anna; Schlaug, Gottfried; Eagleman, David M.; Loui, Psyche
2013-01-01
Synesthesia, a condition in which a stimulus in one sensory modality consistently and automatically triggers concurrent percepts in another modality, provides a window into the neural correlates of cross-modal associations. While research on grapheme-color synesthesia has provided evidence for both hyperconnectivity/hyperbinding and disinhibited feedback as possible underlying mechanisms, less research has explored the neuroanatomical basis of other forms of synesthesia. In the current study we investigated the white matter correlates of colored-music synesthesia. As these synesthetes report seeing colors upon hearing musical sounds, we hypothesized they might show different patterns of connectivity between visual and auditory association areas. We used diffusion tensor imaging to trace the white matter tracts in temporal and occipital lobe regions in 10 synesthetes and 10 matched non-synesthete controls. Results showed that synesthetes possessed different hemispheric patterns of fractional anisotropy, an index of white matter integrity, in the inferior fronto-occipital fasciculus (IFOF), a major white matter pathway that connects visual and auditory association areas to frontal regions. Specifically, white matter integrity within the right IFOF was significantly greater in synesthetes than controls. Furthermore, white matter integrity in synesthetes was correlated with scores on audiovisual tests of the Synesthesia Battery, especially in white matter underlying the right fusiform gyrus. Our findings provide the first evidence of a white matter substrate of colored-music synesthesia, and suggest that enhanced white matter connectivity is involved in enhanced cross-modal associations. PMID:23454047
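Fractional anisotropy, the white-matter integrity index used in this study, is a standard normalized function of the three diffusion-tensor eigenvalues; a minimal sketch:

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA from diffusion-tensor eigenvalues:
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||.

    0 for isotropic diffusion (all eigenvalues equal), approaching 1
    when diffusion is confined to a single direction."""
    m = (l1 + l2 + l3) / 3.0
    num = (l1 - m) ** 2 + (l2 - m) ** 2 + (l3 - m) ** 2
    den = l1 * l1 + l2 * l2 + l3 * l3
    return math.sqrt(1.5 * num / den)
```

Group comparisons like the one reported here compute this quantity per voxel along a tract (such as the IFOF) and test for differences between synesthetes and controls.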
Hemispheric differences in processing of vocalizations depend on early experience.
Phan, Mimi L; Vicario, David S
2010-02-02
An intriguing phenomenon in the neurobiology of language is lateralization: the dominant role of one hemisphere in a particular function. Lateralization is not exclusive to language because lateral differences are observed in other sensory modalities, behaviors, and animal species. Despite much scientific attention, the function of lateralization, its possible dependence on experience, and the functional implications of such dependence have yet to be clearly determined. We have explored the role of early experience in the development of lateralized sensory processing in the brain, using the songbird model of vocal learning. By controlling exposure to natural vocalizations (through isolation, song tutoring, and muting), we manipulated the postnatal auditory environment of developing zebra finches, and then assessed effects on hemispheric specialization for communication sounds in adulthood. Using bilateral multielectrode recordings from a forebrain auditory area known to selectively process species-specific vocalizations, we found that auditory responses to species-typical songs and long calls, in both male and female birds, were stronger in the right hemisphere than in the left, and that right-side responses adapted more rapidly to stimulus repetition. We describe specific instances, particularly in males, where these lateral differences show an influence of auditory experience with song and/or the bird's own voice during development.
Banerjee, Sunayana B.; Liu, Robert C.
2013-01-01
Much of the literature on maternal behavior has focused on the role of infant experience and hormones in a canonical subcortical circuit for maternal motivation and maternal memory. Although early studies demonstrated that the cerebral cortex also plays a significant role in maternal behaviors, little has been done to explore what that role may be. Recent work though has provided evidence that the cortex, particularly sensory cortices, contains correlates of sensory memories of infant cues, consistent with classical studies of experience-dependent sensory cortical plasticity in non-maternal paradigms. By reviewing the literature from both the maternal behavior and sensory cortical plasticity fields, focusing on the auditory modality, we hypothesize that maternal hormones (predominantly estrogen) may act to prime auditory cortical neurons for a longer-lasting neural trace of infant vocal cues, thereby facilitating recognition and discrimination. This could then more efficiently activate the subcortical circuit to elicit and sustain maternal behavior. PMID:23916405
Rhone, Ariane E; Nourski, Kirill V; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A; McMurray, Bob
In everyday conversation, viewing a talker's face can provide information about the timing and content of an upcoming speech signal, resulting in improved intelligibility. Using electrocorticography, we tested whether human auditory cortex in Heschl's gyrus (HG) and on superior temporal gyrus (STG) and motor cortex on precentral gyrus (PreC) were responsive to visual/gestural information prior to the onset of sound and whether early stages of auditory processing were sensitive to the visual content (speech syllable versus non-speech motion). Event-related band power (ERBP) in the high gamma band was content-specific prior to acoustic onset on STG and PreC, and ERBP in the beta band differed in all three areas. Following sound onset, we found no evidence for content-specificity in HG, evidence for visual specificity in PreC, and specificity for both modalities in STG. These results support models of audio-visual processing in which sensory information is integrated in non-primary cortical areas.
Alaerts, Kaat; Swinnen, Stephan P; Wenderoth, Nicole
2011-05-01
Seeing or hearing manual actions activates the mirror neuron system, that is, specialized neurons within motor areas which fire when an action is performed but also when it is passively perceived. Using TMS, it was shown that motor cortex of typically developed subjects becomes facilitated not only from seeing others' actions, but also from merely hearing action-related sounds. In the present study, TMS was used for the first time to explore the "auditory" and "visual" responsiveness of motor cortex in individuals with congenital blindness or deafness. TMS was applied over left primary motor cortex (M1) to measure cortico-motor facilitation while subjects passively perceived manual actions (either visually or aurally). Although largely unexpected, congenitally blind or deaf subjects displayed substantially lower resonant motor facilitation upon action perception compared to seeing/hearing control subjects. Moreover, muscle-specific changes in cortico-motor excitability within M1 appeared to be absent in individuals with profound blindness or deafness. Overall, these findings strongly argue against the hypothesis that an increased reliance on the remaining sensory modality in blind or deaf subjects is accompanied by an increased responsiveness of the "auditory" or "visual" perceptual-motor "mirror" system, respectively. Moreover, the apparent lack of resonant motor facilitation for the blind and deaf subjects may challenge the hypothesis of a unitary mirror system underlying human action recognition and may suggest that action perception in blind and deaf subjects engages a mode of action processing that is different from the human action recognition system recruited in typically developed subjects.
Decoding visual object categories in early somatosensory cortex.
Smith, Fraser W; Goodale, Melvyn A
2015-04-01
Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects. © The Author 2013. Published by Oxford University Press.
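Multivariate pattern analysis of the kind described decodes stimulus category from distributed activity patterns; above-chance cross-validated accuracy indicates that the region carries category information. A toy numpy sketch using a nearest-centroid classifier with leave-one-out cross-validation on synthetic "voxel" data (the data, dimensions, and classifier choice are illustrative, not the study's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic voxel patterns: 20 trials x 50 voxels per category, with a
# small category-specific mean shift (illustrative only).
n_trials, n_voxels = 20, 50
cat_a = rng.normal(0.0, 1.0, (n_trials, n_voxels)) + 0.8
cat_b = rng.normal(0.0, 1.0, (n_trials, n_voxels)) - 0.8
X = np.vstack([cat_a, cat_b])
y = np.array([0] * n_trials + [1] * n_trials)

def loo_nearest_centroid(X, y):
    """Leave-one-out accuracy of a nearest-centroid pattern classifier."""
    correct = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i
        c0 = X[train & (y == 0)].mean(axis=0)
        c1 = X[train & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
        correct += pred == y[i]
    return correct / len(y)

acc = loo_nearest_centroid(X, y)
print(acc > 0.5)  # above-chance decoding implies category information; True
```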
Aggression and courtship in Drosophila: pheromonal communication and sex recognition.
Fernández, María Paz; Kravitz, Edward A
2013-11-01
Upon encountering a conspecific in the wild, males have to rapidly detect, integrate and process the most relevant signals to evoke an appropriate behavioral response. Courtship and aggression are the most important social behaviors in nature for procreation and survival: for males, making the right choice between the two depends on the ability to identify the sex of the other individual. In flies as in most species, males court females and attack other males. Although many sensory modalities are involved in sex recognition, chemosensory communication mediated by specific molecules that serve as pheromones plays a key role in helping males distinguish between courtship and aggression targets. The chemosensory signals used by flies include volatile and non-volatile compounds, detected by the olfactory and gustatory systems. Recently, several putative olfactory and gustatory receptors have been identified that play key roles in sex recognition, allowing investigators to begin to map the neuronal circuits that convey this sensory information to higher processing centers in the brain. Here, we describe how Drosophila melanogaster males use taste and smell to make correct behavioral choices.
Bonhomme, V; Boveroux, P; Brichant, J F; Laureys, S; Boly, M
2012-01-01
This paper reviews the current knowledge about the mechanisms of anesthesia-induced alteration of consciousness. It is now evident that hypnotic anesthetic agents have specific brain targets whose function is hierarchically altered in a dose-dependent manner. Higher-order networks, thought to be involved in mental content generation, as well as sub-cortical networks involved in thalamic activity regulation, seem to be affected first by increasing concentrations of hypnotic agents that enhance inhibitory neurotransmission. Lower-order sensory networks are preserved, including thalamo-cortical connectivity into those networks, even at concentrations that suppress responsiveness, but cross-modal sensory interactions are inhibited. Thalamo-cortical connectivity into the consciousness networks decreases with increasing concentrations of those agents, and is transformed into anti-correlated activity between the thalamus and the cortex at the deepest levels of sedation, when the subject is non-responsive. The future will tell us whether these brain function alterations are also observed with hypnotic agents that mainly inhibit excitatory neurotransmission. The link between the observations made using fMRI and the identified biochemical targets of hypnotic anesthetic agents still remains to be identified.
Sensory and Emotional Perception of Wooden Surfaces through Fingertip Touch
Bhatta, Shiv R.; Tiippana, Kaisa; Vahtikari, Katja; Hughes, Mark; Kyttä, Marketta
2017-01-01
Previous studies on tactile experiences have investigated a wide range of material surfaces across various skin sites of the human body in self-touch or other touch modes. Here, we investigate whether the sensory and emotional aspects of touch are related when evaluating wooden surfaces using fingertips in the absence of other sensory modalities. Twenty participants evaluated eight different pine and oak wood surfaces, using sensory and emotional touch descriptors, through the lateral motion of active fingertip exploration. The data showed that natural and smooth wood surfaces were perceived more positively in emotional touch than coated surfaces. We highlight the importance of preserving the naturalness of the surface texture in the process of wood-surface treatment so as to improve positive touch experiences, as well as avoid negative ones. We argue that the results may offer possibilities in the design of wood-based interior products with a view to improving consumer touch experiences. PMID:28348541
Severe vision and hearing impairment and successful aging: a multidimensional view.
Wahl, Hans-Werner; Heyl, Vera; Drapaniotis, Philipp M; Hörmann, Karl; Jonas, Jost B; Plinkert, Peter K; Rohrschneider, Klaus
2013-12-01
Previous research on psychosocial adaptation of sensory-impaired older adults has focused mainly on only one sensory modality and on a limited number of successful aging outcomes. We considered a broad range of successful aging indicators and compared older adults with vision impairment, hearing impairment, and dual sensory impairments and without sensory impairment. Data came from samples of severely visually impaired (VI; N = 121), severely hearing-impaired (HI; N = 116), dual sensory-impaired (DI; N = 43), and sensory-unimpaired older adults (UI; N = 150). Participants underwent a wide-ranging assessment, covering everyday competence, cognitive functioning, social resources, self-regulation strategies, cognitive and affective well-being, and 4-year survival status (except the DI group). The most pronounced difference among groups was in the area of everyday competence (lowest in VI and DI). Multigroup comparisons in latent space revealed both similar and differing relationship strengths among health, everyday competence, social resources, self-regulation, and overall well-being, depending on sensory status. After 4 years, mortality in VI (29%) and HI (30%) was significantly higher than in UI (20%) at the bivariate level, but not after controlling for confounders in a multivariate analysis. A multidimensional approach to the understanding of sensory impairment and psychosocial adaptation in old age reveals a complex picture of loss and maintenance.
The contributions of vision and haptics to reaching and grasping
Stone, Kayla D.; Gonzalez, Claudia L. R.
2015-01-01
This review aims to provide a comprehensive outlook on the sensory (visual and haptic) contributions to reaching and grasping. The focus is on studies in developing children, normal, and neuropsychological populations, and in sensory-deprived individuals. Studies have suggested a right-hand/left-hemisphere specialization for visually guided grasping and a left-hand/right-hemisphere specialization for haptically guided object recognition. This poses the interesting possibility that when vision is not available and grasping relies heavily on the haptic system, there is an advantage to use the left hand. We review the evidence for this possibility and dissect the unique contributions of the visual and haptic systems to grasping. We ultimately discuss how the integration of these two sensory modalities shape hand preference. PMID:26441777
Role of hippocampus in polymodal-cue guided tasks in rats.
Miniaci, Maria Concetta; Lippiello, Pellegrino; Monda, Marcellino; Scotto, Pietro
2016-09-01
To examine how signals from different sensory modalities are integrated to generate an appropriate goal-oriented behavior, we trained rats in an eight-arm radial maze to visit a cue arm provided with intramaze cues from different sensory modalities, i.e. visual, tactile and auditory, in order to obtain a reward. When the same rats were then examined on test trials in which the cue arm contained only one of the stimuli that the animals were trained with (i.e. light, sound or rough sheet), they showed a significant impairment with respect to their performance on the polymodal-cue task. The contribution of the dorsal hippocampus to the acquisition and retention of the polymodal-cue guided task was also examined. We found that rats with dorsal hippocampal lesions before training showed a significant deficit in the acquisition of the polymodal-cue oriented task that improved with overtraining. Selective lesion of the dorsal hippocampus after training disrupted memory retention, but the animals' performance improved following retraining on the polymodal task. All hippocampal-lesioned rats displayed impaired performance on the unimodal test. These findings suggest that the dorsal hippocampus contributes to the processing of multimodal sensory information for associative memory formation and consolidation. Copyright © 2016 Elsevier B.V. All rights reserved.
The cortical basis of true memory and false memory for motion.
Karanian, Jessica M; Slotnick, Scott D
2014-02-01
Behavioral evidence indicates that false memory, like true memory, can be rich in sensory detail. By contrast, there is fMRI evidence that true memory for visual information produces greater activity in earlier visual regions than false memory, which suggests true memory is associated with greater sensory detail. However, false memory in previous fMRI paradigms may have lacked sufficient sensory detail to recruit earlier visual processing regions. To investigate this possibility in the present fMRI study, we employed a paradigm that produced feature-specific false memory with a high degree of visual detail. During the encoding phase, moving or stationary abstract shapes were presented to the left or right of fixation. During the retrieval phase, shapes from encoding were presented at fixation and participants classified each item as previously "moving" or "stationary" within each visual field. Consistent with previous fMRI findings, true memory but not false memory for motion activated motion processing region MT+, while both true memory and false memory activated later cortical processing regions. In addition, false memory but not true memory for motion activated language processing regions. The present findings indicate that true memory activates earlier visual regions to a greater degree than false memory, even under conditions of detailed retrieval. Thus, the dissociation between previous behavioral findings and fMRI findings does not appear to be task dependent. Future work will be needed to assess whether the same pattern of true memory and false memory activity is observed for different sensory modalities. Copyright © 2013 Elsevier Ltd. All rights reserved.
Carriot, Jérome; Jamali, Mohsen; Chacron, Maurice J.
2014-01-01
It is widely believed that sensory systems are optimized for processing stimuli occurring in the natural environment. However, it remains unknown whether this principle applies to the vestibular system, which contributes to essential brain functions ranging from the most automatic reflexes to spatial perception and motor coordination. Here we quantified, for the first time, the statistics of natural vestibular inputs experienced by freely moving human subjects during typical everyday activities. Although previous studies have found that the power spectra of natural signals across sensory modalities decay as a power law (i.e., as 1/fα), we found that this did not apply to natural vestibular stimuli. Instead, power decreased slowly at lower and more rapidly at higher frequencies for all motion dimensions. We further establish that this unique stimulus structure is the result of active motion as well as passive biomechanical filtering occurring before any neural processing. Notably, the transition frequency (i.e., frequency at which power starts to decrease rapidly) was lower when subjects passively experienced sensory stimulation than when they actively controlled stimulation through their own movement. In contrast to signals measured at the head, the spectral content of externally generated (i.e., passive) environmental motion did follow a power law. Thus, transformations caused by both motor control and biomechanics shape the statistics of natural vestibular stimuli before neural processing. We suggest that the unique structure of natural vestibular stimuli will have important consequences on the neural coding strategies used by this essential sensory system to represent self-motion in everyday life. PMID:24920638
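A 1/f^α power-law spectrum, the benchmark against which the vestibular spectra above are compared, is a straight line in log-log coordinates, so the exponent can be estimated with a linear fit; a minimal sketch on a synthetic spectrum with a known exponent:

```python
import numpy as np

def powerlaw_exponent(freqs, power):
    """Estimate alpha in power ~ 1/f**alpha via a log-log linear fit."""
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope

# Synthetic spectrum with known exponent alpha = 2 (illustrative).
f = np.linspace(1.0, 100.0, 200)
p = 1.0 / f ** 2
print(round(powerlaw_exponent(f, p), 3))  # 2.0
```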
Attention Modulates Visual-Tactile Interaction in Spatial Pattern Matching
Göschl, Florian; Engel, Andreas K.; Friese, Uwe
2014-01-01
Factors influencing crossmodal interactions are manifold and operate in a stimulus-driven, bottom-up fashion, as well as via top-down control. Here, we evaluate the interplay of stimulus congruence and attention in a visual-tactile task. To this end, we used a matching paradigm requiring the identification of spatial patterns that were concurrently presented visually on a computer screen and haptically to the fingertips by means of a Braille stimulator. Stimulation in our paradigm was always bimodal with only the allocation of attention being manipulated between conditions. In separate blocks of the experiment, participants were instructed to (a) focus on a single modality to detect a specific target pattern, (b) pay attention to both modalities to detect a specific target pattern, or (c) to explicitly evaluate if the patterns in both modalities were congruent or not. For visual as well as tactile targets, congruent stimulus pairs led to quicker and more accurate detection compared to incongruent stimulation. This congruence facilitation effect was more prominent under divided attention. Incongruent stimulation led to behavioral decrements under divided attention as compared to selectively attending a single sensory channel. Additionally, when participants were asked to evaluate congruence explicitly, congruent stimulation was associated with better performance than incongruent stimulation. Our results extend previous findings from audiovisual studies, showing that stimulus congruence also resulted in behavioral improvements in visuotactile pattern matching. The interplay of stimulus processing and attentional control seems to be organized in a highly flexible fashion, with the integration of signals depending on both bottom-up and top-down factors, rather than occurring in an ‘all-or-nothing’ manner. PMID:25203102
Robust Transient Dynamics and Brain Functions
Rabinovich, Mikhail I.; Varona, Pablo
2011-01-01
In the last few decades several concepts of dynamical systems theory (DST) have guided psychologists, cognitive scientists, and neuroscientists to rethink sensory-motor behavior and embodied cognition. A critical step in the progress of DST application to the brain (supported by modern methods of brain imaging and multi-electrode recording techniques) has been the transfer of its initial success in motor behavior to mental function, i.e., perception, emotion, and cognition. Open questions from research in genetics, ecology, brain sciences, etc., have changed DST itself and led to the discovery of a new dynamical phenomenon, i.e., reproducible and robust transients that are at the same time sensitive to informational signals. The goal of this review is to describe a new mathematical framework – heteroclinic sequential dynamics – to understand self-organized activity in the brain that can explain certain aspects of robust itinerant behavior. Specifically, we discuss a hierarchy of coarse-grain models of mental dynamics in the form of kinetic equations of modes. These modes compete for resources at three levels: (i) within the same modality, (ii) among different modalities from the same family (like perception), and (iii) among modalities from different families (like emotion and cognition). The analysis of the conditions for robustness, i.e., the structural stability of transient (sequential) dynamics, gives us the possibility of explaining phenomena like the finite capacity of our sequential working memory – a vital cognitive function – and of finding specific dynamical signatures – different kinds of instabilities – of several brain functions and mental diseases. PMID:21716642
Johnson, Nicholas S.; Tix, John A.; Hlina, Benjamin L.; Wagner, C. Michael; Siefkes, Michael J.; Wang, Huiyong; Li, Weiming
2015-01-01
Spermiating male sea lamprey (Petromyzon marinus) release a sex pheromone, of which a component, 7α, 12α, 24-trihydroxy-3-one-5α-cholan-24-sulfate (3kPZS), has been identified and shown to induce long distance preference responses in ovulated females. However, other pheromone components exist, and when 3kPZS alone was used to control invasive sea lamprey populations in the Laurentian Great Lakes, trap catch increase was significant, but gains were generally marginal. We hypothesized that free-ranging sea lamprey populations discriminate between a partial and complete pheromone while migrating to spawning grounds and searching for mates at spawning grounds. As a means to test our hypothesis, and to test two possible uses of sex pheromones for sea lamprey control, we asked whether the full sex pheromone mixture released by males (spermiating male washings; SMW) is more effective than 3kPZS in capturing animals in traditional traps (1) en route to spawning grounds and (2) at spawning grounds. At locations where traps target sea lampreys en route to spawning grounds, SMW-baited traps captured significantly more sea lampreys than paired 3kPZS-baited traps (~10 % increase). At spawning grounds, no difference in trap catch was observed between 3kPZS and SMW-baited traps. The lack of an observed difference at spawning grounds may be attributed to increased pheromone competition and possible involvement of other sensory modalities to locate mates. Because fishes often rely on multiple and sometimes redundant sensory modalities for critical life history events, the addition of sex pheromones to traditionally used traps is not likely to work in all circumstances. In the case of the sea lamprey, sex pheromone application may increase catch when applied to specifically designed traps deployed in streams with low adult density and limited spawning habitat.
Callan, Daniel E.; Jones, Jeffery A.; Callan, Akiko
2014-01-01
Behavioral and neuroimaging studies have demonstrated that brain regions involved with speech production also support speech perception, especially under degraded conditions. The premotor cortex (PMC) has been shown to be active during both observation and execution of action (“Mirror System” properties), and may facilitate speech perception by mapping unimodal and multimodal sensory features onto articulatory speech gestures. For this functional magnetic resonance imaging (fMRI) study, participants identified vowels produced by a speaker in audio-visual (saw the speaker's articulating face and heard her voice), visual only (only saw the speaker's articulating face), and audio only (only heard the speaker's voice) conditions with varying audio signal-to-noise ratios in order to determine the regions of the PMC involved with multisensory and modality specific processing of visual speech gestures. The task was designed so that identification could be made with a high level of accuracy from visual only stimuli to control for task difficulty and differences in intelligibility. The results of the functional magnetic resonance imaging (fMRI) analysis for visual only and audio-visual conditions showed overlapping activity in inferior frontal gyrus and PMC. The left ventral inferior premotor cortex (PMvi) showed properties of multimodal (audio-visual) enhancement with a degraded auditory signal. The left inferior parietal lobule and right cerebellum also showed these properties. The left ventral superior and dorsal premotor cortex (PMvs/PMd) did not show this multisensory enhancement effect, but there was greater activity for the visual only over audio-visual conditions in these areas. 
The results suggest that the inferior regions of the ventral premotor cortex are involved with integrating multisensory information, whereas, more superior and dorsal regions of the PMC are involved with mapping unimodal (in this case visual) sensory features of the speech signal with articulatory speech gestures. PMID:24860526
Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias
2018-01-01
Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Thereby, a facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent, as well as a spatially non-informative, auditory cue resulted in lateral asymmetries. Thereby, search times were increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. 
The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637
Tavassoli, Teresa; Hoekstra, Rosa A; Baron-Cohen, Simon
2014-01-01
Questionnaire-based studies suggest atypical sensory perception in over 90% of individuals with autism spectrum conditions (ASC). Sensory questionnaire-based studies in ASC mainly record parental reports of their child's sensory experience; less is known about sensory reactivity in adults with ASC. Given the DSM-5 criteria for ASC now include sensory reactivity, there is a need for an adult questionnaire investigating basic sensory functioning. We aimed to develop and validate the Sensory Perception Quotient (SPQ), which assesses basic sensory hyper- and hyposensitivity across all five modalities. A total of 359 adults with (n = 196) and without (n = 163) ASC were asked to fill in the SPQ, the Sensory Over-Responsivity Inventory (SensOR) and the Autism-Spectrum Quotient (AQ) online. Adults with ASC reported more sensory hypersensitivity on the SPQ compared to controls (P < .001). SPQ scores were correlated with AQ scores both across groups (r = -.38) and within the ASC (r = -.18) and control groups (r = -.15). Principal component analyses conducted separately in both groups indicated that one factor comprising 35 items consistently assesses sensory hypersensitivity. The SPQ showed high internal consistency for both the total SPQ (Cronbach's alpha = .92) and the reduced 35-item version (alpha = .93). The SPQ was significantly correlated with the SensOR across groups (r = -.46) and within the ASC (r = -.49) and control group (r = -.21). The SPQ shows good internal consistency and concurrent validity and differentiates between adults with and without ASC. Adults with ASC report more sensitivity to sensory stimuli on the SPQ. Finally, greater sensory sensitivity is associated with more autistic traits. The SPQ provides a new tool to measure individual differences on this dimension.
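Cronbach's alpha, the internal-consistency statistic reported above, follows directly from the item variances and the total-score variance; a short sketch on synthetic item scores (illustrative data only):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Perfectly parallel items (each respondent scores identically on all
# items) give alpha = 1, the maximum internal consistency.
base = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
parallel = np.column_stack([base, base, base])
print(round(cronbach_alpha(parallel), 3))  # 1.0
```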
Sensory Function: Insights From Wave 2 of the National Social Life, Health, and Aging Project
Kern, David W.; Wroblewski, Kristen E.; Chen, Rachel C.; Schumm, L. Philip; McClintock, Martha K.
2014-01-01
Objectives. Sensory function, a critical component of quality of life, generally declines with age and influences health, physical activity, and social function. Sensory measures collected in Wave 2 of the National Social Life, Health, and Aging Project (NSHAP) survey focused on the personal impact of sensory function in the home environment and included: subjective assessment of vision, hearing, and touch, information on relevant home conditions and social sequelae as well as an improved objective assessment of odor detection. Method. Summary data were generated for each sensory category, stratified by age (62–90 years of age) and gender, with a focus on function in the home setting and the social consequences of sensory decrements in each modality. Results. Among both men and women, older age was associated with self-reported impairment of vision, hearing, and pleasantness of light touch. Compared with women, men reported significantly worse hearing and found light touch less appealing. There were no gender differences for vision. Overall, hearing loss seemed to have a greater impact on social function than did visual impairment. Discussion. Sensory function declines across age groups, with notable gender differences for hearing and light touch. Further analysis of sensory measures from NSHAP Wave 2 may provide important information on how sensory declines are related to health, social function, quality of life, morbidity, and mortality in this nationally representative sample of older adults. PMID:25360015
Current advances in orthodontic pain
Long, Hu; Wang, Yan; Jian, Fan; Liao, Li-Na; Yang, Xin; Lai, Wen-Li
2016-01-01
Orthodontic pain is an inflammatory pain that is initiated by orthodontic force-induced vascular occlusion followed by a cascade of inflammatory responses, including vascular changes, the recruitment of inflammatory and immune cells, and the release of neurogenic and pro-inflammatory mediators. Ultimately, endogenous analgesic mechanisms check the inflammatory response and the sensation of pain subsides. The orthodontic pain signal, once received by periodontal sensory endings, reaches the sensory cortex for pain perception through three-order neurons: the trigeminal neuron at the trigeminal ganglia, the trigeminal nucleus caudalis at the medulla oblongata and the ventroposterior nucleus at the thalamus. Many brain areas participate in the emotion, cognition and memory of orthodontic pain, including the insular cortex, amygdala, hippocampus, locus coeruleus and hypothalamus. A built-in analgesic neural pathway—periaqueductal grey and dorsal raphe—has an important role in alleviating orthodontic pain. Currently, several treatment modalities have been applied for the relief of orthodontic pain, including pharmacological, mechanical and behavioural approaches and low-level laser therapy. The effectiveness of nonsteroidal anti-inflammatory drugs for pain relief has been validated, but its effects on tooth movement are controversial. However, more studies are needed to verify the effectiveness of other modalities. Furthermore, gene therapy is a novel, viable and promising modality for alleviating orthodontic pain in the future. PMID:27341389
Explicit Encoding of Multimodal Percepts by Single Neurons in the Human Brain
Quiroga, Rodrigo Quian; Kraskov, Alexander; Koch, Christof; Fried, Itzhak
2010-01-01
Summary Different pictures of Marilyn Monroe can evoke the same percept, even if greatly modified as in Andy Warhol’s famous portraits. But how does the brain recognize highly variable pictures as the same percept? Various studies have provided insights into how visual information is processed along the “ventral pathway,” via both single-cell recordings in monkeys [1, 2] and functional imaging in humans [3, 4]. Interestingly, in humans, the same “concept” of Marilyn Monroe can be evoked with other stimulus modalities, for instance by hearing or reading her name. Brain imaging studies have identified cortical areas selective to voices [5, 6] and visual word forms [7, 8]. However, how visual, text, and sound information can elicit a unique percept is still largely unknown. By using presentations of pictures and of spoken and written names, we show that (1) single neurons in the human medial temporal lobe (MTL) respond selectively to representations of the same individual across different sensory modalities; (2) the degree of multimodal invariance increases along the hierarchical structure within the MTL; and (3) such neuronal representations can be generated within less than a day or two. These results demonstrate that single neurons can encode percepts in an explicit, selective, and invariant manner, even if evoked by different sensory modalities. PMID:19631538
Exorcising Grice's ghost: an empirical approach to studying intentional communication in animals.
Townsend, Simon W; Koski, Sonja E; Byrne, Richard W; Slocombe, Katie E; Bickel, Balthasar; Boeckle, Markus; Braga Goncalves, Ines; Burkart, Judith M; Flower, Tom; Gaunet, Florence; Glock, Hans Johann; Gruber, Thibaud; Jansen, David A W A M; Liebal, Katja; Linke, Angelika; Miklósi, Ádám; Moore, Richard; van Schaik, Carel P; Stoll, Sabine; Vail, Alex; Waller, Bridget M; Wild, Markus; Zuberbühler, Klaus; Manser, Marta B
2017-08-01
Language's intentional nature has been highlighted as a crucial feature distinguishing it from other communication systems. Specifically, language is often thought to depend on highly structured intentional action and mutual mindreading by a communicator and recipient. Whilst similar abilities in animals can shed light on the evolution of intentionality, they remain challenging to detect unambiguously. We revisit animal intentional communication and suggest that progress in identifying analogous capacities has been complicated by (i) the assumption that intentional (that is, voluntary) production of communicative acts requires mental-state attribution, and (ii) variation in approaches investigating communication across sensory modalities. To move forward, we argue that a framework fusing research across modalities and species is required. We structure intentional communication into a series of requirements, each of which can be operationalised, investigated empirically, and must be met for purposive, intentionally communicative acts to be demonstrated. Our unified approach helps elucidate the distribution of animal intentional communication and subsequently serves to clarify what is meant by attributions of intentional communication in animals and humans. © 2016 Cambridge Philosophical Society.
Cross-modal tactile-taste interactions in food evaluations
Slocombe, B. G.; Carmichael, D.A.; Simner, J.
2016-01-01
Detecting the taste components within a flavoured substance relies on exposing chemoreceptors within the mouth to the chemical components of ingested food. In our paper, we show that the evaluation of taste components can also be influenced by the tactile quality of the food. We first discuss how multisensory factors might influence taste, flavour and smell for both typical and atypical (synaesthetic) populations and we then present two empirical studies showing tactile-taste interactions in the general population. We asked a group of average adults to evaluate the taste components of flavoured food substances, whilst we presented simultaneous cross-sensory visuo-tactile cues within the eating environment. Specifically, we presented foodstuffs between subjects that were otherwise identical but had a rough versus smooth surface, or were served on a rough versus smooth serving-plate. We found no effect of the serving-plate, but we found the rough/smoothness of the foodstuff itself significantly influenced perception: food was rated as significantly more sour if it had a rough (vs. smooth) surface. In modifying taste perception via ostensibly unrelated dimensions, we demonstrate that the detection of tastes within flavours may be influenced by higher level cross-sensory cues. Finally, we suggest that the direction of our cross-sensory associations may speak to the types of hedonic mapping found both in normal multisensory integration, and in the unusual condition of synaesthesia. PMID:26169315
Neural Responses to Complex Auditory Rhythms: The Role of Attending
Chapin, Heather L.; Zanto, Theodore; Jantzen, Kelly J.; Kelso, Scott J. A.; Steinberg, Fred; Large, Edward W.
2010-01-01
The aim of this study was to explore the role of attention in pulse and meter perception using complex rhythms. We used a selective attention paradigm in which participants attended to either a complex auditory rhythm or a visually presented word list. Performance on a reproduction task was used to gauge whether participants were attending to the appropriate stimulus. We hypothesized that attention to complex rhythms – which contain no energy at the pulse frequency – would lead to activations in motor areas involved in pulse perception. Moreover, because multiple repetitions of a complex rhythm are needed to perceive a pulse, activations in pulse-related areas would be seen only after sufficient time had elapsed for pulse perception to develop. Selective attention was also expected to modulate activity in sensory areas specific to the modality. We found that selective attention to rhythms led to increased BOLD responses in basal ganglia, and basal ganglia activity was observed only after the rhythms had cycled enough times for a stable pulse percept to develop. These observations suggest that attention is needed to recruit motor activations associated with the perception of pulse in complex rhythms. Moreover, attention to the auditory stimulus enhanced activity in an attentional sensory network including primary auditory cortex, insula, anterior cingulate, and prefrontal cortex, and suppressed activity in sensory areas associated with attending to the visual stimulus. PMID:21833279
Reading Disability: A New Look at an Old Issue.
ERIC Educational Resources Information Center
Lipa, Sally E.
1983-01-01
Research is reviewed on the role of neurological differences in reading disabilities, with emphasis on findings of a right hemisphere processing dominance, as well as a relationship to temporal order processing. Research is noted not to have supported the sensory modality theory. (CL)
Tuning the speed-accuracy trade-off to maximize reward rate in multisensory decision-making.
Drugowitsch, Jan; DeAngelis, Gregory C; Angelaki, Dora E; Pouget, Alexandre
2015-06-19
For decisions made under time pressure, effective decision making based on uncertain or ambiguous evidence requires efficient accumulation of evidence over time, as well as appropriately balancing speed and accuracy, known as the speed/accuracy trade-off. For simple unimodal stimuli, previous studies have shown that human subjects set their speed/accuracy trade-off to maximize reward rate. We extend this analysis to situations in which information is provided by multiple sensory modalities. Analyzing previously collected data (Drugowitsch et al., 2014), we show that human subjects adjust their speed/accuracy trade-off to produce near-optimal reward rates. This trade-off can change rapidly across trials according to the sensory modalities involved, suggesting that it is represented by neural population codes rather than implemented by slow neuronal mechanisms such as gradual changes in synaptic weights. Furthermore, we show that deviations from the optimal speed/accuracy trade-off can be explained by assuming an incomplete gradient-based learning of these trade-offs.
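The reward-rate objective in this entry can be illustrated numerically. Below is a minimal sketch using the common simple formalization RR = accuracy / (mean response time + inter-trial interval); all operating points and times are hypothetical, not taken from the study:

```python
def reward_rate(p_correct: float, mean_rt: float, inter_trial: float) -> float:
    """Expected correct responses per unit time: p / (RT + ITI)."""
    return p_correct / (mean_rt + inter_trial)

# Hypothetical speed/accuracy operating points: responding faster lowers
# accuracy. Times are in seconds.
operating_points = [(0.95, 1.2), (0.85, 0.8), (0.70, 0.5)]
ITI = 1.0  # inter-trial interval

for p, rt in operating_points:
    print(f"accuracy={p:.2f}  rt={rt:.1f}s  reward_rate={reward_rate(p, rt, ITI):.3f}")

# Here the intermediate setting (accuracy 0.85, RT 0.8 s) maximizes reward
# rate: neither the slowest/most accurate point nor the fastest/least
# accurate one wins.
```

This captures why the trade-off must be tuned rather than pushed to either extreme, which is the optimization the authors test across sensory modalities.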
Asymmetries of the human social brain in the visual, auditory and chemical modalities
Brancucci, Alfredo; Lucci, Giuliana; Mazzatenta, Andrea; Tommasi, Luca
2008-01-01
Structural and functional asymmetries are present in many regions of the human brain responsible for motor control, sensory and cognitive functions and communication. Here, we focus on hemispheric asymmetries underlying the domain of social perception, broadly conceived as the analysis of information about other individuals based on acoustic, visual and chemical signals. By means of these cues the brain establishes the border between ‘self’ and ‘other’, and interprets the surrounding social world in terms of the physical and behavioural characteristics of conspecifics essential for impression formation and for creating bonds and relationships. We show that, considered from the standpoint of single- and multi-modal sensory analysis, the neural substrates of the perception of voices, faces, gestures, smells and pheromones, as evidenced by modern neuroimaging techniques, are characterized by a general pattern of right-hemispheric functional asymmetry that might benefit from other aspects of hemispheric lateralization rather than constituting a true specialization for social information. PMID:19064350
Hypothesized eye movements of neurolinguistic programming: a statistical artifact.
Farmer, A; Rooney, R; Cunningham, J R
1985-12-01
Neurolinguistic programming's hypothesized eye-movements were measured independently from videotapes of 30 subjects, aged 15 to 76 yr., who were asked to recall visual pictures, recorded audio sounds, and textural objects. Chi-squared tests indicated that subjects' responses were significantly different from those predicted. When the chi-squared comparisons were weighted by the number of eye positions assigned to each modality (3 visual, 3 auditory, 1 kinesthetic), subjects' responses did not differ significantly from the expected pattern. These data indicate that the eye-movement hypothesis may represent randomly occurring rather than sensory-modality-related positions.
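The weighting step described in this entry can be sketched as two goodness-of-fit comparisons: one against an equal-use null model, one against expected counts proportional to the number of eye positions per modality (3 visual, 3 auditory, 1 kinesthetic). All observed counts below are hypothetical, for illustration only:

```python
# Hypothetical counts of observed eye movements per modality
observed = {"visual": 90, "auditory": 80, "kinesthetic": 30}
total = sum(observed.values())

# Unweighted null model: eye movements spread equally over the 3 modalities
expected_equal = {m: total / 3 for m in observed}

# Weighted null model: expected counts proportional to the number of eye
# positions assigned to each modality (3 visual, 3 auditory, 1 kinesthetic)
positions = {"visual": 3, "auditory": 3, "kinesthetic": 1}
n_positions = sum(positions.values())
expected_weighted = {m: total * positions[m] / n_positions for m in observed}

def chi_square(obs, exp):
    """Pearson chi-squared statistic: sum over cells of (O - E)^2 / E."""
    return sum((obs[m] - exp[m]) ** 2 / exp[m] for m in obs)

print(round(chi_square(observed, expected_equal), 2))     # 31.0
print(round(chi_square(observed, expected_weighted), 2))  # 0.67
```

With 2 degrees of freedom, the weighted statistic (0.67) falls far below the 0.05 critical value of 5.99 while the unweighted one (31.0) far exceeds it, mirroring the pattern of significance reversal the abstract reports.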
Is Red Heavier Than Yellow Even for Blind?
Barilari, Marco; de Heering, Adélaïde; Crollen, Virginie; Collignon, Olivier; Bottini, Roberto
2018-01-01
Across cultures and languages, people find similarities between the products of different senses in mysterious ways. By studying what is called cross-modal correspondences, cognitive psychologists discovered that lemons are fast rather than slow, boulders are sour, and red is heavier than yellow. Are these cross-modal correspondences established via sensory perception, or can they be learned merely through language? We contribute to this debate by demonstrating that early blind people who lack the perceptual experience of color also think that red is heavier than yellow, albeit to a lesser extent than sighted people do.
No Sensory Compensation for Olfactory Memory: Differences between Blind and Sighted People.
Sorokowska, Agnieszka; Karwowski, Maciej
2017-01-01
Blindness can be a driving force behind a variety of changes in sensory systems. When vision is missing, other modalities and higher cognitive functions can become hyper-developed through a mechanism called sensory compensation. Overall, previous studies suggest that olfactory memory in blind people can be better than that of the sighted individuals. Better performance of blind individuals in other-sensory modalities was hypothesized to be a result of, among others, intense perceptual training. At the same time, if the superiority of blind people in olfactory abilities indeed results from training, their scores should not decrease with age to such an extent as among the sighted people. Here, this hypothesis was tested in a large sample of 94 blind individuals. Olfactory memory was assessed using the Test for Olfactory Memory, comprising episodic odor recognition (discriminating previously presented odors from new odors) and two forms of semantic memory (cued and free identification of odors). Regarding episodic olfactory memory, we observed an age-related decline in correct hits in blind participants, but an age-related increase in false alarms in sighted participants. Further, age moderated the between-group differences for correct hits, but the direction of the observed effect was contrary to our expectations. The difference between blind and sighted individuals younger than 40 years old was non-significant, but older sighted individuals outperformed their blind counterparts. In conclusion, we found no positive effect of visual impairment on olfactory memory. We suggest that daily perceptual training is not enough to increase olfactory memory function in blind people. PMID:29276494
Yang, G; Luo, Y; Baad-Hansen, L; Wang, K; Arendt-Nielsen, L; Xie, Q-F; Svensson, P
2013-11-01
Ethnic differences in pain experiences have been widely assessed in various pathological and experimental conditions. However, only a limited range of sensory modalities has been described in previous research, and affective-motivational factors have so far been considered the main mediator of the ethnic differences. This study aimed to detect ethnic differences in oro-facial somatosensory profiles related to the sensory-discriminative dimension in healthy volunteers. The standardised quantitative sensory testing battery developed by the German Research Network on Neuropathic Pain was performed bilaterally in the infraorbital and mental regions on age- and gender-matched healthy Chinese and Danes, with 29 participants in each group. The influences of ethnicity, gender and test site on the somatosensory profile were evaluated by three-way ANOVA. The ethnic disparities were also presented as Z-scores. Compared to Danes, Chinese were more sensitive on thermal detection, thermal pain, mechanical deep pain and mechanical pain rating parameters (P < 0.002) related to small-fibre functions. However, the inverse results were observed for the mechanical tactile modality related to large-fibre function (P < 0.001) and the wind-up ratio (P = 0.006). Women presented higher sensitivity compared with men. The mean Z-scores of all the parameters from the Chinese group were in the normal zone created by the Danish Caucasians' means and SDs. The ethnic disparities in somatosensory profile illustrate the necessity of establishing reference data for different ethnic groups and possibly individual pain management strategies for the different ethnic groups. © 2013 John Wiley & Sons Ltd.
Does working memory capacity predict cross-modally induced failures of awareness?
Kreitz, Carina; Furley, Philip; Simons, Daniel J; Memmert, Daniel
2016-01-01
People often fail to notice unexpected stimuli when they are focusing attention on another task. Most studies of this phenomenon address visual failures induced by visual attention tasks (inattentional blindness). Yet, such failures also occur within audition (inattentional deafness), and people can even miss unexpected events in one sensory modality when focusing attention on tasks in another modality. Such cross-modal failures are revealing because they suggest the existence of a common, central resource limitation. And, such central limits might be predicted from individual differences in cognitive capacity. We replicated earlier evidence, establishing substantial rates of inattentional deafness during a visual task and inattentional blindness during an auditory task. However, neither individual working memory capacity nor the ability to perform the primary task predicted noticing in either modality. Thus, individual differences in cognitive capacity did not predict failures of awareness even though the failures presumably resulted from central resource limitations. Copyright © 2015 Elsevier Inc. All rights reserved.
Darbin, Olivier; Gubler, Coral; Naritoku, Dean; Dees, Daniel; Martino, Anthony; Adams, Elizabeth
2016-01-01
This study describes a cost-effective screening protocol for parkinsonism based on combined objective and subjective monitoring of balance function. Objective evaluation of balance function was performed using a game-industry balance board and automated analysis of the dynamics of the center of pressure in the time, frequency, and non-linear domains, collected during short series of stand-up tests with different modalities and severities of sensory deprivation. The subjective measurement of balance function was performed using the Dizziness Handicap Inventory questionnaire. Principal component analyses of both the objective and subjective measurements of balance function yielded a specificity and selectivity for parkinsonian patients (vs. healthy subjects) of 0.67 and 0.71, respectively. The findings are discussed regarding the relevance of a cost-effective balance-based screening system as a strategy to meet the need for broader and earlier screening for parkinsonism in communities with limited access to healthcare.
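The specificity and selectivity figures reported in this entry are standard confusion-matrix quantities. Below is a minimal sketch of how they are computed from binary screening labels, assuming "selectivity" refers to sensitivity (true-positive rate); the toy labels are hypothetical, not data from the study:

```python
def specificity_and_sensitivity(y_true, y_pred):
    """Compute (specificity, sensitivity) from binary labels,
    where 1 = parkinsonian patient and 0 = healthy subject."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    specificity = tn / (tn + fp)  # true-negative rate
    sensitivity = tp / (tp + fn)  # true-positive rate
    return specificity, sensitivity

# Hypothetical screening outcomes for six subjects
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
spec, sens = specificity_and_sensitivity(y_true, y_pred)
print(round(spec, 2), round(sens, 2))  # 0.67 0.67
```

In a screening context, sensitivity governs how many patients are missed and specificity how many healthy subjects are flagged, which is why the abstract reports both.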
Motor Skill Learning in Children with Developmental Coordination Disorder
ERIC Educational Resources Information Center
Bo, Jin; Lee, Chi-Mei
2013-01-01
Children with Developmental Coordination Disorder (DCD) are characterized as having motor difficulties and learning impairment that may last well into adolescence and adulthood. Although behavioral deficits have been identified in many domains such as visuo-spatial processing, kinesthetic perception, and cross-modal sensory integration, recent…
Visual consciousness and bodily self-consciousness.
Faivre, Nathan; Salomon, Roy; Blanke, Olaf
2015-02-01
In recent years, consciousness has become a central topic in cognitive neuroscience. This review focuses on the relation between bodily self-consciousness - the feeling of being a subject in a body - and visual consciousness - the subjective experience associated with the perception of visual signals. Findings from clinical and experimental work have shown that bodily self-consciousness depends on specific brain networks and is related to the integration of signals from multiple sensory modalities including vision. In addition, recent experiments have shown that visual consciousness is shaped by the body, including vestibular, tactile, proprioceptive, and motor signals. Several lines of evidence suggest reciprocal relationships between vision and bodily signals, indicating that a comprehensive understanding of visual and bodily self-consciousness requires studying them in unison.
Kampmann, Peter; Kirchner, Frank
2014-01-01
With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. We motivate the use of a multi-modal tactile sensory system that combines static and dynamic force sensor arrays with an absolute force measurement system. This publication focuses on the development of a compact sensor interface for a fiber-optic sensor array, as optical measurement principles tend to have bulky interfaces. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of the approach. PMID:24743158
Impact of auditory-visual bimodality on lexical retrieval in Alzheimer's disease patients.
Simoes Loureiro, Isabelle; Lefebvre, Laurent
2015-01-01
The aim of this study was to generalize the positive impact of auditory-visual bimodality on lexical retrieval in Alzheimer's disease (AD) patients. In practice, the naming skills of healthy elderly persons improve when additional sensory signals are included. The hypothesis of this study was that the same influence would be observable in AD patients. Sixty elderly patients separated into three groups (healthy subjects, stage 1 AD patients, and stage 2 AD patients) were tested with a battery of naming tasks comprising three different modalities: a visual modality, an auditory modality, and a visual and auditory modality (bimodality). Our results reveal the positive influence of bimodality on the accuracy with which bimodal items are named (when compared with unimodal items) and their latency (when compared with unimodal auditory items). These results suggest that multisensory enrichment can improve lexical retrieval in AD patients.
School-aged children can benefit from audiovisual semantic congruency during memory encoding.
Heikkilä, Jenni; Tiippana, Kaisa
2016-05-01
Although we live in a multisensory world, children's memory has been usually studied concentrating on only one sensory modality at a time. In this study, we investigated how audiovisual encoding affects recognition memory. Children (n = 114) from three age groups (8, 10 and 12 years) memorized auditory or visual stimuli presented with a semantically congruent, incongruent or non-semantic stimulus in the other modality during encoding. Subsequent recognition memory performance was better for auditory or visual stimuli initially presented together with a semantically congruent stimulus in the other modality than for stimuli accompanied by a non-semantic stimulus in the other modality. This congruency effect was observed for pictures presented with sounds, for sounds presented with pictures, for spoken words presented with pictures and for written words presented with spoken words. The present results show that semantically congruent multisensory experiences during encoding can improve memory performance in school-aged children.
Age differences in suprathreshold sensory function.
Heft, Marc W; Robinson, Michael E
2014-02-01
While there is general agreement that vision and audition decline with aging, observations for the somatosensory senses and taste are less clear. The purpose of this study was to assess age differences in multimodal sensory perception in healthy, community-dwelling participants. Participants (100 females and 78 males aged 20-89 years) judged the magnitudes of sensations associated with graded levels of thermal, tactile, and taste stimuli in separate testing sessions using a cross-modality matching (CMM) procedure. During each testing session, participants also rated words that describe magnitudes of percepts associated with differing-level sensory stimuli. The words provided contextual anchors for the sensory ratings, and the word-rating task served as a control for the CMM. The mean sensory ratings were used as dependent variables in a MANOVA for each sensory domain, with age and sex as between-subject variables. These analyses were repeated with the grand means for the word ratings as a covariate to control for the rating task. The results of this study suggest that there are modest age differences for somatosensory and taste domains. While the magnitudes of these differences are mediated somewhat by age differences in the rating task, differences in warm temperature, tactile, and salty taste persist.
A review of invasive and non-invasive sensory feedback in upper limb prostheses.
Svensson, Pamela; Wijk, Ulrika; Björkman, Anders; Antfolk, Christian
2017-06-01
The constant challenge to restore sensory feedback in prosthetic hands has provided several research solutions, but virtually none has reached clinical fruition. A prosthetic hand with sensory feedback that closely imitates an intact hand and provides a natural feeling may induce the prosthetic hand to be included in the body image and also reinforces the control of the prosthesis. Areas covered: This review presents non-invasive sensory feedback systems such as mechanotactile, vibrotactile, electrotactile and combinational systems which combine the modalities; multi-haptic feedback. Invasive sensory feedback has been tried less, because of the inherent risk, but it has successfully shown to restore some afferent channels. In this review, invasive methods are also discussed, both extraneural and intraneural electrodes, such as cuff electrodes and transverse intrafascicular multichannel electrodes. The focus of the review is on non-invasive methods of providing sensory feedback to upper-limb amputees. Expert commentary: Invoking embodiment has shown to be of importance for the control of prosthesis and acceptance by the prosthetic wearers. It is a challenge to provide conscious feedback to cover the lost sensibility of a hand, not be overwhelming and confusing for the user, and to integrate technology within the constraint of a wearable prosthesis.
Neurobiology of Sensory Overresponsivity in Youth With Autism Spectrum Disorders.
Green, Shulamite A; Hernandez, Leanna; Tottenham, Nim; Krasileva, Kate; Bookheimer, Susan Y; Dapretto, Mirella
2015-08-01
More than half of youth with autism spectrum disorders (ASDs) have sensory overresponsivity (SOR), an extreme negative reaction to sensory stimuli. However, little is known about the neurobiological basis of SOR, and there are few effective treatments. Understanding whether SOR is due to an initial heightened sensory response or to deficits in regulating emotional reactions to stimuli has important implications for intervention. To determine differences in brain responses, habituation, and connectivity during exposure to mildly aversive sensory stimuli in youth with ASDs and SOR compared with youth with ASDs without SOR and compared with typically developing control subjects. Functional magnetic resonance imaging was used to examine brain responses and habituation to mildly aversive auditory and tactile stimuli in 19 high-functioning youths with ASDs and 19 age- and IQ-matched, typically developing youths (age range, 9-17 years). Brain activity was related to parents' ratings of children's SOR symptoms. Functional connectivity between the amygdala and orbitofrontal cortex was compared between ASDs subgroups with and without SOR and typically developing controls without SOR. The study dates were March 2012 through February 2014. Relative increases in blood oxygen level-dependent signal response across the whole brain and within the amygdala during exposure to sensory stimuli compared with fixation, as well as correlation between blood oxygen level-dependent signal change in the amygdala and orbitofrontal cortex. The mean age in both groups was 14 years and the majority in both groups (16 of 19 each) were male. Compared with neurotypical control participants, participants with ASDs displayed stronger activation in primary sensory cortices and the amygdala (P < .05, corrected). This activity was positively correlated with SOR symptoms after controlling for anxiety. The ASDs with SOR subgroup had decreased neural habituation to stimuli in sensory cortices and the amygdala compared with groups without SOR. Youth with ASDs without SOR showed a pattern of amygdala downregulation, with negative connectivity between the amygdala and orbitofrontal cortex (thresholded at z > 1.70, P < .05). Results demonstrate that youth with ASDs and SOR show sensorilimbic hyperresponsivity to mildly aversive tactile and auditory stimuli, particularly to multiple modalities presented simultaneously, and show that this hyperresponsivity is due to failure to habituate. In addition, findings suggest that a subset of youth with ASDs can regulate their responses through prefrontal downregulation of amygdala activity. Implications for intervention include minimizing exposure to multiple sensory modalities and building coping strategies for regulating emotional response to stimuli.
Sounds can boost the awareness of visual events through attention without cross-modal integration.
Pápai, Márta Szabina; Soto-Faraco, Salvador
2017-01-31
Cross-modal interactions can lead to enhancement of visual perception, even for visual events below awareness. However, the underlying mechanism is still unclear. Can purely bottom-up cross-modal integration break through the threshold of awareness? We used a binocular rivalry paradigm to measure perceptual switches after brief flashes or sounds which sometimes co-occurred. When flashes at the suppressed eye coincided with sounds, perceptual switches occurred earliest. Yet, contrary to the hypothesis of cross-modal integration, this facilitation never surpassed the prediction of probability summation of independent sensory signals. A follow-up experiment replicated the same pattern of results using silent gaps embedded in continuous noise instead of sounds. This manipulation should weaken putative sound-flash integration while keeping the gaps salient as bottom-up attention cues. Additional results showed that spatial congruency between flashes and sounds did not determine the effectiveness of cross-modal facilitation, which again was no better than probability summation. Thus, the present findings fail to fully support the hypothesis of bottom-up cross-modal integration, above and beyond the independent contribution of two transient signals, as an account of cross-modal enhancement of visual events below the level of awareness.
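The independence benchmark the authors test against, probability summation, has a simple closed form: if two signals each trigger a perceptual switch independently, the predicted combined rate is one minus the product of the two miss rates. A minimal sketch, with the function name and trial probabilities as illustrative assumptions, not values from the study:

```python
def prob_summation(p_a, p_b):
    """Detection rate predicted if two independent signals are merely
    combined by probability summation, with no genuine integration."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_b)

# Illustrative rates: a sound alone switches perception on 30% of trials,
# a flash alone on 40%; independence predicts 58% for the combination.
combined = prob_summation(0.30, 0.40)  # 0.58
```

Observed audio-visual facilitation exceeding this benchmark would support genuine integration; the study reports that it did not.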
Probing consciousness in a sensory-disconnected paralyzed patient.
Rohaut, Benjamin; Raimondo, Federico; Galanaud, Damien; Valente, Mélanie; Sitt, Jacobo Diego; Naccache, Lionel
2017-01-01
Diagnosis of consciousness can be very challenging in some clinical situations, such as severe sensory-motor impairment. We report the case study of a patient who presented a total "locked-in syndrome" associated with multi-sensory deafferentation (visual, auditory and tactile modalities) following a protuberantial infarction. In spite of this severe and extreme disconnection from the external world, we could detect reliable evidence of consciousness using a multivariate analysis of his high-density resting-state electroencephalogram. This EEG-based diagnosis was eventually confirmed by the clinical evolution of the patient. This case illustrates the potential importance of functional brain-imaging data for improving the diagnosis of consciousness and of cognitive abilities in critical situations in which the behavioral channel is compromised, such as deafferented locked-in syndrome.
The computational worm: spatial orientation and its neuronal basis in C. elegans.
Lockery, Shawn R
2011-10-01
Spatial orientation behaviors in animals are fundamental for survival but poorly understood at the neuronal level. The nematode Caenorhabditis elegans orients to a wide range of stimuli and has a numerically small and well-described nervous system, making it advantageous for investigating the mechanisms of spatial orientation. Recent work by the C. elegans research community has identified essential computational elements of the neural circuits underlying two orientation strategies that operate in five different sensory modalities. Analysis of these circuits reveals novel motifs, including simple circuits for computing temporal derivatives of sensory input and for integrating sensory input with behavioral state to generate adaptive behavior. These motifs constitute hypotheses concerning the identity and functionality of circuits controlling spatial orientation in higher organisms. Copyright © 2011 Elsevier Ltd. All rights reserved.
Utilizing sensory prediction errors for movement intention decoding: A new methodology
Nakamura, Keigo; Ando, Hideyuki
2018-01-01
We propose a new methodology for decoding the movement intentions of humans. It is motivated by the well-documented ability of the brain to predict the sensory outcomes of self-generated and imagined actions using so-called forward models. We propose to subliminally stimulate the sensory modality corresponding to a user's intended movement, and to decode the user's movement intention from their electroencephalography (EEG) by decoding for prediction errors, that is, whether the sensory prediction corresponding to the user's intended movement matches the subliminal sensory stimulation we induce. We tested our proposal in a binary wheelchair-turning task in which users thought of turning their wheelchair either left or right. We stimulated their vestibular system subliminally, toward either the left or the right, using a galvanic vestibular stimulator, and show that decoding for prediction errors from the EEG can radically improve movement-intention decoding performance. We observed an 87.2% median single-trial decoding accuracy across tested participants, with zero user training, within 96 ms of the stimulation, and with no additional cognitive load on the users because the stimulation was subliminal. PMID:29750195
Concept Representation Reflects Multimodal Abstraction: A Framework for Embodied Semantics
Fernandino, Leonardo; Binder, Jeffrey R.; Desai, Rutvik H.; Pendl, Suzanne L.; Humphries, Colin J.; Gross, William L.; Conant, Lisa L.; Seidenberg, Mark S.
2016-01-01
Recent research indicates that sensory and motor cortical areas play a significant role in the neural representation of concepts. However, little is known about the overall architecture of this representational system, including the role played by higher level areas that integrate different types of sensory and motor information. The present study addressed this issue by investigating the simultaneous contributions of multiple sensory-motor modalities to semantic word processing. With a multivariate fMRI design, we examined activation associated with 5 sensory-motor attributes—color, shape, visual motion, sound, and manipulation—for 900 words. Regions responsive to each attribute were identified using independent ratings of the attributes' relevance to the meaning of each word. The results indicate that these aspects of conceptual knowledge are encoded in multimodal and higher level unimodal areas involved in processing the corresponding types of information during perception and action, in agreement with embodied theories of semantics. They also reveal a hierarchical system of abstracted sensory-motor representations incorporating a major division between object interaction and object perception processes. PMID:25750259
Gomez-Ramirez, Manuel; Higgins, Beth A; Rycroft, Jane A; Owen, Gail N; Mahoney, Jeannette; Shpaner, Marina; Foxe, John J
2007-01-01
Ingestion of the nonproteinic amino acid theanine (5-N-ethylglutamine) has been shown to increase oscillatory brain activity in the so-called alpha band (8-14 Hz) during resting electroencephalographic recordings in humans. Independently, alpha band activity has been shown to be a key component in selective attentional processes. Here, we set out to assess whether theanine would cause modulation of anticipatory alpha activity during selective attentional deployments to stimuli in different sensory modalities, a paradigm in which robust alpha attention effects have previously been established. Electrophysiological data from 168 scalp electrode channels were recorded while participants performed a standard intersensory attentional cuing task. As in previous studies, significantly greater alpha band activity was measured over parieto-occipital scalp for attentional deployments to the auditory modality than to the visual modality. Theanine ingestion resulted in a substantial overall decrease in background alpha levels relative to placebo while subjects were actively performing this demanding attention task. Despite this decrease in background alpha activity, attention-related alpha effects were significantly greater for the theanine condition. This increase of attention-related anticipatory alpha over the right parieto-occipital scalp suggests that theanine may have a specific effect on the brain's attention circuitry. We conclude that theanine has clear psychoactive properties, and that it represents a potentially interesting, naturally occurring compound for further study, as it relates to the brain's attentional system.
Hearing, feeling or seeing a beat recruits a supramodal network in the auditory dorsal stream.
Araneda, Rodrigo; Renier, Laurent; Ebner-Karestinos, Daniela; Dricot, Laurence; De Volder, Anne G
2017-06-01
Hearing a beat recruits a wide neural network that involves the auditory cortex and motor planning regions. Perceiving a beat can potentially be achieved via vision or even touch, but it is currently not clear whether a common neural network underlies beat processing. Here, we used functional magnetic resonance imaging (fMRI) to test to what extent the neural network involved in beat processing is supramodal, that is, the same in the different sensory modalities. Brain activity changes in 27 healthy volunteers were monitored while they were attending to the same rhythmic sequences (with and without a beat) in audition, vision and the vibrotactile modality. We found a common neural network for beat detection in the three modalities that involved parts of the auditory dorsal pathway. Within this network, only the putamen and the supplementary motor area (SMA) showed specificity to the beat, while brain activity in the putamen covaried with beat detection speed. These results highlighted the implication of the auditory dorsal stream in beat detection, confirmed the important role played by the putamen in beat detection and indicated that the neural network for beat detection is mostly supramodal. This constitutes a new example of convergence of the same functional attributes into one centralized representation in the brain. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Franosch, Jan-Moritz P; Urban, Sebastian; van Hemmen, J Leo
2013-12-01
How can an animal learn from experience? How can it train sensors, such as the auditory or tactile system, based on other sensory input such as the visual system? Supervised spike-timing-dependent plasticity (supervised STDP) is a possible answer. Supervised STDP trains one modality using input from another one as "supervisor." Quite complex time-dependent relationships between the senses can be learned. Here we prove that under very general conditions, supervised STDP converges to a stable configuration of synaptic weights leading to a reconstruction of primary sensory input.
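The convergence result concerns a spike-timing-dependent rule in which the supervisor modality provides the postsynaptic timing signal. As a generic illustration of the pair-based exponential STDP update that such proofs typically build on (the parameter values and function name are assumptions, not the authors' formulation):

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the trained modality's presynaptic
    spike precedes the supervisor-driven postsynaptic spike, depress when
    it follows. Times in ms; tau sets the width of the plasticity window."""
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)   # causal pair: strengthen
    else:
        w -= a_minus * math.exp(dt / tau)   # anticausal pair: weaken
    return min(max(w, w_min), w_max)        # clip to hard bounds

# Causal pairing (pre 5 ms before post) strengthens the synapse;
# anticausal pairing weakens it.
w_up = stdp_update(0.5, t_pre=100.0, t_post=105.0)
w_down = stdp_update(0.5, t_pre=105.0, t_post=100.0)
```

Repeated application of bounded updates like this is the kind of iteration whose stable fixed points a convergence proof addresses.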
Gender Differences in the Primary Representational System according to Neurolinguistic Programming.
ERIC Educational Resources Information Center
Cassiere, M. F.; And Others
Neurolinguistic Programming (NLP) is a currently popular therapeutic modality in which individuals organize information through three basic sensory systems, one of which is the Primary Representational System (PRS). This study was designed to investigate gender differences in PRS according to the predicate preference method. It was expected that…
The Tactile Continuity Illusion
ERIC Educational Resources Information Center
Kitagawa, Norimichi; Igarashi, Yuka; Kashino, Makio
2009-01-01
We can perceive the continuity of an object or event by integrating spatially/temporally discrete sensory inputs. The mechanism underlying this perception of continuity has intrigued many researchers and has been well documented in both the visual and auditory modalities. The present study shows for the first time to our knowledge that an illusion…
Khan, Fehmeda Farrukh; Numan, Ahsan; Khawaja, Khadija Irfan; Atif, Ali; Fatima, Aziz; Masud, Faisal
2015-01-01
Early diagnosis of distal peripheral neuropathy (DSPN), the commonest complication of diabetes, helps prevent significant morbidity. Clinical parameters are useful for detection, but subjectivity and lack of operator proficiency often result in inaccuracies. The comparative diagnostic accuracy of the Diabetic Neuropathy Symptom (DNS) score and the Diabetic Neuropathy Examination (DNE) score in detecting DSPN confirmed by nerve conduction studies (NCS) has not been evaluated. This study compares the performance of these scores in predicting the presence of electrophysiologically proven DSPN. The objective of this study was to compare the diagnostic accuracy of the DNS and DNE scores in detecting NCS-proven DSPN in type-2 diabetics, and to determine the frequency of subclinical DSPN among type-2 diabetics. In this cross-sectional study, the DNS and DNE scores were determined in 110 diagnosed type-2 diabetic patients. NCS were carried out, and the amplitudes, velocities and latencies of sensory and motor nerves in the lower limb were recorded. Comparison between the two clinical diagnostic modalities and NCS using Pearson's chi-square test showed a significant association between NCS and DNE scores (p = .003, specificity 93%). The DNS score performed poorly in comparison (p = .068, specificity 77%). When the two scores were taken in combination, the specificity in diagnosing DSPN was greater (p = .018, specificity 96%) than with either alone. Thirty-three percent of patients had subclinical neuropathy. The DNE score, alone and in combination with the DNS score, is reliable in predicting DSPN and is more specific than the DNS score in evaluating DSPN. Both tests lack sensitivity. Patients without any evidence of clinical neuropathy manifest abnormalities on NCS.
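Specificity figures like those reported (93%, 77%, 96%) come directly from the confusion counts of each score against the NCS reference standard. A minimal sketch; the counts below are hypothetical for illustration, not the study's data:

```python
def diagnostic_stats(tp, fn, tn, fp):
    """Sensitivity and specificity of a clinical score against a
    reference standard (here, nerve conduction studies).
    tp/fn: reference-positive patients the score does/does not flag;
    tn/fp: reference-negative patients the score does/does not clear."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts: a score flags 14 of 40 NCS-positive patients
# and clears 65 of 70 NCS-negative patients.
sens, spec = diagnostic_stats(tp=14, fn=26, tn=65, fp=5)
# sens = 0.35 (poor sensitivity), spec ≈ 0.93 (high specificity)
```

This pattern, high specificity with low sensitivity, matches the abstract's conclusion that both scores confirm DSPN well but miss subclinical cases.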
Clinical presentation, imaging findings, and prognosis of spinal dural arteriovenous fistula.
Lee, Jookyung; Lim, Young-Min; Suh, Dae Chul; Rhim, Seung Chul; Kim, Sang Joon; Kim, Kwang-Kuk
2016-04-01
Spinal dural arteriovenous fistula (SDAVF) is a relatively common acquired vascular malformation of the spinal cord. Assessment of an SDAVF is often difficult because of non-specific findings on non-invasive imaging modalities. Diagnosis of an SDAVF is often delayed, and some patients receive unnecessary treatment or experience treatment delays, often resulting in a poor outcome. The aim of this study was to characterize the clinical presentation, typical imaging findings, and long-term outcome of SDAVF. Forty patients (13 women, 27 men; mean age 58.18 ± 14.75 [SD] years) who were treated at our hospital from June 1992 to March 2014 were retrospectively reviewed. We investigated the baseline characteristics, clinical presentation, imaging findings, treatment modalities, and outcomes of the patients. The most common clinical presentation was a sensory symptom (80%), followed by motor weakness (70%) and sphincter dysfunction (62.5%). Roughly one-third (32.5%) of patients had a stepwise progression of fluctuating weakness and sensory symptoms, but the most common course was chronic progressive myelopathic symptoms (47.5%). Thirty-four patients (85%) had T2 signal change on spinal cord MRI, indicative of cord edema. Thirty-eight patients had typical perimedullary vessel flow voids on T2-weighted MRI. Twenty-eight patients were treated with endovascular embolization, five patients underwent surgery, and four patients underwent both. Clinical outcome was determined by the severity of the initial deficit (p=0.008), the extent of cord edema (p=0.010), treatment failure (p=0.004), and a residual fistula (p=0.017). SDAVF causes a treatable myelopathy, so early diagnosis and intervention are essential. Copyright © 2015 Elsevier Ltd. All rights reserved.
The synaptic pharmacology underlying sensory processing in the superior colliculus.
Binns, K E
1999-10-01
The superior colliculus (SC) is one of the most ancient regions of the vertebrate central sensory system. In this hub afferents from several sensory pathways converge, and an extensive range of neural circuits enable primary sensory processing, multi-sensory integration and the generation of motor commands for orientation behaviours. The SC has a laminar structure and is usually considered in two parts; the superficial visual layers and the deep multi-modal/motor layers. Neurones in the superficial layers integrate visual information from the retina, cortex and other sources, while the deep layers draw together data from many cortical and sub-cortical sensory areas, including the superficial layers, to generate motor commands. Functional studies in anaesthetized subjects and in slice preparations have used pharmacological tools to probe some of the SC's interacting circuits. The studies reviewed here reveal important roles for ionotropic glutamate receptors in the mediation of sensory inputs to the SC and in transmission between the superficial and deep layers. N-methyl-D-aspartate receptors appear to have special responsibility for the temporal matching of retinal and cortical activity in the superficial layers and for the integration of multiple sensory data-streams in the deep layers. Sensory responses are shaped by intrinsic inhibitory mechanisms mediated by GABA(A) and GABA(B) receptors and influenced by nicotinic acetylcholine receptors. These sensory and motor-command activities of SC neurones are modulated by levels of arousal through extrinsic connections containing GABA, serotonin and other transmitters. 
It is possible to naturally stimulate many of the SC's sensory and non-sensory inputs either independently or simultaneously and this brain area is an ideal location in which to study: (a) interactions between inputs from the same sensory system; (b) the integration of inputs from several sensory systems; and (c) the influence of non-sensory systems on sensory processing.
Modulation of shark prey capture kinematics in response to sensory deprivation.
Gardiner, Jayne M; Atema, Jelle; Hueter, Robert E; Motta, Philip J
2017-02-01
The ability of predators to modulate prey capture in response to the size, location, and behavior of prey is critical to successful feeding on a variety of prey types. Modulating in response to changes in sensory information may be critical to successful foraging in a variety of environments. Three shark species with different feeding morphologies and behaviors were filmed using high-speed videography while capturing live prey: the ram-feeding blacktip shark, the ram-biting bonnethead, and the suction-feeding nurse shark. Sharks were examined intact and after sensory information was blocked (olfaction, vision, mechanoreception, and electroreception, alone and in combination), to elucidate the contribution of the senses to the kinematics of prey capture. In response to sensory deprivation, the blacktip shark demonstrated the greatest amount of modulation, followed by the nurse shark. In the absence of olfaction, blacktip sharks open the jaws slowly, suggestive of less motivation. Without lateral line cues, blacktip sharks capture prey from greater horizontal angles using increased ram. When visual cues are absent, blacktip sharks elevate the head earlier and to a greater degree, allowing them to overcome imprecise position of the prey relative to the mouth, and capture prey using decreased ram, while suction remains unchanged. When visual cues are absent, nurse sharks open the mouth wider, extend the labial cartilages further, and increase suction while simultaneously decreasing ram. Unlike some bony fish, neither species switches feeding modalities (i.e. from ram to suction or vice versa). Bonnetheads failed to open the mouth when electrosensory cues were blocked, but otherwise little to no modulation was found in this species. 
These results suggest that prey capture may be less plastic in elasmobranchs than in bony fishes, possibly due to anatomical differences, and that the ability to modulate feeding kinematics in response to available sensory information varies by species, rather than by feeding modality. Copyright © 2016 Elsevier GmbH. All rights reserved.
Training Modalities to Increase Sensorimotor Adaptability
NASA Technical Reports Server (NTRS)
Bloomberg, J. J.; Mulavara, A. P.; Peters, B. T.; Brady, R.; Audas, C.; Cohen, H. S.
2009-01-01
During the acute phase of adaptation to novel gravitational environments, sensorimotor disturbances have the potential to disrupt the ability of astronauts to perform required mission tasks. The goal of our current series of studies is to develop a sensorimotor adaptability (SA) training program designed to facilitate recovery of functional capabilities when astronauts transition to different gravitational environments. The project has conducted a series of studies investigating the efficacy of treadmill training combined with a variety of sensory challenges (incongruent visual input, support-surface instability) designed to increase adaptability. SA training using a treadmill combined with exposure to altered visual input was effective in producing increased adaptability in a more complex over-ground ambulatory task on an obstacle course. This confirms that, for a complex task like walking, treadmill training contains enough of the critical features of overground walking to be an effective training modality. SA training can be optimized by using a periodized training schedule. Test sessions that each contain short-duration exposures to multiple perturbation stimuli allow subjects to acquire a greater ability to rapidly reorganize appropriate response strategies when encountering a novel sensory environment. Using a treadmill mounted on top of a six-degree-of-freedom motion-base platform, we investigated locomotor training responses produced by subjects introduced to a dynamic walking surface combined with alterations in visual flow. Subjects who received this training had improved locomotor performance and faster reaction times when exposed to the novel sensory stimuli compared with control subjects. Results also demonstrate that individual sensory biases (i.e., increased visual dependency) can predict adaptive responses to novel sensory environments, suggesting that individual training prescriptions can be developed to enhance adaptability.
These data indicate that SA training can be effectively integrated with treadmill exercise and optimized to provide a unique system that combines multiple training requirements in a single countermeasure system. Learning Objectives: The development of a new countermeasure approach that enhances sensorimotor adaptability will be discussed.
Priming within and across modalities: exploring the nature of rCBF increases and decreases.
Badgaiyan, R D; Schacter, D L; Alpert, N M
2001-02-01
Neuroimaging studies suggest that within-modality priming is associated with reduced regional cerebral blood flow (rCBF) in the extrastriate area, whereas cross-modality priming is associated with increased rCBF in prefrontal cortex. To characterize the nature of rCBF changes in within- and cross-modality priming, we conducted two neuroimaging experiments using positron emission tomography (PET). In experiment 1, rCBF changes in within-modality auditory priming on a word stem completion task were observed under same- and different-voice conditions. Both conditions were associated with decreased rCBF in extrastriate cortex. In the different-voice condition there were additional rCBF changes in the middle temporal gyrus and prefrontal cortex. Results suggest that the extrastriate involvement in within-modality priming is sensitive to a change in sensory modality of target stimuli between study and test, but not to a change in the feature of a stimulus within the same modality. In experiment 2, we studied cross-modality priming on a visual stem completion test after encoding under full- and divided-attention conditions. Increased rCBF in the anterior prefrontal cortex was observed in the full- but not in the divided-attention condition. Because explicit retrieval is compromised after encoding under the divided-attention condition, prefrontal involvement in cross-modality priming indicates recruitment of an aspect of explicit retrieval mechanism. The aspect of explicit retrieval that is most likely to be involved in cross-modality priming is the familiarity effect. Copyright 2001 Academic Press.
Salame, Talal H; Blinkhorn, Antony; Karami, Zahra
2018-01-01
Quantitative Sensory Testing (QST) has been used in clinical and experimental settings to establish sensory assessment for different types of pain, and may be a useful tool for the assessment of orofacial pain, but this premise needs to be tested. The aim of the study was to evaluate responses to thermal stimuli between painful and non-painful facial sites in subjects with orofacial pain using QST. A total of 60 participants (50 females: 28-83 years; 10 males: 44-81 years) with unilateral orofacial pain were recruited from the Orofacial Pain Clinic at the Pain Management and Research Centre, Royal North Shore Hospital, Sydney, Australia. The study followed the method of limits of the German Research Network, testing four thermal threshold modalities (Warm Sensation, Cold Sensation, Heat Pain and Cold Pain) using a TSA-II Neurosensory Analyser. The results were compared with those from the unaffected side of the same patient on the same area, and a single t test statistical analysis was performed, where a p value of less than 0.05 was considered significant. The mean difference between the pain side and the non-pain side was 0.48 °C ± 1.5 for Cold Sensation (t = 2.466, p = 0.017), 0.68 °C ± 2.04 for Warm Sensation (t = -2.573, p = 0.013), 2.56 °C ± 2.74 for Cold Pain (t = 7.238, p < 0.001) and -1.21 °C ± 2.59 for Heat Pain (t = -3.639, p = 0.001). The study showed that QST methods using thermal stimuli can be used to evaluate sensory dysfunction in orofacial pain patients using the specific parameters of cool and warm sensation, and cold and heat pain.
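The side-to-side comparison described amounts to a t statistic on within-patient differences. A stdlib-only sketch; treating it as a paired test, and the difference values themselves, are assumptions for illustration rather than the study's analysis:

```python
import math
import statistics

def paired_t(diffs):
    """Paired t statistic for within-subject differences
    (affected side minus unaffected side), with degrees of freedom."""
    n = len(diffs)
    mean_d = statistics.fmean(diffs)
    sd_d = statistics.stdev(diffs)          # sample standard deviation
    t = mean_d / (sd_d / math.sqrt(n))      # t = mean / standard error
    return t, n - 1

# Hypothetical cold-sensation threshold differences (°C) for six patients:
t_stat, df = paired_t([0.4, 0.7, 0.2, 0.6, 0.5, 0.5])
```

The resulting t is compared against the t distribution with n - 1 degrees of freedom at the study's 0.05 significance level.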
Carriot, Jérome; Jamali, Mohsen; Chacron, Maurice J; Cullen, Kathleen E
2014-06-11
It is widely believed that sensory systems are optimized for processing stimuli occurring in the natural environment. However, it remains unknown whether this principle applies to the vestibular system, which contributes to essential brain functions ranging from the most automatic reflexes to spatial perception and motor coordination. Here we quantified, for the first time, the statistics of natural vestibular inputs experienced by freely moving human subjects during typical everyday activities. Although previous studies have found that the power spectra of natural signals across sensory modalities decay as a power law (i.e., as 1/f(α)), we found that this did not apply to natural vestibular stimuli. Instead, power decreased slowly at lower and more rapidly at higher frequencies for all motion dimensions. We further establish that this unique stimulus structure is the result of active motion as well as passive biomechanical filtering occurring before any neural processing. Notably, the transition frequency (i.e., the frequency at which power starts to decrease rapidly) was lower when subjects passively experienced sensory stimulation than when they actively controlled stimulation through their own movement. In contrast to signals measured at the head, the spectral content of externally generated (i.e., passive) environmental motion did follow a power law. Thus, transformations caused by both motor control and biomechanics shape the statistics of natural vestibular stimuli before neural processing. We suggest that the unique structure of natural vestibular stimuli will have important consequences on the neural coding strategies used by this essential sensory system to represent self-motion in everyday life. Copyright © 2014 the authors.
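The power-law decay discussed, power falling as 1/f^α, is conventionally estimated as the slope of log power versus log frequency. A minimal sketch of that estimate, not the authors' analysis pipeline:

```python
import math

def fit_power_law(freqs, power):
    """Least-squares slope of log-power vs log-frequency; for a spectrum
    P(f) proportional to 1/f**alpha, the fitted slope is -alpha."""
    xs = [math.log(f) for f in freqs]
    ys = [math.log(p) for p in power]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # estimated alpha

# An exact 1/f^2 spectrum recovers alpha = 2:
freqs = [1, 2, 4, 8, 16]
alpha = fit_power_law(freqs, [1.0 / f ** 2 for f in freqs])
```

The abstract's point is that head-measured vestibular spectra deviate from any single such slope, decaying slowly at low and steeply at high frequencies.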
Relational Associative Learning Induces Cross-Modal Plasticity in Early Visual Cortex
Headley, Drew B.; Weinberger, Norman M.
2015-01-01
Neurobiological theories of memory posit that the neocortex is a storage site of declarative memories, a hallmark of which is the association of two arbitrary neutral stimuli. Early sensory cortices, once assumed uninvolved in memory storage, recently have been implicated in associations between neutral stimuli and reward or punishment. We asked whether links between neutral stimuli also could be formed in early visual or auditory cortices. Rats were presented with a tone paired with a light using a sensory preconditioning paradigm that enabled later evaluation of successful association. Subjects that acquired this association developed enhanced sound evoked potentials in their primary and secondary visual cortices. Laminar recordings localized this potential to cortical Layers 5 and 6. A similar pattern of activation was elicited by microstimulation of primary auditory cortex in the same subjects, consistent with a cortico-cortical substrate of association. Thus, early sensory cortex has the capability to form neutral stimulus associations. This plasticity may constitute a declarative memory trace between sensory cortices. PMID:24275832
Multi-sensory integration in a small brain
NASA Astrophysics Data System (ADS)
Gepner, Ruben; Wolk, Jason; Gershow, Marc
Understanding how fluctuating multi-sensory stimuli are integrated and transformed in neural circuits has proved a difficult task. To address this question, we study the sensorimotor transformations occurring in the brain of the Drosophila larva, a tractable model system with about 10,000 neurons. Using genetic tools that allow us to manipulate the activity of individual brain cells through the larva's transparent body, we observe the stochastic decisions made by freely behaving animals as their visual and olfactory environments fluctuate independently. We then use simple linear-nonlinear models to correlate outputs with relevant features of the inputs, and adaptive filtering to track changes in the parameters the larva's brain uses to make decisions. We show how these techniques allow us to probe how the statistics of stimuli from different sensory modalities combine to affect behavior, and can potentially guide our understanding of how neural circuits are anatomically and functionally integrated. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
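A linear-nonlinear model of the kind mentioned filters the stimulus with a linear kernel and passes the result through a static nonlinearity to produce a decision probability. A minimal sketch; the kernel values and function name are illustrative, not fitted to larval data:

```python
import math

def ln_response(stimulus, kernel, gain=1.0, threshold=0.0):
    """Linear-nonlinear model: convolve the stimulus with a linear
    filter, then map the filtered drive through a sigmoid to obtain
    a probability-like response at each time step."""
    k = len(kernel)
    out = []
    for t in range(k - 1, len(stimulus)):
        # Linear stage: weighted sum over the recent stimulus history.
        drive = sum(kernel[i] * stimulus[t - i] for i in range(k))
        # Nonlinear stage: logistic squashing to (0, 1).
        out.append(1.0 / (1.0 + math.exp(-gain * (drive - threshold))))
    return out

# A brief step in the stimulus pushes the response above the 0.5 baseline
# while the step passes through the kernel's memory window.
rates = ln_response([0, 0, 1, 1, 0, 0], kernel=[0.6, 0.3, 0.1])
```

With two such filters, one per modality, summed before the nonlinearity, the same scheme expresses how visual and olfactory fluctuations jointly drive behavior.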
The associations between multisensory temporal processing and symptoms of schizophrenia.
Stevenson, Ryan A; Park, Sohee; Cochran, Channing; McIntosh, Lindsey G; Noel, Jean-Paul; Barense, Morgan D; Ferber, Susanne; Wallace, Mark T
2017-01-01
Recent neurobiological accounts of schizophrenia have included an emphasis on changes in sensory processing. These sensory and perceptual deficits can have a cascading effect onto higher-level cognitive processes and clinical symptoms. One form of sensory dysfunction that has been consistently observed in schizophrenia is altered temporal processing. In this study, we investigated temporal processing within and across the auditory and visual modalities in individuals with schizophrenia (SCZ) and age-matched healthy controls. Individuals with SCZ showed auditory and visual temporal processing abnormalities, as well as multisensory temporal processing dysfunction that extended beyond that attributable to unisensory processing dysfunction. Most importantly, these multisensory temporal deficits were associated with the severity of hallucinations. This link between atypical multisensory temporal perception and clinical symptomatology suggests that clinical symptoms of schizophrenia may be at least partly a result of cascading effects from (multi)sensory disturbances. These results are discussed in terms of underlying neural bases and the possible implications for remediation. Copyright © 2016 Elsevier B.V. All rights reserved.
Auditory-visual object recognition time suggests specific processing for animal sounds.
Suied, Clara; Viaud-Delmon, Isabelle
2009-01-01
Recognizing an object requires binding together several cues, which may be distributed across different sensory modalities, and ignoring competing information originating from other objects. In addition, knowledge of the semantic category of an object is fundamental to determine how we should react to it. Here we investigate the role of semantic categories in the processing of auditory-visual objects. We used an auditory-visual object-recognition task (go/no-go paradigm). We compared recognition times for two categories: a biologically relevant one (animals) and a non-biologically relevant one (means of transport). Participants were asked to react as fast as possible to target objects, presented in the visual and/or the auditory modality, and to withhold their response for distractor objects. A first main finding was that, when participants were presented with unimodal or bimodal congruent stimuli (an image and a sound from the same object), similar reaction times were observed for all object categories. Thus, there was no advantage in the speed of recognition for biologically relevant compared to non-biologically relevant objects. A second finding was that, in the presence of a biologically relevant auditory distractor, the processing of a target object was slowed down, whether or not it was itself biologically relevant. It seems impossible to effectively ignore an animal sound, even when it is irrelevant to the task. These results suggest a specific and mandatory processing of animal sounds, possibly due to phylogenetic memory and consistent with the idea that hearing is particularly efficient as an alerting sense. They also highlight the importance of taking into account the auditory modality when investigating the way object concepts of biologically relevant categories are stored and retrieved.
Butler, Blake E; Chabot, Nicole; Lomber, Stephen G
2016-09-01
The superior colliculus (SC) is a midbrain structure central to orienting behaviors. The organization of descending projections from sensory cortices to the SC has garnered much attention; however, rarely have projections from multiple modalities been quantified and contrasted, allowing for meaningful conclusions within a single species. Here, we examine corticotectal projections from visual, auditory, somatosensory, motor, and limbic cortices via retrograde pathway tracers injected throughout the superficial and deep layers of the cat SC. As anticipated, the majority of cortical inputs to the SC originate in the visual cortex. In fact, each field implicated in visual orienting behavior makes a substantial projection. Conversely, only one area of the auditory orienting system, the auditory field of the anterior ectosylvian sulcus (fAES), and no area involved in somatosensory orienting, shows significant corticotectal inputs. Although small relative to visual inputs, the projection from the fAES is of particular interest, as it represents the only bilateral cortical input to the SC. This detailed, quantitative study allows for comparison across modalities in an animal that serves as a useful model for both auditory and visual perception. Moreover, the differences in patterns of corticotectal projections between modalities inform the ways in which orienting systems are modulated by cortical feedback. J. Comp. Neurol. 524:2623-2642, 2016. © 2016 Wiley Periodicals, Inc.
Filling-in visual motion with sounds.
Väljamäe, A; Soto-Faraco, S
2008-10-01
Information about the motion of objects can be extracted by multiple sensory modalities, and, as a consequence, object motion perception typically involves the integration of multi-sensory information. Often, in naturalistic settings, the flow of such information can be rather discontinuous (e.g. a cat racing through the furniture in a cluttered room is partly seen and partly heard). This study addressed audio-visual interactions in the perception of time-sampled object motion by measuring adaptation after-effects. We found significant auditory after-effects following adaptation to unisensory auditory and visual motion in depth, sampled at 12.5 Hz. The visually induced (cross-modal) auditory motion after-effect was eliminated if visual adaptors flashed at half of the rate (6.25 Hz). Remarkably, the addition of the high-rate acoustic flutter (12.5 Hz) to this ineffective, sparsely time-sampled, visual adaptor restored the auditory after-effect to a level comparable to what was seen with high-rate bimodal adaptors (flashes and beeps). Our results suggest that this auditory-induced reinstatement of the motion after-effect from the poor visual signals resulted from the occurrence of sound-induced illusory flashes. This effect was found to be dependent both on the directional congruency between modalities and on the rate of auditory flutter. The auditory filling-in of time-sampled visual motion supports the feasibility of using reduced frame rate visual content in multisensory broadcasting and virtual reality applications.
Biases in rhythmic sensorimotor coordination: effects of modality and intentionality.
Debats, Nienke B; Ridderikhoff, Arne; de Boer, Betteco J; Peper, C Lieke E
2013-08-01
Sensorimotor biases were examined for intentional (tracking task) and unintentional (distractor task) rhythmic coordination. The tracking task involved unimanual tracking of either an oscillating visual signal or the passive movements of the contralateral hand (proprioceptive signal). In both conditions the required coordination patterns (isodirectional and mirror-symmetric) were defined relative to the body midline and the hands were not visible. For proprioceptive tracking the two patterns did not differ in stability, whereas for visual tracking the isodirectional pattern was performed more stably than the mirror-symmetric pattern. However, when visual feedback about the unimanual hand movements was provided during visual tracking, the isodirectional pattern ceased to be dominant. Together these results indicated that the stability of the coordination patterns did not depend on the modality of the target signal per se, but on the combination of sensory signals that needed to be processed (unimodal vs. cross-modal). The distractor task entailed rhythmic unimanual movements during which a rhythmic visual or proprioceptive distractor signal had to be ignored. The observed biases were similar to those for intentional coordination, suggesting that intentionality did not affect the underlying sensorimotor processes qualitatively. Intentional tracking was characterized by active sensory pursuit, through muscle activity in the passively moved arm (proprioceptive tracking task) and rhythmic eye movements (visual tracking task). Presumably this pursuit afforded predictive information serving the coordination process. Copyright © 2013 Elsevier B.V. All rights reserved.
'Bodily precision': a predictive coding account of individual differences in interoceptive accuracy.
Ainley, Vivien; Apps, Matthew A J; Fotopoulou, Aikaterini; Tsakiris, Manos
2016-11-19
Individuals differ in their awareness of afferent information from within their bodies, which is typically assessed by a heartbeat perception measure of 'interoceptive accuracy' (IAcc). Neural and behavioural correlates of this trait have been investigated, but a theoretical explanation has yet to be presented. Building on recent models that describe interoception within the free energy/predictive coding framework, this paper applies similar principles to IAcc, proposing that individual differences in IAcc depend on 'precision' in interoceptive systems, i.e. the relative weight accorded to 'prior' representations and 'prediction errors' (that part of incoming interoceptive sensation not accounted for by priors), at various levels within the cortical hierarchy and between modalities. Attention has the effect of optimizing precision both within and between sensory modalities. Our central assumption is that people with high IAcc are able, with attention, to prioritize interoception over other sensory modalities and can thus adjust the relative precision of their interoceptive priors and prediction errors, where appropriate, given their personal history. This characterization explains key findings within the interoception literature; links results previously seen as unrelated or contradictory; and may have important implications for understanding cognitive, behavioural and psychopathological consequences of both high and low interoceptive awareness. This article is part of the themed issue 'Interoception beyond homeostasis: affect, cognition and mental health'. © 2016 The Author(s).
Older users, multimodal reminders and assisted living technology.
Warnock, David; McGee-Lennon, Marilyn; Brewster, Stephen
2012-09-01
The primary users of assisted living technology are older people who are likely to have one or more sensory impairments. Multimodal technology allows users to interact via non-impaired senses and provides alternative ways to interact if primary interaction methods fail. An empirical user study was carried out with older participants which evaluated the performance, disruptiveness and subjective workload of visual, audio, tactile and olfactory notifications, then compared the results with earlier findings in younger participants. It was found that disruption and subjective workload were not affected by modality, although some modalities were more effective at delivering information accurately. It is concluded that although further studies need to be carried out in real-world settings, the findings support the argument for multiple modalities in assisted living technology.
Attention to olfaction. A psychophysical investigation.
Spence, C; McGlone, F P; Kettenmann, B; Kobal, G
2001-06-01
Olfaction is unique among the senses in that signals from the peripheral sensory receptors bypass the thalamus on their way to the cortex. The fact that olfactory stimuli are not gated by the thalamus has led some researchers to suggest that people may be unable to selectively direct their attention toward the olfactory modality. We examined this issue in an experiment where participants made speeded intensity (strong vs weak)-discrimination responses to an unpredictable sequence of olfactory and visual stimuli. Attention was directed to either olfaction or to vision by means of an informative cue that predicted the likely modality for the upcoming target on the majority of trials. Participants responded more rapidly when the target was presented in the expected rather than the unexpected modality, showing that people can selectively attend to olfaction.
2014-01-01
Background Questionnaire-based studies suggest atypical sensory perception in over 90% of individuals with autism spectrum conditions (ASC). Sensory questionnaire-based studies in ASC mainly record parental reports of their child’s sensory experience; less is known about sensory reactivity in adults with ASC. Given the DSM-5 criteria for ASC now include sensory reactivity, there is a need for an adult questionnaire investigating basic sensory functioning. We aimed to develop and validate the Sensory Perception Quotient (SPQ), which assesses basic sensory hyper- and hyposensitivity across all five modalities. Methods A total of 359 adults with (n = 196) and without (n = 163) ASC were asked to fill in the SPQ, the Sensory Over-Responsivity Inventory (SensOR) and the Autism-Spectrum Quotient (AQ) online. Results Adults with ASC reported more sensory hypersensitivity on the SPQ compared to controls (P < .001). SPQ scores were correlated with AQ scores both across groups (r = -.38) and within the ASC (r = -.18) and control groups (r = -.15). Principal component analyses conducted separately in both groups indicated that one factor comprising 35 items consistently assesses sensory hypersensitivity. The SPQ showed high internal consistency for both the total SPQ (Cronbach’s alpha = .92) and the reduced 35-item version (alpha = .93). The SPQ was significantly correlated with the SensOR across groups (r = -.46) and within the ASC (r = -.49) and control group (r = -.21). Conclusions The SPQ shows good internal consistency and concurrent validity and differentiates between adults with and without ASC. Adults with ASC report more sensitivity to sensory stimuli on the SPQ. Finally, greater sensory sensitivity is associated with more autistic traits. The SPQ provides a new tool to measure individual differences on this dimension. PMID:24791196
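Cronbach's alpha, the internal-consistency statistic reported for the SPQ above, is computed from a respondents-by-items score matrix with the standard formula; the sketch below uses synthetic data, not the SPQ itself.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Items that all track a common latent trait yield alpha near 1; mutually uncorrelated items yield alpha near 0 (or below).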
Selectivity and Longevity of Peripheral-Nerve and Machine Interfaces: A Review
Ghafoor, Usman; Kim, Sohee; Hong, Keum-Shik
2017-01-01
For those individuals with upper-extremity amputation, a daily normal living activity is no longer possible or it requires additional effort and time. With the aim of restoring their sensory and motor functions, theoretical and technological investigations have been carried out in the field of neuroprosthetic systems. For transmission of sensory feedback, several interfacing modalities including indirect (non-invasive), direct-to-peripheral-nerve (invasive), and cortical stimulation have been applied. Peripheral nerve interfaces demonstrate an edge over cortical interfaces, given the difficulty of reliably attaining cortical brain signals. The peripheral nerve interfaces are highly dependent on interface designs and are required to be biocompatible with the nerves to achieve prolonged stability and longevity. Another criterion is the selection of nerves that allows minimal invasiveness and damage as well as high selectivity for a large number of nerve fascicles. In this paper, we review the nerve-machine interface modalities noted above with more focus on peripheral nerve interfaces, which are responsible for providing sensory feedback. The invasive interfaces for recording and stimulation of electro-neurographic signals include intra-fascicular, regenerative-type interfaces that provide multiple contact channels to a group of axons inside the nerve and the extra-neural-cuff-type interfaces that enable interaction with many axons around the periphery of the nerve. Section Current Prosthetic Technology summarizes the advancements made to date in the field of neuroprosthetics toward the achievement of a bidirectional nerve-machine interface with more focus on sensory feedback. In the Discussion section, the authors propose a hybrid interface technique for achieving better selectivity and long-term stability using the available nerve interfacing techniques. PMID:29163122
Caillé, Soline; Samson, Alain; Wirth, Jérémie; Diéval, Jean-Baptiste; Vidal, Stéphane; Cheynier, Véronique
2010-02-15
It is widely accepted that oxygen contributes to wine development by impacting its colour, aromatic bouquet, and mouth-feel properties. The wine industry can now also take advantage of engineered solutions to deliver known amounts of oxygen into bottles through the closures. This study was aimed at monitoring the influence of oxygen pick-up, before (micro-oxygenation, Mox) and after (nano-oxygenation) bottling, on wine sensory evolution. Red Grenache wines were prepared either by flash release (FR) or traditional soaking (Trad) and with or without Mox during élevage (FR+noMox, FR+Mox, Trad+noMox, Trad+Mox). The rate of nano-oxygenation was controlled by combining consistent oxygen transfer rate (OTR) closures and different oxygen controlled storage conditions. Wine sensory characteristics were analyzed by sensory profile, at bottling (T0) and after 5 and 10 months of ageing, by a panel of trained judges. Effects of winemaking techniques and OTR were analyzed by multivariate analysis (principal component analysis and agglomerative hierarchical clustering) and analysis of variance. Results showed that, at bottling, Trad wines were perceived more animal and FR wines more bitter and astringent. Mox wines showed more orange shade. At 5 and 10 months, visual and olfactory differences were observed according to the OTR levels: modalities with higher oxygen ingress were darker and fruitier but also perceived significantly less animal than modalities with lower oxygen. Along the 10 months of ageing, the influence of OTR became more important as shown by increased significance levels of the observed differences. As the mouth-feel properties of the wines were mainly dictated by winemaking techniques, OTR had only little impact on "in mouth" attributes. Copyright 2009 Elsevier B.V. All rights reserved.
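The multivariate treatment of sensory-profile data described above (principal component analysis, here via SVD of the centred attribute matrix) can be sketched as follows; the data, attribute counts, and function name are invented for illustration and are not the study's own.

```python
import numpy as np

def pca(scores, n_components=2):
    """PCA via SVD of the mean-centred (samples x attributes) matrix.
    Returns scores on the first components and the fraction of total
    variance each retained component explains."""
    X = np.asarray(scores, dtype=float)
    X = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var_explained = s**2 / (s**2).sum()
    return X @ Vt[:n_components].T, var_explained[:n_components]
```

Plotting the first two component scores per wine, as sensory studies typically do, separates samples along the dominant perceptual axes; clustering can then be run on those scores.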
Multi-modal pain measurements in infants
Worley, A.; Fabrizi, L.; Boyd, S.; Slater, R.
2012-01-01
A non-invasive integrated method was developed to measure neural and behavioural responses to peripheral sensory and noxious stimulation in human infants. The introduction of a novel event-detection interface allows synchronous recording of: (i) muscle and central nervous system activity with surface electromyography (EMG), scalp electroencephalography (EEG) and near-infrared spectroscopy (NIRS); (ii) behavioural responses with video-recording and (iii) autonomic responses (heart rate, oxygen saturation, respiratory rate and cardiovascular activity) with electrocardiography (ECG) and pulse oximetry. The system can detect noxious heel lance and touch stimuli with precision (33 μs and 624 μs respectively) and accuracy (523 μs and 256 μs) and has 100% sensitivity and specificity for both types of stimulation. Its ability to detect response latencies accurately was demonstrated by a shift in latency of the vertex potential of 20.7 ± 15.7 ms (n = 6 infants), following touch of the heel and of the shoulder, reflecting the distance between the two sites. This integrated system has provided reliable and reproducible measurements of responses to sensory and noxious stimulation in human infants on more than 100 test occasions. PMID:22285660
Watanabe, Keiko; Masaoka, Yuri; Kawamura, Mitsuru; Yoshida, Masaki; Koiwa, Nobuyoshi; Yoshikawa, Akira; Kubota, Satomi; Ida, Masahiro; Ono, Kenjiro; Izumizaki, Masahiko
2018-01-01
Autobiographical odor memory (AM-odor) accompanied by a sense of realism of a specific memory elicits strong emotions. AM-odor differs from memory triggered by other sensory modalities, possibly because olfaction involves a unique sensory process. Here, we examined the orbitofrontal cortex (OFC), using functional magnetic resonance imaging (fMRI) to determine which OFC subregions are related to AM-odor. Both AM-odor and a control odor successively increased subjective ratings of comfortableness and pleasantness. Importantly, AM-odor also increased arousal levels and the vividness of memories, and was associated with a deep and slow breathing pattern. fMRI analysis indicated robust activation in the left posterior OFC (L-POFC). Connectivity between the POFC and whole brain regions was estimated using psychophysiological interaction analysis (PPI). We detected several trends in connectivity between L-POFC and bilateral precuneus, bilateral rostral dorsal anterior cingulate cortex (rdACC), and left parahippocampus, which will be useful for targeting our hypotheses for future investigations. The slow breathing observed in AM-odor was correlated with rdACC activation. Odor associated with emotionally significant autobiographical memories was accompanied by slow and deep breathing, possibly involving rdACC processing.
Complex dynamics of semantic memory access in reading.
Baggio, Giosué; Fonseca, André
2012-02-07
Understanding a word in context relies on a cascade of perceptual and conceptual processes, starting with modality-specific input decoding, and leading to the unification of the word's meaning into a discourse model. One critical cognitive event, turning a sensory stimulus into a meaningful linguistic sign, is the access of a semantic representation from memory. Little is known about the changes that activating a word's meaning brings about in cortical dynamics. We recorded the electroencephalogram (EEG) while participants read sentences that could contain a contextually unexpected word, such as 'cold' in 'In July it is very cold outside'. We reconstructed trajectories in phase space from single-trial EEG time series, and we applied three nonlinear measures of predictability and complexity to each side of the semantic access boundary, estimated as the onset time of the N400 effect evoked by critical words. Relative to controls, unexpected words were associated with larger prediction errors preceding the onset of the N400. Accessing the meaning of such words produced a phase transition to lower entropy states, in which cortical processing becomes more predictable and more regular. Our study sheds new light on the dynamics of information flow through interfaces between sensory and memory systems during language processing.
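The single-trial phase-space trajectories mentioned above are conventionally reconstructed from a scalar EEG series by time-delay embedding (Takens' theorem); the delay and embedding dimension below are illustrative defaults, not the parameters used in the study.

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Reconstruct a phase-space trajectory from a scalar time series by
    time-delay embedding: each state vector is
    (x[t], x[t + tau], ..., x[t + (dim - 1) * tau])."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this dim/tau combination")
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)
```

Nonlinear measures of predictability and complexity (e.g. prediction error, entropy estimates) are then computed on the embedded trajectory rather than on the raw series.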
Sensory modulation disorder symptoms in anorexia nervosa and bulimia nervosa: A pilot study.
Brand-Gothelf, Ayelet; Parush, Shula; Eitan, Yehudith; Admoni, Shai; Gur, Eitan; Stein, Daniel
2016-01-01
Individuals with anorexia nervosa (AN) and bulimia nervosa (BN) may exhibit reduced ability to modulate sensory, physiological, and affective responses. The aim of the present study is to assess sensory modulation disorder (SMD) symptoms in patients with AN and BN. We assessed female adolescent and young adult inpatients with restrictive type anorexia nervosa (AN-R; n = 20) and BN (n = 20) evaluated in the acute stage of their illness, and 27 female controls. Another group of 20 inpatients with AN-R was assessed on admission and discharge, upon achieving their required weight. Participants completed standardized questionnaires assessing the severity of their eating disorder (ED) and the sensory responsiveness questionnaire (SRQ). Inpatients with AN-R demonstrated elevated overall sensory over-responsiveness as well as elevated scores on the taste/gustatory, vestibular/kinesthetic and somatosensory/tactile SRQ modalities compared with patients with BN and controls. Significant correlations between the severity of sensory over-responsiveness and ED-related symptomatology were found in acutely-ill patients with AN-R and to a lesser extent, following weight restoration. Elevated sensory over-responsiveness was retained in weight-restored inpatients with AN-R. Inpatients with BN demonstrated greater sensory under-responsiveness in the intensity subscale of the SRQ, but not in the frequency and combined SRQ dimensions. Female inpatients with AN-R exhibited sensory over-responsiveness both in the acute stage of their illness and following weight restoration, suggesting that sensory over-responsiveness may represent a trait related to the illness itself above and beyond the influence of malnutrition. The finding for sensory under-responsiveness in BN is less consistent. © 2015 Wiley Periodicals, Inc.
Origin and early evolution of neural circuits for the control of ciliary locomotion.
Jékely, Gáspár
2011-03-22
Behaviour evolved before nervous systems. Various single-celled eukaryotes (protists) and the ciliated larvae of sponges devoid of neurons can display sophisticated behaviours, including phototaxis, gravitaxis or chemotaxis. In single-celled eukaryotes, sensory inputs directly influence the motor behaviour of the cell. In swimming sponge larvae, sensory cells influence the activity of cilia on the same cell, thereby steering the multicellular larva. In these organisms, the efficiency of sensory-to-motor transformation (defined as the ratio of sensory cells to total cell number) is low. With the advent of neurons, signal amplification and fast, long-range communication between sensory and motor cells became possible. This may have first occurred in a ciliated swimming stage of the first eumetazoans. The first axons may have had en passant synaptic contacts to several ciliated cells to improve the efficiency of sensory-to-motor transformation, thereby allowing a reduction in the number of sensory cells tuned for the same input. This could have allowed the diversification of sensory modalities and of the behavioural repertoire. I propose that the first nervous systems consisted of combined sensory-motor neurons, directly translating sensory input into motor output on locomotor ciliated cells and steering muscle cells. Neuronal circuitry with low levels of integration has been retained in cnidarians and in the ciliated larvae of some marine invertebrates. This parallel processing stage could have been the starting point for the evolution of more integrated circuits performing the first complex computations such as persistence or coincidence detection. The sensory-motor nervous systems of cnidarians and ciliated larvae of diverse phyla show that brains, like all biological structures, are not irreducibly complex.
How is an ideal satiating yogurt described? A case study with added-protein yogurts.
Morell, P; Piqueras-Fiszman, B; Hernando, I; Fiszman, S
2015-12-01
Protein is recognized as the macronutrient with the highest satiating ability. Yogurt can be an excellent basis for designing satiating foods, as it is a protein-based food product. Five different set-type yogurts were formulated by adding extra skim milk powder (MP), whey protein concentrate (WPC), calcium caseinate (CAS) or a blend of whey protein concentrate with calcium caseinate (CAS-WPC). A control yogurt without extra protein content was also prepared. Differences in sensory perceptions (through CATA questions) were related to the consumers' expected satiating ability and liking scores (of several modalities). In addition, an "Ideal satiating yogurt" was included in the CATA question to perform a penalty analysis to show potential directions for yogurt reformulation and to relate sensory and non-sensory yogurt characteristics to satiating capacity. Copyright © 2015 Elsevier Ltd. All rights reserved.
Alpha-Band Rhythms in Visual Task Performance: Phase-Locking by Rhythmic Sensory Stimulation
de Graaf, Tom A.; Gross, Joachim; Paterson, Gavin; Rusch, Tessa; Sack, Alexander T.; Thut, Gregor
2013-01-01
Oscillations are an important aspect of neuronal activity. Interestingly, oscillatory patterns are also observed in behaviour, such as in visual performance measures after the presentation of a brief sensory event in the visual or another modality. These oscillations in visual performance cycle at the typical frequencies of brain rhythms, suggesting that perception may be closely linked to brain oscillations. We here investigated this link for a prominent rhythm of the visual system (the alpha-rhythm, 8–12 Hz) by applying rhythmic visual stimulation at alpha-frequency (10.6 Hz), known to lead to a resonance response in visual areas, and testing its effects on subsequent visual target discrimination. Our data show that rhythmic visual stimulation at 10.6 Hz: 1) has specific behavioral consequences, relative to stimulation at control frequencies (3.9 Hz, 7.1 Hz, 14.2 Hz), and 2) leads to alpha-band oscillations in visual performance measures, that 3) correlate in precise frequency across individuals with resting alpha-rhythms recorded over parieto-occipital areas. The most parsimonious explanation for these three findings is entrainment (phase-locking) of ongoing perceptually relevant alpha-band brain oscillations by rhythmic sensory events. These findings are in line with occipital alpha-oscillations underlying periodicity in visual performance, and suggest that rhythmic stimulation at frequencies of intrinsic brain-rhythms can be used to reveal influences of these rhythms on task performance to study their functional roles. PMID:23555873
Semantic Relevance, Domain Specificity and the Sensory/Functional Theory of Category-Specificity
ERIC Educational Resources Information Center
Sartori, Giuseppe; Gnoato, Francesca; Mariani, Ilenia; Prioni, Sara; Lombardi, Luigi
2007-01-01
According to the sensory/functional theory of semantic memory, Living items rely more on Sensory knowledge than Non-living ones. The sensory/functional explanation of category-specificity assumes that semantic features are organised on the basis of their content. We report here a study on DAT patients with impaired performance on Living items and…
ERIC Educational Resources Information Center
Boets, Bart; Wouters, Jan; van Wieringen, Astrid; De Smedt, Bert; Ghesquiere, Pol
2008-01-01
The general magnocellular theory postulates that dyslexia is the consequence of a multimodal deficit in the processing of transient and dynamic stimuli. In the auditory modality, this deficit has been hypothesized to interfere with accurate speech perception, and subsequently disrupt the development of phonological and later reading and spelling…
The Impact of Vision in Spatial Coding
ERIC Educational Resources Information Center
Papadopoulos, Konstantinos; Koustriava, Eleni
2011-01-01
The aim of this study is to examine the performance in coding and representing of near-space in relation to vision status (blindness vs. normal vision) and sensory modality (touch vs. vision). Forty-eight children and teenagers participated. Sixteen of the participants were totally blind or had only light perception, 16 were blindfolded sighted…
Medial Auditory Thalamic Stimulation as a Conditioned Stimulus for Eyeblink Conditioning in Rats
ERIC Educational Resources Information Center
Campolattaro, Matthew M.; Halverson, Hunter E.; Freeman, John H.
2007-01-01
The neural pathways that convey conditioned stimulus (CS) information to the cerebellum during eyeblink conditioning have not been fully delineated. It is well established that pontine mossy fiber inputs to the cerebellum convey CS-related stimulation for different sensory modalities (e.g., auditory, visual, tactile). Less is known about the…
The Sensory Modality Used for Learning Affects Grades
ERIC Educational Resources Information Center
Ramirez, Beatriz U.
2011-01-01
Second-year undergraduate students from 2008, 2009, and 2010 cohorts were asked to respond to a questionnaire to determine their learning style preferences, the VARK questionnaire (where V is visual, A is aural, R is reading-writing, and K is kinesthetic), which was translated into Spanish by the author. The translated questionnaire was tested for…
Unfurling the Wings of Flight: Clarifying "The What" and "The Why" of Mental Imagery Use in Dance
ERIC Educational Resources Information Center
Fisher, Vicky J.
2017-01-01
This article provides clarification regarding "the what" and "the why" of mental imagery use in dance. It proposes that mental images are invoked across sensory modalities and often combine internal and external perspectives. The content of images ranges from "direct" body oriented simulations along a continuum…
Modality Switching Cost during Property Verification by 7 Years of Age
ERIC Educational Resources Information Center
Ambrosi, Solene; Kalenine, Solene; Blaye, Agnes; Bonthoux, Francoise
2011-01-01
Recent studies in neuroimaging and cognitive psychology support the view of sensory-motor based knowledge: when processing an object concept, neural systems would re-enact previous experiences with this object. In this experiment, a conceptual switching cost paradigm derived from Pecher, Zeelenberg, and Barsalou (2003, 2004) was used to…
The Relationship between Parameters of Long-Latency Evoked Potentials in a Multisensory Design.
Hernández, Oscar H; García-Martínez, Rolando; Monteón, Victor
2016-10-01
In previous papers, we have shown that parameters of the omitted stimulus potential (OSP), which occurs at the end of a train of sensory stimuli, strongly depend on the modality. A train of stimuli also produces long-latency evoked potentials (LLEP) at the beginning of the train. This study is an extension of the OSP research; it tested the relationship between parameters (i.e., rate of rise, amplitude, and peak latency) of the P2 waves when trains of auditory, visual, or somatosensory stimuli were applied. The dynamics of the first 3 potentials in the train, related to habituation, were also studied. Twenty healthy young college volunteers participated in the study. As in the OSP, the P2 was faster and higher for auditory than for visual or somatosensory stimuli. The first P2 was faster and higher than the second and the third potentials. The strength of habituation depended on the sensory modality and the parameter used. All these findings support the view that many long-latency brain potentials could share neural mechanisms related to wave generation. © EEG and Clinical Neuroscience Society (ECNS) 2015.
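The three P2 parameters named in this abstract (rate of rise, amplitude, peak latency) can each be read off a sampled evoked-potential trace. The following is a minimal illustrative sketch, not the authors' pipeline: the synthetic Gaussian waveform, the 150-250 ms search window, and the 10%-of-peak onset criterion are all assumptions made here for demonstration.

```python
# Illustrative extraction of P2 parameters (rate of rise, amplitude,
# peak latency) from a sampled evoked-potential trace.
# The waveform, window, and onset criterion are invented for this sketch.
import numpy as np

fs = 1000                                    # sampling rate (Hz), assumed
t = np.arange(0, 0.5, 1 / fs)                # 0-500 ms epoch
# synthetic "P2": Gaussian bump peaking at 200 ms, 5 µV amplitude
erp = 5.0 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))

win = (t >= 0.15) & (t <= 0.25)              # assumed P2 search window
idx = np.flatnonzero(win)[np.argmax(erp[win])]
amplitude = erp[idx]                         # peak amplitude (µV)
latency_ms = t[idx] * 1000                   # peak latency (ms)

# crude onset estimate: first sample reaching 10% of the peak
onset = np.flatnonzero(erp >= 0.1 * amplitude)[0]
rate_of_rise = (amplitude - erp[onset]) / (t[idx] - t[onset])  # µV/s

print(round(latency_ms), round(amplitude, 2))
```

On this synthetic trace the peak latency recovers the 200 ms bump center; on real data one would average over trials and pick the window from the grand-average waveform.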
Ickes, Chelsea M; Cadwallader, Keith R
2017-11-01
This study identified and quantitated perceived sensory differences between 7 premium rums and 2 mixing rums using a hybrid of the Quantitative Descriptive Analysis and Spectrum methods. In addition, the results of this study validated the previously developed rum flavor wheel created from web-based materials. Results showed that the use of the rum flavor wheel aided in sensory term generation, as 17 additional terms were generated after the wheel was provided to panelists. Thirty-eight sensory terms encompassing aroma, aroma-by-mouth, mouthfeel, taste, and aftertaste modalities were generated and evaluated by the panel. Of the finalized terms, only 5 did not exist previously on the rum flavor wheel. Twenty attributes were found to be significantly different among rums. The majority of rums showed similar aroma profiles with the exception of 2 rums, which were characterized by higher perceived intensities of brown sugar, caramel, vanilla, and chocolate aroma; caramel, maple, and vanilla aroma-by-mouth; and caramel aftertaste. These results demonstrate that the previously developed rum flavor wheel can be used to adequately describe the flavor profile of rum. Additionally, results of this study document the sensory differences among premium rums and may be used to correlate with analytical data to better understand how changes in chemical composition of the product affect sensory perception. © 2017 Institute of Food Technologists®.
Modality-independent representations of small quantities based on brain activation patterns.
Damarla, Saudamini Roy; Cherkassky, Vladimir L; Just, Marcel Adam
2016-04-01
Machine learning or MVPA (Multi Voxel Pattern Analysis) studies have shown that the neural representation of quantities of objects can be decoded from fMRI patterns, in cases where the quantities were visually displayed. Here we apply these techniques to investigate whether neural representations of quantities depicted in one modality (say, visual) can be decoded from brain activation patterns evoked by quantities depicted in the other modality (say, auditory). The main finding demonstrated, for the first time, that quantities of dots were decodable by a classifier that was trained on the neural patterns evoked by quantities of auditory tones, and vice versa. The representations that were common across modalities were mainly right-lateralized in frontal and parietal regions. A second finding was that the neural patterns in parietal cortex that represent quantities were common across participants. These findings demonstrate a common neuronal foundation for the representation of quantities across sensory modalities and participants and provide insight into the role of parietal cortex in the representation of quantity information. © 2016 Wiley Periodicals, Inc.
Multimodal sensorimotor system in unicellular zoospores of a fungus.
Swafford, Andrew J M; Oakley, Todd H
2018-01-19
Complex sensory systems often underlie critical behaviors, including avoiding predators and locating prey, mates and shelter. Multisensory systems that control motor behavior even appear in unicellular eukaryotes, such as Chlamydomonas, which are important laboratory models for sensory biology. However, we know of no unicellular opisthokonts that control motor behavior using a multimodal sensory system. Therefore, existing single-celled models for multimodal sensorimotor integration are very distantly related to animals. Here, we describe a multisensory system that controls the motor function of unicellular fungal zoospores. We found that zoospores of Allomyces arbusculus exhibit both phototaxis and chemotaxis. Furthermore, we report that closely related Allomyces species respond to either the chemical or the light stimuli presented in this study, not both, and likely do not share this multisensory system. This diversity of sensory systems within Allomyces provides a rare example of a comparative framework that can be used to examine the evolution of sensory systems following the gain/loss of available sensory modalities. The tractability of Allomyces and related fungi as laboratory organisms will facilitate detailed mechanistic investigations into the genetic underpinnings of novel photosensory systems, and how multisensory systems may have functioned in early opisthokonts before multicellularity allowed for the evolution of specialized cell types. © 2018. Published by The Company of Biologists Ltd.
Functional sensorial complementation during host orientation in an Asilidae parasitoid larva.
Pueyrredon, J M; Crespo, J E; Castelo, M K
2017-10-01
Changes in environmental conditions influence the performance of organisms in every aspect of their lives. Being capable of accurately sensing these changes allows organisms to better adapt. The detection of environmental conditions involves different sensory modalities. There are many studies on the morphology of different sensory structures, but few showing their function. Here we studied the morphology of different sensory structures in the larva of a dipteran parasitoid, coupling morphology with function by occluding the putative sensory structures. First, we developed a non-invasive method to occlude the putative sensorial structures, annulling their function temporarily. Regarding functionality, we found that larvae of Mallophora ruficauda simultaneously require the sensilla found in the antennae and those of the maxillary palps in order to orient to their host. When either both antennae or both maxillary palps were occluded, no orientation to the host was observed. We also found that these structures are not involved in acceptance of the host, because a high and similar proportion of parasitized hosts was found in host-acceptance experiments. We propose that other sensilla could be involved in host acceptance and discuss how the different sensilla in the antennae and maxillary palps complement each other to provide larvae with the information needed for locating their host.
Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition.
Díaz, Begoña; Blank, Helen; von Kriegstein, Katharina
2018-05-14
The cerebral cortex modulates early sensory processing via feedback connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual speech recognition is used to enhance or even enable understanding what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages has an important role for communication: together with similar findings in the auditory modality, they imply that task-dependent modulation of the sensory thalami is a general mechanism to optimize speech recognition. Copyright © 2018. Published by Elsevier Inc.
Top-down influence on the visual cortex of the blind during sensory substitution
Murphy, Matthew C.; Nau, Amy C.; Fisher, Christopher; Kim, Seong-Gi; Schuman, Joel S.; Chan, Kevin C.
2017-01-01
Visual sensory substitution devices provide a non-surgical and flexible approach to vision rehabilitation in the blind. These devices convert images taken by a camera into cross-modal sensory signals that are presented as a surrogate for direct visual input. While previous work has demonstrated that the visual cortex of blind subjects is recruited during sensory substitution, the cognitive basis of this activation remains incompletely understood. To test the hypothesis that top-down input provides a significant contribution to this activation, we performed functional MRI scanning in 11 blind (7 acquired and 4 congenital) and 11 sighted subjects under two conditions: passive listening of image-encoded soundscapes before sensory substitution training and active interpretation of the same auditory sensory substitution signals after a 10-minute training session. We found that the modulation of visual cortex activity due to active interpretation was significantly stronger in blind than in sighted subjects. In addition, congenitally blind subjects showed stronger task-induced modulation in the visual cortex than acquired blind subjects. In a parallel experiment, we scanned 18 blind (11 acquired and 7 congenital) and 18 sighted subjects at rest to investigate alterations in functional connectivity due to visual deprivation. The results demonstrated that visual cortex connectivity of the blind shifted away from sensory networks and toward known areas of top-down input. Taken together, our data support a model of the brain, including the visual system, as a highly flexible task-based and not sensory-based machine. PMID:26584776
Gohil, Krutika; Bluschke, Annet; Roessner, Veit; Stock, Ann-Kathrin; Beste, Christian
2017-10-01
Many everyday tasks require executive functions to achieve a certain goal. Quite often, this requires the integration of information derived from different sensory modalities. Children are less likely to integrate information from different modalities and, at the same time, also do not command fully developed executive functions, as compared to adults. Yet still, the role of developmental age-related effects on multisensory integration processes has not been examined within the context of multicomponent behavior until now (i.e., the concatenation of different executive subprocesses). This is problematic because differences in multisensory integration might actually explain a significant amount of the developmental effects that have traditionally been attributed to changes in executive functioning. We therefore examined this question using a systems-neurophysiological approach combining electroencephalogram (EEG) recordings and source localization analyses. The results show that differences in how children and adults accomplish multicomponent behavior do not solely depend on developmental differences in executive functioning. Instead, the observed developmental differences in response selection processes (reflected by the P3 ERP) were largely dependent on the complexity of integrating temporally separated stimuli from different modalities. This effect was related to activation differences in medial frontal and inferior parietal cortices. Primary perceptual gating or attentional selection processes (P1 and N1 ERPs) were not affected. The results show that differences in multisensory integration explain parts of transformations in cognitive processes between childhood and adulthood that have traditionally been attributed to changes in executive functioning, especially when these require the integration of multiple modalities during response selection. Hum Brain Mapp 38:4933-4945, 2017. © 2017 Wiley Periodicals, Inc.
Designing sensory-substitution devices: Principles, pitfalls and potential
Kristjánsson, Árni; Moldoveanu, Alin; Jóhannesson, Ómar I.; Balan, Oana; Spagnol, Simone; Valgeirsdóttir, Vigdís Vala; Unnthorsson, Rúnar
2016-01-01
An exciting possibility for compensating for loss of sensory function is to augment deficient senses by conveying missing information through an intact sense. Here we present an overview of techniques that have been developed for sensory substitution (SS) for the blind, through both touch and audition, with special emphasis on the importance of training for the use of such devices, while highlighting potential pitfalls in their design. One example of a pitfall is how conveying extra information about the environment risks sensory overload. Related to this, the limits of attentional capacity make it important to focus on key information and avoid redundancies. Also, differences in processing characteristics and bandwidth between sensory systems severely constrain the information that can be conveyed. Furthermore, perception is a continuous process and does not involve a snapshot of the environment. Design of sensory substitution devices therefore requires assessment of the nature of spatiotemporal continuity for the different senses. Basic psychophysical and neuroscientific research into representations of the environment and the most effective ways of conveying information should lead to better design of sensory substitution systems. Sensory substitution devices should emphasize usability, and should not interfere with other inter- or intramodal perceptual function. Devices should be task-focused since in many cases it may be impractical to convey too many aspects of the environment. Evidence for multisensory integration in the representation of the environment suggests that researchers should not limit themselves to a single modality in their design. Finally, we recommend active training on devices, especially since it allows for externalization, where proximal sensory stimulation is attributed to a distinct exterior object. PMID:27567755
Sensory function: insights from Wave 2 of the National Social Life, Health, and Aging Project.
Pinto, Jayant M; Kern, David W; Wroblewski, Kristen E; Chen, Rachel C; Schumm, L Philip; McClintock, Martha K
2014-11-01
Sensory function, a critical component of quality of life, generally declines with age and influences health, physical activity, and social function. Sensory measures collected in Wave 2 of the National Social Life, Health, and Aging Project (NSHAP) survey focused on the personal impact of sensory function in the home environment and included: subjective assessment of vision, hearing, and touch, information on relevant home conditions and social sequelae as well as an improved objective assessment of odor detection. Summary data were generated for each sensory category, stratified by age (62-90 years of age) and gender, with a focus on function in the home setting and the social consequences of sensory decrements in each modality. Among both men and women, older age was associated with self-reported impairment of vision, hearing, and pleasantness of light touch. Compared with women, men reported significantly worse hearing and found light touch less appealing. There were no gender differences for vision. Overall, hearing loss seemed to have a greater impact on social function than did visual impairment. Sensory function declines across age groups, with notable gender differences for hearing and light touch. Further analysis of sensory measures from NSHAP Wave 2 may provide important information on how sensory declines are related to health, social function, quality of life, morbidity, and mortality in this nationally representative sample of older adults. © The Author 2014. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Karkare, K; Taly, Arun B; Sinha, Sanjib; Rao, S
2011-01-01
Focused studies on sensory manifestations, especially pain and paresthesia, in Guillain-Barré (GB) syndrome are few and far between. To study the sensory manifestations in GB syndrome during 10 days of hospitalization, with clinico-electrophysiological correlation. The study included 60 non-consecutive patients with GB syndrome fulfilling the National Institute of Neurological and Communicative Disorders and Stroke (NINCDS) criteria for GB syndrome. Data related to clinical and electrophysiological evidence of sensory involvement were analyzed. Pain and paresthesia were assessed using a) a visual analogue scale for paresthesia (Vapar), b) a visual analogue scale for pain (Vap), and c) a verbal rating scale for pain (Verp). Sensory symptoms were widely prevalent: paresthesia in 45 (75%) patients and pain in 30 (50%) patients. Impairment of different sensory modalities included pain in 8 (13.3%), joint position sense in 14 (23.3%), and vibration in 11 (18.3%). Electrophysiological evidence of abnormal sensory nerve conduction was noted in 35 (58.3%) patients. Assessment using Vapar, Vap, and Verp from Day 1 to Day 10 of hospitalization revealed that from Day 7 onwards the degree and frequency of sensory symptoms and signs decreased. On comparing various clinico-electrophysiological parameters among GB syndrome patients with and without pain and paresthesia, the presence of respiratory distress correlated with pain and paresthesia (P=0.02). Sensory manifestations in GB syndrome are often under-recognized and under-emphasized. This study analyzed the evolution and profile of pain and paresthesia in GB syndrome during hospitalization. Knowledge, especially about the evolution of pain and paresthesia during hospitalization, might improve understanding and patient care.