Sample records for crossmodal congruency task

  1. Oscillatory signatures of crossmodal congruence effects: An EEG investigation employing a visuotactile pattern matching paradigm.

    PubMed

    Göschl, Florian; Friese, Uwe; Daume, Jonathan; König, Peter; Engel, Andreas K

    2015-08-01

    Coherent percepts emerge from the accurate combination of inputs from the different sensory systems. There is an ongoing debate about the neurophysiological mechanisms of crossmodal interactions in the brain, and it has been proposed that transient synchronization of neurons might be of central importance. Oscillatory activity in lower frequency ranges (<30Hz) has been implicated in mediating long-range communication as typically studied in multisensory research. In the current study, we recorded high-density electroencephalograms while human participants were engaged in a visuotactile pattern matching paradigm and analyzed oscillatory power in the theta- (4-7Hz), alpha- (8-13Hz) and beta-bands (13-30Hz). Employing the same physical stimuli, separate tasks of the experiment either required the detection of predefined targets in visual and tactile modalities or the explicit evaluation of crossmodal stimulus congruence. Analysis of the behavioral data showed benefits for congruent visuotactile stimulus combinations. Differences in oscillatory dynamics related to crossmodal congruence within the two tasks were observed in the beta-band for crossmodal target detection, as well as in the theta-band for congruence evaluation. Contrasting ongoing activity preceding visuotactile stimulation between the two tasks revealed differences in the alpha- and beta-bands. Source reconstruction of between-task differences showed prominent involvement of premotor cortex, supplementary motor area, somatosensory association cortex and the supramarginal gyrus. These areas not only exhibited more involvement in the pre-stimulus interval for target detection compared to congruence evaluation, but were also crucially involved in post-stimulus differences related to crossmodal stimulus congruence within the detection task. These results add to the increasing evidence that low frequency oscillations are functionally relevant for integration in distributed brain networks, as demonstrated for crossmodal interactions in visuotactile pattern matching in the current study. Copyright © 2015 Elsevier Inc. All rights reserved.
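
    The band-limited power analysis described in this record (theta 4-7 Hz, alpha 8-13 Hz, beta 13-30 Hz) can be illustrated with a minimal sketch. This is not the authors' pipeline; the sampling rate, channel count, and use of Welch's method are illustrative assumptions only.

    ```python
    import numpy as np
    from scipy.signal import welch

    FS = 500                                               # assumed sampling rate (Hz)
    BANDS = {"theta": (4, 7), "alpha": (8, 13), "beta": (13, 30)}

    def band_power(eeg, fs=FS, bands=BANDS):
        """eeg: (n_channels, n_samples) array; returns mean spectral power per band."""
        freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)
        return {name: psd[..., (freqs >= lo) & (freqs <= hi)].mean(axis=-1)
                for name, (lo, hi) in bands.items()}

    # Example with simulated data: 64 channels, 3 s of noise
    power = band_power(np.random.randn(64, 3 * FS))
    print({band: values.shape for band, values in power.items()})
    ```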

  2. A matter of attention: Crossmodal congruence enhances and impairs performance in a novel trimodal matching paradigm.

    PubMed

    Misselhorn, Jonas; Daume, Jonathan; Engel, Andreas K; Friese, Uwe

    2016-07-29

    A novel crossmodal matching paradigm including vision, audition, and somatosensation was developed in order to investigate the interaction between attention and crossmodal congruence in multisensory integration. To that end, all three modalities were stimulated concurrently while a bimodal focus was defined blockwise. Congruence between stimulus intensity changes in the attended modalities had to be evaluated. We found that crossmodal congruence improved performance if both the attended modalities and the task-irrelevant distractor were congruent. If the attended modalities were incongruent, the distractor impaired performance due to its congruence relation to one of the attended modalities. The magnitude of crossmodal enhancement or impairment differed between attentional conditions. The largest crossmodal effects were seen in visual-tactile matching, intermediate effects for audio-visual matching, and the smallest effects for audio-tactile matching. We conclude that differences in crossmodal matching likely reflect characteristics of multisensory neural network architecture. We discuss our results with respect to the timing of perceptual processing and state hypotheses for future physiological studies. Finally, etiological questions are addressed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. The time-course of the cross-modal semantic modulation of visual picture processing by naturalistic sounds and spoken words.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2013-01-01

    The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.

  4. Perceived Odor-Taste Congruence Influences Intensity and Pleasantness Differently.

    PubMed

    Amsellem, Sherlley; Ohla, Kathrin

    2016-10-01

    The role of congruence in cross-modal interactions has received little attention. In most experiments involving cross-modal pairs, congruence is conceived of as a binary process according to which cross-modal pairs are categorized as perceptually and/or semantically matching or mismatching. The present study investigated whether odor-taste congruence can be perceived gradually and whether congruence impacts other facets of subjective experience, that is, intensity, pleasantness, and familiarity. To address these questions, we presented food odorants (chicken, orange, and 3 mixtures of the 2) and tastants (savory-salty and sour-sweet) in pairs varying in congruence. Participants were asked to report the perceived congruence of the pairs along with intensity, pleasantness, and familiarity. We found that participants could perceive distinct congruence levels, thereby favoring a multilevel account of congruence perception. In addition, familiarity and pleasantness followed the same pattern as congruence, whereas intensity was highest for the most congruent and the most incongruent pairs and reduced for pairs of intermediate congruence. Principal component analysis revealed that pleasantness and familiarity form one dimension of the phenomenological experience of odor-taste pairs that was orthogonal to intensity. The results bear implications for understanding the behavioral underpinnings of the perseverance of habitual food choices. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
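
    The principal component analysis mentioned in this record (pleasantness and familiarity loading on one dimension, intensity on an orthogonal one) can be sketched as follows. The simulated ratings and the scikit-learn usage are illustrative assumptions, not the study's data or analysis code.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 200
    hedonic = rng.normal(size=n)                       # shared latent "hedonic" factor
    ratings = np.column_stack([
        hedonic + 0.3 * rng.normal(size=n),            # pleasantness
        hedonic + 0.3 * rng.normal(size=n),            # familiarity
        rng.normal(size=n),                            # intensity (independent)
    ])

    pca = PCA(n_components=2).fit(StandardScaler().fit_transform(ratings))
    print("explained variance:", pca.explained_variance_ratio_)
    print("loadings (rows = components):\n", pca.components_)
    ```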

  5. Extending the Body to Virtual Tools Using a Robotic Surgical Interface: Evidence from the Crossmodal Congruency Task

    PubMed Central

    Sengül, Ali; van Elk, Michiel; Rognini, Giulio; Aspell, Jane Elizabeth; Bleuler, Hannes; Blanke, Olaf

    2012-01-01

    The effects of real-world tool use on body or space representations are relatively well established in cognitive neuroscience. Several studies have shown, for example, that active tool use results in a facilitated integration of multisensory information in peripersonal space, i.e. the space directly surrounding the body. However, it remains unknown to what extent similar mechanisms apply to the use of virtual-robotic tools, such as those used in the field of surgical robotics, in which a surgeon may use bimanual haptic interfaces to control a surgery robot at a remote location. This paper presents two experiments in which participants used a haptic handle, originally designed for a commercial surgery robot, to control a virtual tool. The integration of multisensory information related to the virtual-robotic tool was assessed by means of the crossmodal congruency task, in which subjects responded to tactile vibrations applied to their fingers while ignoring visual distractors superimposed on the tip of the virtual-robotic tool. Our results show that active virtual-robotic tool use changes the spatial modulation of the crossmodal congruency effects, comparable to changes in the representation of peripersonal space observed during real-world tool use. Moreover, when the virtual-robotic tools were held in a crossed position, the visual distractors interfered strongly with tactile stimuli that were connected with the hand via the tool, reflecting a remapping of peripersonal space. Such remapping was not only observed when the virtual-robotic tools were actively used (Experiment 1), but also when the tools were passively held (Experiment 2). The present study extends earlier findings on the extension of peripersonal space from physical and pointing tools to virtual-robotic tools using techniques from haptics and virtual reality. We discuss our data with respect to learning and human factors in the field of surgical robotics and discuss the use of new technologies in the field of cognitive neuroscience. PMID:23227142

  6. Extending the body to virtual tools using a robotic surgical interface: evidence from the crossmodal congruency task.

    PubMed

    Sengül, Ali; van Elk, Michiel; Rognini, Giulio; Aspell, Jane Elizabeth; Bleuler, Hannes; Blanke, Olaf

    2012-01-01

    The effects of real-world tool use on body or space representations are relatively well established in cognitive neuroscience. Several studies have shown, for example, that active tool use results in a facilitated integration of multisensory information in peripersonal space, i.e. the space directly surrounding the body. However, it remains unknown to what extent similar mechanisms apply to the use of virtual-robotic tools, such as those used in the field of surgical robotics, in which a surgeon may use bimanual haptic interfaces to control a surgery robot at a remote location. This paper presents two experiments in which participants used a haptic handle, originally designed for a commercial surgery robot, to control a virtual tool. The integration of multisensory information related to the virtual-robotic tool was assessed by means of the crossmodal congruency task, in which subjects responded to tactile vibrations applied to their fingers while ignoring visual distractors superimposed on the tip of the virtual-robotic tool. Our results show that active virtual-robotic tool use changes the spatial modulation of the crossmodal congruency effects, comparable to changes in the representation of peripersonal space observed during real-world tool use. Moreover, when the virtual-robotic tools were held in a crossed position, the visual distractors interfered strongly with tactile stimuli that were connected with the hand via the tool, reflecting a remapping of peripersonal space. Such remapping was not only observed when the virtual-robotic tools were actively used (Experiment 1), but also when the tools were passively held (Experiment 2). The present study extends earlier findings on the extension of peripersonal space from physical and pointing tools to virtual-robotic tools using techniques from haptics and virtual reality. We discuss our data with respect to learning and human factors in the field of surgical robotics and discuss the use of new technologies in the field of cognitive neuroscience.
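
    As a rough illustration of how the crossmodal congruency effect (CCE) used in these two records is typically quantified (incongruent minus congruent performance, per posture or tool condition), a minimal sketch follows. The data frame, column names, and trial values are hypothetical placeholders, not the authors' analysis code.

    ```python
    import pandas as pd

    # One row per trial: reaction time (s), accuracy, distractor congruency, tool posture
    trials = pd.DataFrame({
        "rt":        [0.52, 0.61, 0.55, 0.70, 0.58, 0.66],
        "correct":   [1, 1, 1, 0, 1, 1],
        "congruent": [True, False, True, False, True, False],
        "posture":   ["uncrossed"] * 4 + ["crossed"] * 2,
    })

    # Keep correct trials, then take incongruent minus congruent mean RT per posture
    rt = (trials[trials.correct == 1]
          .groupby(["posture", "congruent"])["rt"].mean()
          .unstack("congruent"))
    cce = rt[False] - rt[True]          # positive values = interference from distractors
    print(cce)
    ```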

  7. Crossmodal attention switching: auditory dominance in temporal discrimination tasks.

    PubMed

    Lukas, Sarah; Philipp, Andrea M; Koch, Iring

    2014-11-01

    Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Audiovisual semantic interactions between linguistic and nonlinguistic stimuli: The time-courses and categorical specificity.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2018-04-30

    We examined the time-courses and categorical specificity of the crossmodal semantic congruency effects elicited by naturalistic sounds and spoken words on the processing of visual pictures (Experiment 1) and printed words (Experiment 2). Auditory cues were presented at 7 different stimulus onset asynchronies (SOAs) with respect to the visual targets, and participants made speeded categorization judgments (living vs. nonliving). Three common effects were observed across 2 experiments: Both naturalistic sounds and spoken words induced a slowly emerging congruency effect when leading by 250 ms or more in the congruent compared with the incongruent condition, and a rapidly emerging inhibitory effect when leading by 250 ms or less in the incongruent condition as opposed to the noise condition. Only spoken words that did not match the visual targets elicited an additional inhibitory effect when leading by 100 ms or when presented simultaneously. Compared with nonlinguistic stimuli, the crossmodal congruency effects associated with linguistic stimuli occurred over a wider range of SOAs and occurred at a more specific level of the category hierarchy (i.e., the basic level) than was required by the task. A comprehensive framework is proposed to provide a dynamic view regarding how meaning is extracted during the processing of visual or auditory linguistic and nonlinguistic stimuli, therefore contributing to our understanding of multisensory semantic processing in humans. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  9. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks

    PubMed Central

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention. PMID:25745395

  10. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks.

    PubMed

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention.

  11. Cross-modal perception of rhythm in music and dance by cochlear implant users.

    PubMed

    Vongpaisal, Tara; Monaghan, Melanie

    2014-05-01

    Two studies examined adult cochlear implant (CI) users' ability to match auditory rhythms occurring in music to visual rhythms occurring in dance (Cha Cha, Slow Swing, Tango and Jive). In Experiment 1, adult CI users (n = 10) and hearing controls matched a music excerpt to choreographed dance sequences presented as silent videos. In Experiment 2, participants matched a silent video of a dance sequence to music excerpts. CI users were successful in detecting timing congruencies across music and dance at well above-chance levels, suggesting that they were able to process distinctive auditory and visual rhythm patterns that characterized each style. However, they were better able to detect cross-modal timing congruencies when the reference was an auditory rhythm than when the reference was a visual rhythm. Learning strategies that encourage cross-modal learning of musical rhythms may have applications in developing novel rehabilitative strategies to enhance music perception and appreciation outcomes of child implant users.

  12. Multimodal lexical processing in auditory cortex is literacy skill dependent.

    PubMed

    McNorgan, Chris; Awati, Neha; Desroches, Amy S; Booth, James R

    2014-09-01

    Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others demonstrate recruitment of primary and associative auditory cortex during cross-modal processing. We sought to determine whether brain regions supporting phonological processing of larger lexical units (monosyllabic words) over larger time windows are sensitive to cross-modal information, and whether such effects are literacy dependent. Twenty-two children (age 8-14 years) made rhyming judgments for sequentially presented word and pseudoword pairs presented either unimodally (auditory- or visual-only) or cross-modally (audiovisual). Regression analyses examined the relationship between literacy and congruency effects (overlapping orthography and phonology vs. overlapping phonology-only). We extend previous findings by showing that higher literacy is correlated with greater congruency effects in auditory cortex (i.e., planum temporale) only for cross-modal processing. These skill effects were specific to known words and occurred over a large time window, suggesting that multimodal integration in posterior auditory cortex is critical for fluent reading. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Vocal and visual stimulation, congruence and lateralization affect brain oscillations in interspecies emotional positive and negative interactions.

    PubMed

    Balconi, Michela; Vanutelli, Maria Elide

    2016-01-01

    The present research explored the effect of cross-modal integration of emotional cues (auditory and visual, AV) compared with visual-only (V) emotional cues in the observation of interspecies interactions. Brain activity was monitored while subjects processed AV and V situations representing an emotional (positive or negative) interspecies (human-animal) interaction. Congruence (emotionally congruous or incongruous visual and auditory patterns) was also manipulated. Electroencephalographic brain oscillations (from delta to beta) were analyzed, and cortical source localization (standardized Low Resolution Brain Electromagnetic Tomography) was applied to the data. Low-frequency bands (mainly delta and theta) showed a significant increase in brain activity in response to negative compared with positive interactions within the right hemisphere. Moreover, differences were found based on stimulation type, with a stronger effect for AV than for V. Finally, the delta band supported lateralized right dorsolateral prefrontal cortex (DLPFC) activity in response to negative and incongruous interspecies interactions, mainly for AV. The contributions of cross-modality, congruence (incongruous patterns), and lateralization (right DLPFC) in response to interspecies emotional interactions are discussed in light of a "negative lateralized effect."

  14. Sounds can boost the awareness of visual events through attention without cross-modal integration.

    PubMed

    Pápai, Márta Szabina; Soto-Faraco, Salvador

    2017-01-31

    Cross-modal interactions can lead to enhancement of visual perception, even for visual events below awareness. However, the underlying mechanism is still unclear. Can purely bottom-up cross-modal integration break through the threshold of awareness? We used a binocular rivalry paradigm to measure perceptual switches after brief flashes or sounds which sometimes co-occurred. When flashes at the suppressed eye coincided with sounds, perceptual switches occurred earliest. Yet, contrary to the hypothesis of cross-modal integration, this facilitation never exceeded what would be expected from probability summation of independent sensory signals. A follow-up experiment replicated the same pattern of results using silent gaps embedded in continuous noise instead of sounds. This manipulation should weaken putative sound-flash integration while keeping the gaps salient as bottom-up attention cues. Additional results showed that spatial congruency between flashes and sounds did not determine the effectiveness of cross-modal facilitation, which was again no better than probability summation. Thus, the present findings fail to fully support the hypothesis of bottom-up cross-modal integration, above and beyond the independent contribution of two transient signals, as an account of cross-modal enhancement of visual events below the level of awareness.
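
    The probability-summation benchmark referred to in this record can be made concrete with a small sketch. Under a common independence assumption, the predicted cumulative distribution of switch latencies for combined flash-plus-sound events is F_AV(t) = F_A(t) + F_V(t) - F_A(t)·F_V(t); observed bimodal distributions exceeding this prediction would be needed to argue for genuine integration. The simulated latencies below are placeholders, not data from the study.

    ```python
    import numpy as np

    def empirical_cdf(latencies, t_grid):
        """Proportion of switch latencies at or below each time point in t_grid."""
        latencies = np.sort(np.asarray(latencies))
        return np.searchsorted(latencies, t_grid, side="right") / len(latencies)

    t = np.linspace(0, 5, 501)                 # seconds after the transient event
    rng = np.random.default_rng(1)
    cdf_flash = empirical_cdf(rng.gamma(2.0, 1.0, size=300), t)
    cdf_sound = empirical_cdf(rng.gamma(2.0, 1.2, size=300), t)

    # Independent-signals (probability summation) prediction for the combined cue
    pred = cdf_flash + cdf_sound - cdf_flash * cdf_sound
    ```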

  15. Time-compressed spoken word primes crossmodally enhance processing of semantically congruent visual targets.

    PubMed

    Mahr, Angela; Wentura, Dirk

    2014-02-01

    Findings from three experiments support the conclusion that auditory primes facilitate the processing of related targets. In Experiments 1 and 2, we employed a crossmodal Stroop color identification task with auditory color words (as primes) and visual color patches (as targets). Responses were faster for congruent priming, in comparison to neutral or incongruent priming. This effect also emerged for different levels of time compression of the auditory primes (to 30 % and 10 % of the original length; i.e., 120 and 40 ms) and turned out to be even more pronounced under high-perceptual-load conditions (Exps. 1 and 2). In Experiment 3, target-present or -absent decisions for brief target displays had to be made, thereby ruling out response-priming processes as a cause of the congruency effects. Nevertheless, target detection (d') was increased by congruent primes (30 % compression) in comparison to incongruent or neutral primes. Our results suggest semantic object-based auditory-visual interactions, which rapidly increase the denoted target object's salience. This would apply, in particular, to complex visual scenes.

  16. Evidence of a visual-to-auditory cross-modal sensory gating phenomenon as reflected by the human P50 event-related brain potential modulation.

    PubMed

    Lebib, Riadh; Papo, David; de Bode, Stella; Baudonnière, Pierre Marie

    2003-05-08

    We investigated the existence of cross-modal sensory gating as reflected in the modulation of an early electrophysiological index, the P50 component. We analyzed event-related brain potentials elicited by audiovisual speech stimuli manipulated along two dimensions: congruency and discriminability. The results showed that the P50 was attenuated when visual and auditory speech information were redundant (i.e., congruent), in comparison with the same event-related potential component elicited by discrepant audiovisual dubbing. When hard to discriminate, however, bimodal incongruent speech stimuli elicited a similar pattern of P50 attenuation. We conclude that a visual-to-auditory cross-modal sensory gating phenomenon exists. These results corroborate previous findings revealing a very early audiovisual interaction during speech perception. Finally, we postulate that the sensory gating system includes a cross-modal dimension.

  17. Crossmodal representation of a functional robotic hand arises after extensive training in healthy participants.

    PubMed

    Marini, Francesco; Tagliabue, Chiara F; Sposito, Ambra V; Hernandez-Arieta, Alejandro; Brugger, Peter; Estévez, Natalia; Maravita, Angelo

    2014-01-01

    The way in which humans represent their own bodies is critical in guiding their interactions with the environment. To achieve successful body-space interactions, the body representation is strictly connected with that of the space immediately surrounding it through efficient visuo-tactile crossmodal integration. Such a body-space integrated representation is not fixed, but can be dynamically modulated by the use of external tools. Our study aims to explore the effect of using a complex tool, namely a functional prosthesis, on crossmodal visuo-tactile spatial interactions in healthy participants. By using the crossmodal visuo-tactile congruency paradigm, we found that prolonged training with a mechanical hand capable of distal hand movements and providing sensory feedback induces a pattern of interference, which is not observed after a brief training, between visual stimuli close to the prosthesis and touches on the body. These results suggest that after extensive, but not short, training the functional prosthesis acquires a visuo-tactile crossmodal representation akin to real limbs. This finding adds to previous evidence for the embodiment of functional prostheses in amputees, and shows that their use may also improve the crossmodal combination of somatosensory feedback delivered by the prosthesis with visual stimuli in the space around it, thus effectively augmenting the patients' visuomotor abilities. © 2013 Published by Elsevier Ltd.

  18. Assessing implicit odor localization in humans using a cross-modal spatial cueing paradigm.

    PubMed

    Moessnang, Carolin; Finkelmeyer, Andreas; Vossen, Alexandra; Schneider, Frank; Habel, Ute

    2011-01-01

    Navigation based on chemosensory information is one of the most important skills in the animal kingdom. Studies on odor localization suggest that humans have lost this ability. However, the experimental approaches used so far were limited to explicit judgements, which might ignore a residual ability for directional smelling on an implicit level without conscious appraisal. A novel cueing paradigm was developed in order to determine whether an implicit ability for directional smelling exists. Participants performed a visual two-alternative forced choice task in which the target was preceded either by a side-congruent or a side-incongruent olfactory spatial cue. An explicit odor localization task was implemented in a second experiment. No effect of cue congruency on mean reaction times could be found. However, a time by condition interaction emerged, with significantly slower responses to congruently compared to incongruently cued targets at the beginning of the experiment. This cueing effect gradually disappeared throughout the course of the experiment. In addition, participants performed at chance level in the explicit odor localization task, thus confirming the results of previous research. The implicit cueing task suggests the existence of spatial information processing in the olfactory system. Response slowing after a side-congruent olfactory cue is interpreted as a cross-modal attentional interference effect. In addition, habituation might have led to a gradual disappearance of the cueing effect. It is concluded that under immobile conditions with passive monorhinal stimulation, humans are unable to explicitly determine the location of a pure odorant. Implicitly, however, odor localization seems to exert an influence on human behaviour. To our knowledge, these data are the first to show implicit effects of odor localization on overt human behaviour and thus support the hypothesis of residual directional smelling in humans. © 2011 Moessnang et al.

  19. Attention Modulates Visual-Tactile Interaction in Spatial Pattern Matching

    PubMed Central

    Göschl, Florian; Engel, Andreas K.; Friese, Uwe

    2014-01-01

    Factors influencing crossmodal interactions are manifold and operate in a stimulus-driven, bottom-up fashion, as well as via top-down control. Here, we evaluate the interplay of stimulus congruence and attention in a visual-tactile task. To this end, we used a matching paradigm requiring the identification of spatial patterns that were concurrently presented visually on a computer screen and haptically to the fingertips by means of a Braille stimulator. Stimulation in our paradigm was always bimodal with only the allocation of attention being manipulated between conditions. In separate blocks of the experiment, participants were instructed to (a) focus on a single modality to detect a specific target pattern, (b) pay attention to both modalities to detect a specific target pattern, or (c) to explicitly evaluate if the patterns in both modalities were congruent or not. For visual as well as tactile targets, congruent stimulus pairs led to quicker and more accurate detection compared to incongruent stimulation. This congruence facilitation effect was more prominent under divided attention. Incongruent stimulation led to behavioral decrements under divided attention as compared to selectively attending a single sensory channel. Additionally, when participants were asked to evaluate congruence explicitly, congruent stimulation was associated with better performance than incongruent stimulation. Our results extend previous findings from audiovisual studies, showing that stimulus congruence also resulted in behavioral improvements in visuotactile pattern matching. The interplay of stimulus processing and attentional control seems to be organized in a highly flexible fashion, with the integration of signals depending on both bottom-up and top-down factors, rather than occurring in an ‘all-or-nothing’ manner. PMID:25203102

  20. Visually induced gains in pitch discrimination: Linking audio-visual processing with auditory abilities.

    PubMed

    Møller, Cecilie; Højlund, Andreas; Bærentsen, Klaus B; Hansen, Niels Chr; Skewes, Joshua C; Vuust, Peter

    2018-05-01

    Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states that multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak. It is one of the basic principles underlying multisensory processing of spatiotemporally corresponding crossmodal stimuli and is well established at both behavioral and neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues for pitch discrimination follows the PoIE at the interindividual level (i.e., varies with levels of auditory-only pitch discrimination ability). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than in the incongruent condition. The magnitude of the gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
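
    The sensitivity index d' reported in this record is computed from hit and false-alarm rates; a minimal sketch follows. The counts and the log-linear correction are illustrative assumptions, not the study's actual data or procedure.

    ```python
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction."""
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)          # avoids rates of 0 or 1
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Hypothetical congruent vs. incongruent visual-cue conditions
    print(d_prime(42, 8, 6, 44))    # congruent
    print(d_prime(35, 15, 9, 41))   # incongruent
    ```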

  1. Crossmodal Congruency Benefits of Tactile and Visual Signalling

    DTIC Science & Technology

    2013-11-12

    …modal information format seemed to produce faster and more accurate performance. The question of learning complex tactile communication signals … We conducted an experiment in which tactile messages were created based on five common military arm and hand signals. We … compared response times and accuracy rates of novice individuals responding to visual and tactile representations of these messages, which were …

  2. Congruency sequence effect in cross-task context: evidence for dimension-specific modulation.

    PubMed

    Lee, Jaeyong; Cho, Yang Seok

    2013-11-01

    The congruency sequence effect refers to a reduced congruency effect after incongruent trials relative to congruent trials. This modulation is thought to be, at least in part, due to the control mechanisms resolving conflict. The present study examined the nature of the control mechanisms by having participants perform two different tasks in an alternating way. When participants performed horizontal and vertical Simon tasks in Experiment 1A, and horizontal and vertical spatial Stroop tasks in Experiment 1B, no congruency sequence effect was obtained between the task congruencies. When the Simon task and spatial Stroop task were performed with different response sets in Experiment 2, no congruency sequence effect was obtained. However, in Experiment 3, in which the participants performed the horizontal Simon and spatial Stroop tasks with an identical response set, a significant congruency sequence effect was obtained between the task congruencies. In Experiment 4, no congruency sequence effect was obtained when participants performed two tasks having different task-irrelevant dimensions with the identical response set. The findings suggest inhibitory processing between the task-irrelevant dimension and response mode after conflict. © 2013 Elsevier B.V. All rights reserved.

  3. Cross-modal prediction changes the timing of conscious access during the motion-induced blindness.

    PubMed

    Chang, Acer Y C; Kanai, Ryota; Seth, Anil K

    2015-01-01

    Despite accumulating evidence that perceptual predictions influence perceptual content, the relations between these predictions and conscious contents remain unclear, especially for cross-modal predictions. We examined whether predictions of visual events by auditory cues can facilitate conscious access to the visual stimuli. We trained participants to learn associations between auditory cues and colour changes. We then asked whether congruency between auditory cues and target colours would speed access to consciousness. We did this by rendering a visual target subjectively invisible using motion-induced blindness and then gradually changing its colour while presenting congruent or incongruent auditory cues. Results showed that the visual target gained access to consciousness faster in congruent than in incongruent trials; control experiments excluded potentially confounding effects of attention and motor response. The expectation effect was gradually established over blocks, suggesting a role for extensive training. Overall, our findings show that predictions learned through cross-modal training can facilitate conscious access to visual stimuli. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Dynamic Facial Expressions Prime the Processing of Emotional Prosody.

    PubMed

    Garrido-Vásquez, Patricia; Pell, Marc D; Paulmann, Silke; Kotz, Sonja A

    2018-01-01

    Evidence suggests that emotion is represented supramodally in the human brain. Emotional facial expressions, which often precede vocally expressed emotion in real life, can modulate event-related potentials (N100 and P200) during emotional prosody processing. To investigate these cross-modal emotional interactions, two lines of research have been put forward: cross-modal integration and cross-modal priming. In cross-modal integration studies, visual and auditory channels are temporally aligned, while in priming studies they are presented consecutively. Here we used cross-modal emotional priming to study the interaction of dynamic visual and auditory emotional information. Specifically, we presented dynamic facial expressions (angry, happy, neutral) as primes and emotionally-intoned pseudo-speech sentences (angry, happy) as targets. We were interested in how prime-target congruency would affect early auditory event-related potentials, i.e., N100 and P200, in order to shed more light on how dynamic facial information is used in cross-modal emotional prediction. Results showed enhanced N100 amplitudes for incongruently primed compared to congruently and neutrally primed emotional prosody, while the latter two conditions did not significantly differ. However, N100 peak latency was significantly delayed in the neutral condition compared to the other two conditions. Source reconstruction revealed that the right parahippocampal gyrus was activated in incongruent compared to congruent trials in the N100 time window. No significant ERP effects were observed in the P200 range. Our results indicate that dynamic facial expressions influence vocal emotion processing at an early point in time, and that an emotional mismatch between a facial expression and its ensuing vocal emotional signal induces additional processing costs in the brain, potentially because the cross-modal emotional prediction mechanism is violated in case of emotional prime-target incongruency.

  5. Audiovisual Modulation in Mouse Primary Visual Cortex Depends on Cross-Modal Stimulus Configuration and Congruency.

    PubMed

    Meijer, Guido T; Montijn, Jorrit S; Pennartz, Cyriel M A; Lansink, Carien S

    2017-09-06

    The sensory neocortex is a highly connected associative network that integrates information from multiple senses, even at the level of the primary sensory areas. Although a growing body of empirical evidence supports this view, the neural mechanisms of cross-modal integration in primary sensory areas, such as the primary visual cortex (V1), are still largely unknown. Using two-photon calcium imaging in awake mice, we show that the encoding of audiovisual stimuli in V1 neuronal populations is highly dependent on the features of the stimulus constituents. When the visual and auditory stimulus features were modulated at the same rate (i.e., temporally congruent), neurons responded with either an enhancement or suppression compared with unisensory visual stimuli, and their prevalence was balanced. Temporally incongruent tones or white-noise bursts included in audiovisual stimulus pairs resulted in predominant response suppression across the neuronal population. Visual contrast did not influence multisensory processing when the audiovisual stimulus pairs were congruent; however, when white-noise bursts were used, neurons generally showed response suppression when the visual stimulus contrast was high whereas this effect was absent when the visual contrast was low. Furthermore, a small fraction of V1 neurons, predominantly those located near the lateral border of V1, responded to sound alone. These results show that V1 is involved in the encoding of cross-modal interactions in a more versatile way than previously thought. SIGNIFICANCE STATEMENT The neural substrate of cross-modal integration is not limited to specialized cortical association areas but extends to primary sensory areas. Using two-photon imaging of large groups of neurons, we show that multisensory modulation of V1 populations is strongly determined by the individual and shared features of cross-modal stimulus constituents, such as contrast, frequency, congruency, and temporal structure. Congruent audiovisual stimulation resulted in a balanced pattern of response enhancement and suppression compared with unisensory visual stimuli, whereas incongruent or dissimilar stimuli at full contrast gave rise to a population dominated by response-suppressing neurons. Our results indicate that V1 dynamically integrates nonvisual sources of information while still attributing most of its resources to coding visual information. Copyright © 2017 the authors 0270-6474/17/378783-14$15.00/0.

  6. Effect of perceptual load on semantic access by speech in children.

    PubMed

    Jerger, Susan; Damian, Markus F; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Hervé

    2013-04-01

    To examine whether semantic access by speech requires attention in children. Children (N = 200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multimodal (distractors: auditory-static face and audiovisual-dynamic face) picture word task. The cross-modal task had a low load, and the multimodal task had a high load (i.e., respectively naming pictures displayed on a blank screen vs. below the talker's face on his T-shirt). Semantic content of distractors was manipulated to be related vs. unrelated to the picture (e.g., picture "dog" with distractors "bear" vs. "cheese"). If irrelevant semantic content manipulation influences naming times on both tasks despite variations in loads, Lavie's (2005) perceptual load model proposes that semantic access is independent of capacity-limited attentional resources; if, however, irrelevant content influences naming only on the cross-modal task (low load), the perceptual load model proposes that semantic access is dependent on attentional resources exhausted by the higher load task. Irrelevant semantic content affected performance for both tasks in 6- to 9-year-olds but only on the cross-modal task in 4- to 5-year-olds. The addition of visual speech did not influence results on the multimodal task. Younger and older children differ in dependence on attentional resources for semantic access by speech.

  7. Cross-modal working memory binding and word recognition skills: how specific is the link?

    PubMed

    Wang, Shinmin; Allen, Richard J

    2018-04-01

    Recent research has suggested that the creation of temporary bound representations of information from different sources within working memory uniquely relates to word recognition abilities in school-age children. However, it is unclear to what extent this link is attributable specifically to the binding ability for cross-modal information. This study examined the performance of Grade 3 (8-9 years old) children on binding tasks requiring either temporary association formation of two visual items (i.e., within-modal binding) or pairs of visually presented abstract shapes and auditorily presented nonwords (i.e., cross-modal binding). Children's word recognition skills were related to performance on the cross-modal binding task but not on the within-modal binding task. Further regression models showed that cross-modal binding memory was a significant predictor of word recognition when memory for its constituent elements, general abilities, and crucially, within-modal binding memory were taken into account. These findings may suggest a specific link between the ability to bind information across modalities within working memory and word recognition skills.

  8. The role of RT carry-over for congruence sequence effects in masked priming.

    PubMed

    Huber-Huber, Christoph; Ansorge, Ulrich

    2017-05-01

    The present study disentangles 2 sources of the congruence sequence effect with masked primes: congruence and response time of the previous trial (reaction time [RT] carry-over). Using arrows as primes and targets and a metacontrast masking procedure, we found congruence as well as congruence sequence effects. In addition, congruence sequence effects decreased when RT carry-over was accounted for in a mixed model analysis, suggesting that RT carry-over contributes to congruence sequence effects in masked priming. Crucially, effects of previous trial congruence were not cancelled out completely, indicating that RT carry-over and previous trial congruence are 2 sources feeding into the congruence sequence effect. A secondary task requiring response speed judgments demonstrated general awareness of response speed (Experiment 1), but removing this secondary task (Experiment 2) showed that RT carry-over effects were also present in single-task conditions. During (dual-task) prime-awareness test parts of both experiments, however, RT carry-over failed to modulate congruence effects, suggesting that some task sets of the participants can prevent the effect. The basic RT carry-over effects are consistent with the conflict adaptation account, with the adaptation to the statistics of the environment (ASE) model, and possibly with the temporal learning explanation. Additionally considering the task-dependence of RT carry-over, the results are most compatible with the conflict adaptation account. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Voice over: Audio-visual congruency and content recall in the gallery setting

    PubMed Central

    Fairhurst, Merle T.; Scott, Minnie; Deroy, Ophelia

    2017-01-01

    Experimental research has shown that pairs of stimuli which are congruent and assumed to ‘go together’ are recalled more effectively than an item presented in isolation. Will this multisensory memory benefit occur when stimuli are richer and longer, in an ecological setting? In the present study, we focused on an everyday situation of audio-visual learning and manipulated the relationship between audio guide tracks and viewed portraits in the galleries of the Tate Britain. By varying the gender and narrative style of the voice-over, we examined how the perceived congruency and assumed unity of the audio guide track with painted portraits affected subsequent recall. We show that tracks perceived as best matching the viewed portraits led to greater recall of both sensory and linguistic content. We provide the first evidence that manipulating crossmodal congruence and unity assumptions can effectively impact memory in a multisensory ecological setting, even in the absence of precise temporal alignment between sensory cues. PMID:28636667

  10. Voice over: Audio-visual congruency and content recall in the gallery setting.

    PubMed

    Fairhurst, Merle T; Scott, Minnie; Deroy, Ophelia

    2017-01-01

    Experimental research has shown that pairs of stimuli which are congruent and assumed to 'go together' are recalled more effectively than an item presented in isolation. Will this multisensory memory benefit occur when stimuli are richer and longer, in an ecological setting? In the present study, we focused on an everyday situation of audio-visual learning and manipulated the relationship between audio guide tracks and viewed portraits in the galleries of the Tate Britain. By varying the gender and narrative style of the voice-over, we examined how the perceived congruency and assumed unity of the audio guide track with painted portraits affected subsequent recall. We show that tracks perceived as best matching the viewed portraits led to greater recall of both sensory and linguistic content. We provide the first evidence that manipulating crossmodal congruence and unity assumptions can effectively impact memory in a multisensory ecological setting, even in the absence of precise temporal alignment between sensory cues.

  11. Neural substrate of initiation of cross-modal working memory retrieval.

    PubMed

    Zhang, Yangyang; Hu, Yang; Guan, Shuchen; Hong, Xiaolong; Wang, Zhaoxin; Li, Xianchun

    2014-01-01

    Cross-modal working memory requires integrating stimuli from different modalities and is associated with co-activation of distributed networks in the brain. However, how the brain initiates cross-modal working memory retrieval remains unclear. In the present study, we developed a cued matching task in which the necessity for cross-modal/unimodal memory retrieval and its initiation time were controlled by a task cue that appeared in the delay period. Using functional magnetic resonance imaging (fMRI), significantly larger brain activations were observed in the left lateral prefrontal cortex (l-LPFC), left superior parietal lobe (l-SPL), and thalamus in the cued cross-modal matching trials (CCMT) compared to those in the cued unimodal matching trials (CUMT). However, no significant differences in the brain activations prior to the task cue were observed for sensory stimulation in the l-LPFC and l-SPL areas. Although the thalamus displayed differential responses to the sensory stimulation between the two conditions, these differential responses were not the same as the responses to the task cues. These results revealed that the frontoparietal-thalamus network participated in the initiation of cross-modal working memory retrieval. Secondly, the l-SPL and thalamus showed differential activations between maintenance and working memory retrieval, which might be associated with the enhanced demand for cognitive resources.

  12. Lower pitch is larger, yet falling pitches shrink.

    PubMed

    Eitan, Zohar; Schupak, Asi; Gotler, Alex; Marks, Lawrence E

    2014-01-01

    Experiments using diverse paradigms, including speeded discrimination, indicate that pitch and visually-perceived size interact perceptually, and that higher pitch is congruent with smaller size. While nearly all of these studies used static stimuli, here we examine the interaction of dynamic pitch and dynamic size, using Garner's speeded discrimination paradigm. Experiment 1 examined the interaction of continuous rise/fall in pitch and increase/decrease in object size. Experiment 2 examined the interaction of static pitch and size (steady high/low pitches and large/small visual objects), using an identical procedure. Results indicate that static and dynamic auditory and visual stimuli interact in opposite ways. While for static stimuli (Experiment 2), higher pitch is congruent with smaller size (as suggested by earlier work), for dynamic stimuli (Experiment 1), ascending pitch is congruent with growing size, and descending pitch with shrinking size. In addition, while static stimuli (Experiment 2) exhibit both congruence and Garner effects, dynamic stimuli (Experiment 1) present congruence effects without Garner interference, a pattern that is not consistent with prevalent interpretations of Garner's paradigm. Our interpretation of these results focuses on effects of within-trial changes on processing in dynamic tasks and on the association of changes in apparent size with implied changes in distance. Results suggest that static and dynamic stimuli can differ substantially in their cross-modal mappings, and may rely on different processing mechanisms.

  13. How Children Use Emotional Prosody: Crossmodal Emotional Integration?

    ERIC Educational Resources Information Center

    Gil, Sandrine; Hattouti, Jamila; Laval, Virginie

    2016-01-01

    A crossmodal effect has been observed in the processing of facial and vocal emotion in adults and infants. For the first time, we assessed whether this effect is present in childhood by administering a crossmodal task similar to those used in seminal studies featuring emotional faces (i.e., a continuum of emotional expressions running from…

  14. Context Influences Holistic Processing of Non-face Objects in the Composite Task

    PubMed Central

    Richler, Jennifer J.; Bukach, Cindy M.; Gauthier, Isabel

    2013-01-01

    We explore whether holistic-like effects can be observed for non-face objects in novices as a result of the task context. We measure contextually-induced congruency effects for novel objects (Greebles) in a sequential matching selective attention task (composite task). When format at study was blocked, congruency effects were observed for study-misaligned, but not study-aligned, conditions (Experiment 1). However, congruency effects were observed in all conditions when study formats were randomized (Experiment 2), revealing that the presence of certain trial types (study-misaligned) in an experiment can induce congruency effects. In a dual task, a congruency effect for Greebles was induced in trials where a face was first encoded, only if it was aligned (Experiment 3). Thus, congruency effects can be induced by context that operates at the scale of the entire experiment or within a single trial. Implications for using the composite task to measure holistic processing are discussed. PMID:19304644

  15. Effect of Perceptual Load on Semantic Access by Speech in Children

    ERIC Educational Resources Information Center

    Jerger, Susan; Damian, Markus F.; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Herve

    2013-01-01

    Purpose: To examine whether semantic access by speech requires attention in children. Method: Children ("N" = 200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multimodal (distractors: auditory-static face and audiovisual- dynamic face) picture word task. The cross-modal task had a low load,…

  16. Judgments of auditory-visual affective congruence in adolescents with and without autism: a pilot study of a new task using fMRI.

    PubMed

    Loveland, Katherine A; Steinberg, Joel L; Pearson, Deborah A; Mansour, Rosleen; Reddoch, Stacy

    2008-10-01

    One of the most widely reported developmental deficits associated with autism is difficulty perceiving and expressing emotion appropriately. We examined brain activation associated with performance on a new task, the Emotional Congruence Task, which requires judging the affective congruence of facial expression and voice, as compared with their sex congruence. Participants in this pilot study were adolescents with normal IQ, with (n = 5) or without (n = 4) autism. In the emotional congruence condition, as compared to the sex congruence of voice and face, controls had significantly more activation than the Autism group in the orbitofrontal cortex, the superior temporal, parahippocampal, and posterior cingulate gyri, and occipital regions. Unlike controls, the Autism group did not have significantly greater prefrontal activation during the emotional congruence condition, but did during the sex congruence condition. Results indicate the Emotional Congruence Task can be used successfully to assess brain activation and behavior associated with integration of auditory and visual information for emotion. While the numbers in the groups are small, the results suggest that brain activity while performing the Emotional Congruence Task differed between adolescents with and without autism in fronto-limbic areas and in the superior temporal region. These findings must be confirmed using larger samples of participants.

  17. Sequential roles of primary somatosensory cortex and posterior parietal cortex in tactile-visual cross-modal working memory: a single-pulse transcranial magnetic stimulation (spTMS) study.

    PubMed

    Ku, Yixuan; Zhao, Di; Hao, Ning; Hu, Yi; Bodner, Mark; Zhou, Yong-Di

    2015-01-01

    Both monkey neurophysiological and human EEG studies have shown that association cortices, as well as primary sensory cortical areas, play an essential role in sequential neural processes underlying cross-modal working memory. The present study aims to further examine causal and sequential roles of the primary sensory cortex and association cortex in cross-modal working memory. Individual MRI-based single-pulse transcranial magnetic stimulation (spTMS) was applied to bilateral primary somatosensory cortices (SI) and the contralateral posterior parietal cortex (PPC), while participants were performing a tactile-visual cross-modal delayed matching-to-sample task. Time points of spTMS were 300 ms, 600 ms, 900 ms after the onset of the tactile sample stimulus in the task. The accuracy of task performance and reaction time were significantly impaired when spTMS was applied to the contralateral SI at 300 ms. Significant impairment on performance accuracy was also observed when the contralateral PPC was stimulated at 600 ms. SI and PPC play sequential and distinct roles in neural processes of cross-modal associations and working memory. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Cross-modal links among vision, audition, and touch in complex environments.

    PubMed

    Ferris, Thomas K; Sarter, Nadine B

    2008-02-01

This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.

  19. A Psychological Experiment on the Correspondence between Colors and Voiced Vowels in Non-synesthetes'

    NASA Astrophysics Data System (ADS)

    Miyahara, Tomoko; Koda, Ai; Sekiguchi, Rikuko; Amemiya, Toshihiko

In this study, we investigated the nature of cross-modal associations between colors and vowels. In Experiment 1, we examined the patterns of synesthetic correspondence between colors and vowels in a perceptual similarity experiment. The results were as follows: red was chosen for /a/, yellow for /i/, and blue for /o/ significantly more often than for the other vowels. Interestingly, this pattern of correspondence is similar to the pattern of colored hearing reported by synesthetes. In Experiment 2, we investigated the robustness of these cross-modal associations using an implicit association test (IAT). A clear congruence effect was found. Participants responded faster in congruent conditions (/i/ and yellow, /o/ and blue) than in incongruent conditions (/i/ and blue, /o/ and yellow). This result suggests that the weak synesthesia between vowels and colors in non-synesthetes is not merely a matter of conscious choice, but reflects underlying implicit associations.

  20. Effect of perceptual load on semantic access by speech in children

    PubMed Central

    Jerger, Susan; Damian, Markus F.; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Hervè

    2013-01-01

Purpose To examine whether semantic access by speech requires attention in children. Method Children (N=200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multi-modal (distractors: auditory-static face and audiovisual-dynamic face) picture word task. The cross-modal task had a low load and the multi-modal task a high load [i.e., respectively naming pictures displayed 1) on a blank screen vs 2) below the talker’s face on his T-shirt]. Semantic content of distractors was manipulated to be related vs unrelated to the picture (e.g., picture dog with distractors bear vs cheese). Lavie's (2005) perceptual load model proposes that semantic access is independent of capacity-limited attentional resources if the irrelevant semantic-content manipulation influences naming times on both tasks despite the variation in load, but dependent on attentional resources (exhausted by the higher-load task) if irrelevant content influences naming only on the cross-modal (low-load) task. Results Irrelevant semantic content affected performance on both tasks in 6- to 9-year-olds, but only on the cross-modal task in 4- to 5-year-olds. The addition of visual speech did not influence results on the multi-modal task. Conclusion Younger and older children differ in their dependence on attentional resources for semantic access by speech. PMID:22896045

  1. Infants are superior in implicit crossmodal learning and use other learning mechanisms than adults

    PubMed Central

    von Frieling, Marco; Röder, Brigitte

    2017-01-01

During development, internal models of the sensory world must be acquired and then continuously adapted. We used event-related potentials (ERPs) to test the hypothesis that infants extract crossmodal statistics implicitly, whereas adults learn them only when they are task relevant. Participants were passively exposed to frequent standard audio-visual combinations (A1V1, A2V2, p=0.35 each), rare recombinations of these standard stimuli (A1V2, A2V1, p=0.10 each), and a rare audio-visual deviant with infrequent auditory and visual elements (A3V3, p=0.10). While both six-month-old infants and adults differentiated between rare deviants and standards at early neural processing stages, only infants were sensitive to crossmodal statistics, as indicated by a late ERP difference between standard and recombined stimuli. A second experiment revealed that adults differentiated recombined and standard combinations when crossmodal combinations were task relevant. These results demonstrate a heightened sensitivity to crossmodal statistics in infants and a change in learning mode from infancy to adulthood. PMID:28949291

  2. The picture superiority effect in a cross-modality recognition task.

    PubMed

Stenberg, G; Radeborg, K; Hedman, L R

    1995-07-01

Words and pictures were studied and recognition tests given in which each studied object was to be recognized in both word and picture format. The main dependent variable was the latency of the recognition decision. The purpose was to investigate the effects of study modality (word or picture), of congruence between study and test modalities, and of priming resulting from repeated testing. Experiments 1 and 2 used the same basic design, but the latter also varied retention interval. Experiment 3 added a manipulation of instructions to name studied objects, and Experiment 4 deviated from the others by presenting both picture and word referring to the same object together for study. The results showed that congruence between study and test modalities consistently facilitated recognition. Furthermore, items studied as pictures were more rapidly recognized than were items studied as words. With repeated testing, the second instance was affected by its predecessor, but the facilitating effect of picture-to-word priming exceeded that of word-to-picture priming. The findings suggest a two-stage recognition process, in which the first stage is based on perceptual familiarity and the second uses semantic links for a retrieval search. Common-code theories that grant privileged access to the semantic code for pictures or, alternatively, dual-code theories that assume mnemonic superiority for the image code are supported by the findings. Explanations of the picture superiority effect as resulting from dual encoding of pictures are not supported by the data.

  3. On the relative contributions of multisensory integration and crossmodal exogenous spatial attention to multisensory response enhancement.

    PubMed

    Van der Stoep, N; Spence, C; Nijboer, T C W; Van der Stigchel, S

    2015-11-01

    Two processes that can give rise to multisensory response enhancement (MRE) are multisensory integration (MSI) and crossmodal exogenous spatial attention. It is, however, currently unclear what the relative contribution of each of these is to MRE. We investigated this issue using two tasks that are generally assumed to measure MSI (a redundant target effect task) and crossmodal exogenous spatial attention (a spatial cueing task). One block of trials consisted of unimodal auditory and visual targets designed to provide a unimodal baseline. In two other blocks of trials, the participants were presented with spatially and temporally aligned and misaligned audiovisual (AV) targets (0, 50, 100, and 200ms SOA). In the integration block, the participants were instructed to respond to the onset of the first target stimulus that they detected (A or V). The instruction for the cueing block was to respond only to the onset of the visual targets. The targets could appear at one of three locations: left, center, and right. The participants were instructed to respond only to lateral targets. The results indicated that MRE was caused by MSI at 0ms SOA. At 50ms SOA, both crossmodal exogenous spatial attention and MSI contributed to the observed MRE, whereas the MRE observed at the 100 and 200ms SOAs was attributable to crossmodal exogenous spatial attention, alerting, and temporal preparation. These results therefore suggest that there may be a temporal window in which both MSI and exogenous crossmodal spatial attention can contribute to multisensory response enhancement. Copyright © 2015 Elsevier B.V. All rights reserved.
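    For readers who want to connect the MSI-versus-attention question to a concrete analysis, multisensory response enhancement is commonly evaluated against Miller's race-model inequality, which asks whether redundant-target reaction times are faster than any race between independent unisensory processes could produce. The abstract does not state that this exact analysis was used here, so the sketch below is a generic, hedged illustration; the function name and quantile grid are arbitrary choices.

```python
# Hedged sketch: empirical test of the race-model inequality
#   P(RT <= t | AV) <= P(RT <= t | A) + P(RT <= t | V).
# Positive return values indicate a violation, i.e. enhancement beyond
# what statistical facilitation between unisensory channels can explain.
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, probs=np.arange(0.05, 1.0, 0.05)):
    """Compare empirical CDFs at common quantiles of the pooled RT distribution."""
    t = np.quantile(np.concatenate([rt_av, rt_a, rt_v]), probs)
    ecdf = lambda rts: np.searchsorted(np.sort(rts), t, side="right") / len(rts)
    bound = np.minimum(ecdf(rt_a) + ecdf(rt_v), 1.0)   # race-model upper bound
    return ecdf(rt_av) - bound
```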

  4. Aging and the visual, haptic, and cross-modal perception of natural object shape.

    PubMed

    Norman, J Farley; Crabtree, Charles E; Norman, Hideko F; Moncrief, Brandon K; Herrmann, Molly; Kapley, Noah

    2006-01-01

    One hundred observers participated in two experiments designed to investigate aging and the perception of natural object shape. In the experiments, younger and older observers performed either a same/different shape discrimination task (experiment 1) or a cross-modal matching task (experiment 2). Quantitative effects of age were found in both experiments. The effect of age in experiment 1 was limited to cross-modal shape discrimination: there was no effect of age upon unimodal (ie within a single perceptual modality) shape discrimination. The effect of age in experiment 2 was eliminated when the older observers were either given an unlimited amount of time to perform the task or when the number of response alternatives was decreased. Overall, the results of the experiments reveal that older observers can effectively perceive 3-D shape from both vision and haptics.

  5. The contribution of perceptual factors and training on varying audiovisual integration capacity.

    PubMed

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2018-06-01

The suggestion that the capacity of audiovisual integration has an upper limit of 1 was challenged in 4 experiments using perceptual factors and training to enhance the binding of auditory and visual information. Participants were required to note a number of specific visual dot locations that changed in polarity when a critical auditory stimulus was presented, under relatively fast (200-ms stimulus onset asynchrony [SOA]) and slow (700-ms SOA) rates of presentation. In Experiment 1, transient cross-modal congruency between the brightness of polarity change and pitch of the auditory tone was manipulated. In Experiment 2, sustained chunking was enabled on certain trials by connecting varying dot locations with vertices. In Experiment 3, training was employed to determine if capacity would increase through repeated experience with an intermediate presentation rate (450 ms). Estimates of audiovisual integration capacity (K) were larger than 1 during cross-modal congruency at slow presentation rates (Experiment 1), during perceptual chunking at slow and fast presentation rates (Experiment 2), and during an intermediate presentation rate posttraining (Experiment 3). Finally, Experiment 4 showed a linear increase in K using SOAs ranging from 100 to 600 ms, suggestive of quantitative rather than qualitative changes in the mechanisms of audiovisual integration as a function of presentation rate. The data compromise the suggestion that the capacity of audiovisual integration is limited to 1 and suggest that the ability to bind sounds to sights is contingent on individual and environmental factors. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
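    The capacity estimate K referred to above is typically derived from hit and false-alarm rates in change-detection designs. The abstract does not give the authors' exact estimator, so the following is a generic Cowan-style sketch rather than their formula, and the numbers are invented for illustration.

```python
# Hedged sketch of a Cowan-style capacity estimate, K = N * (H - F):
# N candidate dot locations, H = hit rate, F = false-alarm rate.
# K > 1 would indicate that more than one visual location was bound
# to the critical auditory event on a given trial.
def capacity_estimate(n_items: int, hit_rate: float, false_alarm_rate: float) -> float:
    return n_items * (hit_rate - false_alarm_rate)

# Example: 4 candidate locations, 80% hits, 20% false alarms -> K = 2.4
print(capacity_estimate(4, 0.80, 0.20))
```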

  6. Conflict Background Triggered Congruency Sequence Effects in Graphic Judgment Task

    PubMed Central

    Zhao, Liang; Wang, Yonghui

    2013-01-01

Congruency sequence effects refer to the reduction of congruency effects following an incongruent trial compared with a congruent trial. The conflict monitoring account, one of the most influential explanations of this effect, assumes that the sequential modulations are evoked by response conflict. The present study aimed to explore congruency sequence effects in the absence of response conflict. We found that congruency sequence effects occurred in a graphic judgment task, in which the conflict stimuli acted as irrelevant information. The findings reveal that processing task-irrelevant conflict stimulus features can also induce sequential modulations of interference. The results do not support the conflict-monitoring interpretation and instead favor a feature integration account, in which congruency sequence effects are attributed to repetitions of stimulus and response features. PMID:23372766

  7. Interaction of Perceptual Grouping and Crossmodal Temporal Capture in Tactile Apparent-Motion

    PubMed Central

    Chen, Lihan; Shi, Zhuanghua; Müller, Hermann J.

    2011-01-01

    Previous studies have shown that in tasks requiring participants to report the direction of apparent motion, task-irrelevant mono-beeps can “capture” visual motion perception when the beeps occur temporally close to the visual stimuli. However, the contributions of the relative timing of multimodal events and the event structure, modulating uni- and/or crossmodal perceptual grouping, remain unclear. To examine this question and extend the investigation to the tactile modality, the current experiments presented tactile two-tap apparent-motion streams, with an SOA of 400 ms between successive, left-/right-hand middle-finger taps, accompanied by task-irrelevant, non-spatial auditory stimuli. The streams were shown for 90 seconds, and participants' task was to continuously report the perceived (left- or rightward) direction of tactile motion. In Experiment 1, each tactile stimulus was paired with an auditory beep, though odd-numbered taps were paired with an asynchronous beep, with audiotactile SOAs ranging from −75 ms to 75 ms. Perceived direction of tactile motion varied systematically with audiotactile SOA, indicative of a temporal-capture effect. In Experiment 2, two audiotactile SOAs—one short (75 ms), one long (325 ms)—were compared. The long-SOA condition preserved the crossmodal event structure (so the temporal-capture dynamics should have been similar to that in Experiment 1), but both beeps now occurred temporally close to the taps on one side (even-numbered taps). The two SOAs were found to produce opposite modulations of apparent motion, indicative of an influence of crossmodal grouping. In Experiment 3, only odd-numbered, but not even-numbered, taps were paired with auditory beeps. This abolished the temporal-capture effect and, instead, a dominant percept of apparent motion from the audiotactile side to the tactile-only side was observed independently of the SOA variation. These findings suggest that asymmetric crossmodal grouping leads to an attentional modulation of apparent motion, which inhibits crossmodal temporal-capture effects. PMID:21383834

  8. Age-related differences in audiovisual interactions of semantically different stimuli.

    PubMed

    Viggiano, Maria Pia; Giovannelli, Fabio; Giganti, Fiorenza; Rossi, Arianna; Metitieri, Tiziana; Rebai, Mohamed; Guerrini, Renzo; Cincotta, Massimo

    2017-01-01

Converging results have shown that adults benefit from congruent multisensory stimulation in the identification of complex stimuli, whereas the developmental trajectory of the ability to integrate multisensory inputs in children is less well understood. In this study we explored the effects of audiovisual semantic congruency on identification of visually presented stimuli belonging to different categories, using a cross-modal approach. Four groups of children ranging in age from 6 to 13 years and adults were administered an object identification task of visually presented pictures belonging to living and nonliving entities. Stimuli were presented in visual, congruent audiovisual, incongruent audiovisual, and noise conditions. Results showed that children under 12 years of age did not benefit from multisensory presentation in speeding up identification. In children, the incongruent audiovisual condition had an interfering effect, especially for the identification of living things. These data suggest that the facilitating effect of audiovisual interaction on semantic processing undergoes developmental changes and that the consolidation of adult-like processing of multisensory stimuli begins in late childhood. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. Cross-modal transfer of the conditioned eyeblink response during interstimulus interval discrimination training in young rats

    PubMed Central

    Brown, Kevin L.; Stanton, Mark E.

    2008-01-01

Eyeblink classical conditioning (EBC) was observed across a broad developmental period with tasks utilizing two interstimulus intervals (ISIs). In ISI discrimination, two distinct conditioned stimuli (CSs; light and tone) are reinforced with a periocular shock unconditioned stimulus (US) at two different CS-US intervals. Temporal uncertainty is identical in design with the exception that the same CS is presented at both intervals. Developmental changes in conditioning have been reported in each task beyond ages when single-ISI learning is well developed. The present study sought to replicate and extend these previous findings by testing each task at four separate ages. Consistent with previous findings, younger rats (postnatal day [PD] 23 and 30) trained in ISI discrimination showed evidence of enhanced cross-modal influence of the short CS-US pairing upon long CS conditioning relative to older subjects. ISI discrimination training at PD43-47 yielded outcomes similar to those in adults (PD65-71). Cross-modal transfer effects in this task therefore appear to diminish between PD30 and PD43-47. Comparisons of ISI discrimination with temporal uncertainty indicated that cross-modal transfer in ISI discrimination at the youngest ages did not represent complete generalization across CSs. ISI discrimination undergoes a more protracted developmental emergence than single-cue EBC and may be a more sensitive indicator of developmental disorders involving cerebellar dysfunction. PMID:18726989

  10. Short-term memory stores organized by information domain.

    PubMed

    Noyce, Abigail L; Cestero, Nishmar; Shinn-Cunningham, Barbara G; Somers, David C

    2016-04-01

Vision and audition have complementary affinities, with vision excelling in spatial resolution and audition excelling in temporal resolution. Here, we investigated the relationships among the visual and auditory modalities and spatial and temporal short-term memory (STM) using change detection tasks. We created short sequences of visual or auditory items, such that each item within a sequence arose at a unique spatial location at a unique time. On each trial, two successive sequences were presented; subjects attended to either space (the sequence of locations) or time (the sequence of inter-item intervals) and reported whether the patterns of locations or intervals were identical. Each subject completed blocks of unimodal trials (both sequences presented in the same modality) and crossmodal trials (Sequence 1 visual, Sequence 2 auditory, or vice versa) for both spatial and temporal tasks. We found a strong interaction between modality and task: Spatial performance was best on unimodal visual trials, whereas temporal performance was best on unimodal auditory trials. The order of modalities on crossmodal trials also mattered, suggesting that perceptual fidelity at encoding is critical to STM. Critically, no cost was attributable to crossmodal comparison: In both tasks, performance on crossmodal trials was as good as or better than on the weaker unimodal trials. STM representations of space and time can guide change detection in either the visual or the auditory modality, suggesting that the temporal or spatial organization of STM may supersede sensory-specific organization.

  11. Assessing the Effect of Musical Congruency on Wine Tasting in a Live Performance Setting

    PubMed Central

    Wang, Qian (Janice)

    2015-01-01

    At a wine tasting event with live classical music, we assessed whether participants would agree that certain wine and music pairings were congruent. We also assessed the effect of musical congruency on the wine tasting experience. The participants were given two wines to taste and two pieces of music—one chosen to match each wine—were performed live. Half of the participants tasted the wines while listening to the putatively more congruent music, the rest tasted the wines while listening to the putatively less congruent music. The participants rated the wine–music match and assessed the fruitiness, acidity, tannins, richness, complexity, length, and pleasantness of the wines. The results revealed that the music chosen to be congruent with each wine was indeed rated as a better match than the other piece of music. Furthermore, the music playing in the background also had a significant effect on the perceived acidity and fruitiness of the wines. These findings therefore provide further support for the view that music can modify the wine drinking experience. However, the present results leave open the question of whether the crossmodal congruency between music and wine itself has any overarching influence on the wine drinking experience. PMID:27433313

  12. Functional brain and age-related changes associated with congruency in task switching

    PubMed Central

    Eich, Teal S.; Parker, David; Liu, Dan; Oh, Hwamee; Razlighi, Qolamreza; Gazes, Yunglin; Habeck, Christian; Stern, Yaakov

    2016-01-01

    Alternating between completing two simple tasks, as opposed to completing only one task, has been shown to produce costs to performance and changes to neural patterns of activity, effects which are augmented in old age. Cognitive conflict may arise from factors other than switching tasks, however. Sensorimotor congruency (whether stimulus-response mappings are the same or different for the two tasks) has been shown to behaviorally moderate switch costs in older, but not younger adults. In the current study, we used fMRI to investigate the neurobiological mechanisms of response-conflict congruency effects within a task switching paradigm in older (N=75) and younger (N=62) adults. Behaviorally, incongruency moderated age-related differences in switch costs. Neurally, switch costs were associated with greater activation in the dorsal attention network for older relative to younger adults. We also found that older adults recruited an additional set of brain areas in the ventral attention network to a greater extent than did younger adults to resolve congruency-related response-conflict. These results suggest both a network and an age-based dissociation between congruency and switch costs in task switching. PMID:27520472

  13. Spatial Attention and Audiovisual Interactions in Apparent Motion

    ERIC Educational Resources Information Center

    Sanabria, Daniel; Soto-Faraco, Salvador; Spence, Charles

    2007-01-01

    In this study, the authors combined the cross-modal dynamic capture task (involving the horizontal apparent movement of visual and auditory stimuli) with spatial cuing in the vertical dimension to investigate the role of spatial attention in cross-modal interactions during motion perception. Spatial attention was manipulated endogenously, either…

  14. Control processes through the suppression of the automatic response activation triggered by task-irrelevant information in the Simon-type tasks.

    PubMed

    Kim, Sanga; Lee, Sang Ho; Cho, Yang Seok

    2015-11-01

    The congruency sequence effect, one of the indices of cognitive control, refers to a smaller congruency effect after an incongruent than congruent trial. Although the effect has been found across a variety of conflict tasks, there is not yet agreement on the underlying mechanism. The present study investigated the mechanism underlying cognitive control by using a cross-task paradigm. In Experiments 1, 2, and 3, participants performed a modified Simon task and a spatial Stroop task alternately in a trial-by-trial manner. The task-irrelevant dimension of the two tasks was perceptually and conceptually identical in Experiment 1, whereas it was perceptually different but conceptually identical in Experiment 2. The response sets for both tasks were different in Experiment 3. In Experiment 4, participants performed two Simon tasks with different task-relevant dimensions. In all experiments in which the task-irrelevant dimension and response mode were shared, significant congruency sequence effects were found between the two different congruencies, indicating that Simon-type conflicts were resolved by a control mechanism, which is specific to an abstract task-irrelevant stimulus spatial dimension. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. The effects of perceptual priming on 4-year-olds' haptic-to-visual cross-modal transfer.

    PubMed

    Kalagher, Hilary

    2013-01-01

    Four-year-old children often have difficulty visually recognizing objects that were previously experienced only haptically. This experiment attempts to improve their performance in these haptic-to-visual transfer tasks. Sixty-two 4-year-old children participated in priming trials in which they explored eight unfamiliar objects visually, haptically, or visually and haptically together. Subsequently, all children participated in the same haptic-to-visual cross-modal transfer task. In this task, children haptically explored the objects that were presented in the priming phase and then visually identified a match from among three test objects, each matching the object on only one dimension (shape, texture, or color). Children in all priming conditions predominantly made shape-based matches; however, the most shape-based matches were made in the Visual and Haptic condition. All kinds of priming provided the necessary memory traces upon which subsequent haptic exploration could build a strong enough representation to enable subsequent visual recognition. Haptic exploration patterns during the cross-modal transfer task are discussed and the detailed analyses provide a unique contribution to our understanding of the development of haptic exploratory procedures.

  16. Attentional reorienting triggers spatial asymmetries in a search task with cross-modal spatial cueing

    PubMed Central

    Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias

    2018-01-01

Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Specifically, facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task with four conditions: no auditory cues (i.e., unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries. Specifically, search times were increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637

  17. Integrating Conceptual Knowledge Within and Across Representational Modalities

    PubMed Central

    McNorgan, Chris; Reid, Jackie; McRae, Ken

    2011-01-01

    Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, communication within- and between-modality is accomplished using either direct connectivity, or a central semantic hub. In deep models, modalities are connected via cascading integration sites with successively wider receptive fields. Four experiments provide the first direct behavioral tests of these models using speeded tasks involving feature inference and concept activation. Shallow models predict no within-modal versus cross-modal difference in either task, whereas deep models predict a within-modal advantage for feature inference, but a cross-modal advantage for concept activation. Experiments 1 and 2 used relatedness judgments to tap participants’ knowledge of relations for within- and cross-modal feature pairs. Experiments 3 and 4 used a dual feature verification task. The pattern of decision latencies across Experiments 1 to 4 is consistent with a deep integration hierarchy. PMID:21093853

  18. The informativity of sound modulates crossmodal facilitation of visual discrimination: a fMRI study.

    PubMed

    Li, Qi; Yu, Hongtao; Li, Xiujun; Sun, Hongzan; Yang, Jingjing; Li, Chunlin

    2017-01-18

Many studies have investigated behavioral crossmodal facilitation when a visual stimulus is accompanied by a concurrent task-irrelevant sound. Lippert and colleagues reported that a concurrent task-irrelevant sound reduced the uncertainty of the timing of the visual display and improved perceptual responses (informative sound). However, the neural mechanism by which the informativity of sound affects crossmodal facilitation of visual discrimination remained unclear. In this study, we used event-related functional MRI to investigate the neural mechanisms underlying the role of informativity of sound in crossmodal facilitation of visual discrimination. Significantly faster reaction times were observed when there was an informative relationship between auditory and visual stimuli. The functional MRI results showed sound-informativity-induced activation enhancement in regions including the left fusiform gyrus and the right lateral occipital complex. Further correlation analysis showed that activity in the right lateral occipital complex was significantly correlated with the behavioral benefit in reaction times. This suggests that this region was modulated by the informative relationship within audiovisual stimuli that was learnt during the experiment, resulting in late-stage multisensory integration and enhanced behavioral responses.

  19. Thermal-to-visible face recognition using partial least squares.

    PubMed

    Hu, Shuowen; Choi, Jonghyun; Chan, Alex L; Schwartz, William Robson

    2015-03-01

    Although visible face recognition has been an active area of research for several decades, cross-modal face recognition has only been explored by the biometrics community relatively recently. Thermal-to-visible face recognition is one of the most difficult cross-modal face recognition challenges, because of the difference in phenomenology between the thermal and visible imaging modalities. We address the cross-modal recognition problem using a partial least squares (PLS) regression-based approach consisting of preprocessing, feature extraction, and PLS model building. The preprocessing and feature extraction stages are designed to reduce the modality gap between the thermal and visible facial signatures, and facilitate the subsequent one-vs-all PLS-based model building. We incorporate multi-modal information into the PLS model building stage to enhance cross-modal recognition. The performance of the proposed recognition algorithm is evaluated on three challenging datasets containing visible and thermal imagery acquired under different experimental scenarios: time-lapse, physical tasks, mental tasks, and subject-to-camera range. These scenarios represent difficult challenges relevant to real-world applications. We demonstrate that the proposed method performs robustly for the examined scenarios.
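    To make the one-vs-all PLS matching step more concrete, the sketch below shows how such a scheme could be set up with scikit-learn. It is a simplified stand-in, not the authors' implementation: it assumes that preprocessing and feature extraction have already produced fixed-length descriptors in which the thermal-visible modality gap has been reduced, and all function and variable names are illustrative.

```python
# Hedged sketch of one-vs-all PLS model building for cross-modal face matching.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def build_gallery_models(visible_feats, subject_labels, n_components=10):
    """Fit one PLS regressor per gallery subject (target +1 for that subject, -1 otherwise)."""
    models = {}
    for subject in np.unique(subject_labels):
        targets = np.where(subject_labels == subject, 1.0, -1.0)
        pls = PLSRegression(n_components=n_components)
        pls.fit(visible_feats, targets)
        models[subject] = pls
    return models

def identify(thermal_probe_feat, models):
    """Score a thermal probe against every subject model; the highest response wins."""
    scores = {s: m.predict(thermal_probe_feat.reshape(1, -1))[0, 0] for s, m in models.items()}
    return max(scores, key=scores.get)
```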

  20. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    PubMed

    Alais, David; Cass, John

    2010-06-23

An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.
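    The temporal order discrimination thresholds referred to above are conventionally obtained by fitting a psychometric function to the proportion of "one modality first" responses across SOAs and taking the slope-based just-noticeable difference (JND) as the threshold. The abstract does not describe the exact fitting procedure, so the snippet below is a generic, hedged illustration with invented data.

```python
# Hedged sketch: estimating a TOJ threshold (JND) and point of subjective
# simultaneity (PSS) by fitting a cumulative Gaussian to response proportions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa_ms, pss, jnd):
    # pss: SOA at which both orders are equally likely; jnd: slope parameter.
    return norm.cdf(soa_ms, loc=pss, scale=jnd)

soas = np.array([-120, -80, -40, 0, 40, 80, 120])                       # ms, visual lead positive
p_visual_first = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])   # invented data

(pss, jnd), _ = curve_fit(psychometric, soas, p_visual_first, p0=[0.0, 50.0])
print(f"PSS = {pss:.1f} ms, temporal-order threshold (JND) ~ {jnd:.1f} ms")
```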

  1. An extended research of crossmodal correspondence between color and sound in psychology and cognitive ergonomics.

    PubMed

    Sun, Xiuwen; Li, Xiaoling; Ji, Lingyu; Han, Feng; Wang, Huifen; Liu, Yang; Chen, Yao; Lou, Zhiyuan; Li, Zhuoyun

    2018-01-01

Based on the existing research on sound symbolism and crossmodal correspondence, this study extended research on cross-modal correspondence between various sound attributes and color properties in a group of non-synesthetes. In Experiment 1, we assessed the associations between each property of sounds and colors. Twenty sounds with five auditory properties (pitch, roughness, sharpness, tempo and discontinuity), each varied in four levels, were used as the sound stimuli. Forty-nine colors with different hues, saturation and brightness were used to match to those sounds. Results revealed that besides pitch and tempo, roughness and sharpness also played roles in sound-color correspondence. Reaction times for sound-hue matching were somewhat longer than those for sound-lightness matching. In Experiment 2, a speeded target discrimination task was used to assess whether the associations between sound attributes and color properties could invoke natural cross-modal correspondence and improve participants' cognitive efficiency in cognitive tasks. Several typical sound-color pairings were selected according to the results of Experiment 1. Participants were divided into two groups (congruent and incongruent). In each trial participants had to judge whether the presented color could appropriately be associated with the sound stimuli. Results revealed that participants responded more quickly and accurately in the congruent group than in the incongruent group. It was also found that there was no significant difference in reaction times and error rates between sound-hue and sound-lightness. The results of Experiments 1 and 2 indicate the existence of a robust crossmodal correspondence between multiple attributes of sound and color, which also has a strong influence on cognitive tasks. The inconsistency of the reaction times between sound-hue and sound-lightness in Experiments 1 and 2 is probably due to the difference in experimental protocol, which indicates that the complexity of experimental design may be an important factor in crossmodal correspondence phenomena.

  2. An extended research of crossmodal correspondence between color and sound in psychology and cognitive ergonomics

    PubMed Central

    Sun, Xiuwen; Ji, Lingyu; Han, Feng; Wang, Huifen; Liu, Yang; Chen, Yao; Lou, Zhiyuan; Li, Zhuoyun

    2018-01-01

Based on the existing research on sound symbolism and crossmodal correspondence, this study extended research on cross-modal correspondence between various sound attributes and color properties in a group of non-synesthetes. In Experiment 1, we assessed the associations between each property of sounds and colors. Twenty sounds with five auditory properties (pitch, roughness, sharpness, tempo and discontinuity), each varied in four levels, were used as the sound stimuli. Forty-nine colors with different hues, saturation and brightness were used to match to those sounds. Results revealed that besides pitch and tempo, roughness and sharpness also played roles in sound-color correspondence. Reaction times for sound-hue matching were somewhat longer than those for sound-lightness matching. In Experiment 2, a speeded target discrimination task was used to assess whether the associations between sound attributes and color properties could invoke natural cross-modal correspondence and improve participants’ cognitive efficiency in cognitive tasks. Several typical sound-color pairings were selected according to the results of Experiment 1. Participants were divided into two groups (congruent and incongruent). In each trial participants had to judge whether the presented color could appropriately be associated with the sound stimuli. Results revealed that participants responded more quickly and accurately in the congruent group than in the incongruent group. It was also found that there was no significant difference in reaction times and error rates between sound-hue and sound-lightness. The results of Experiments 1 and 2 indicate the existence of a robust crossmodal correspondence between multiple attributes of sound and color, which also has a strong influence on cognitive tasks. The inconsistency of the reaction times between sound-hue and sound-lightness in Experiments 1 and 2 is probably due to the difference in experimental protocol, which indicates that the complexity of experimental design may be an important factor in crossmodal correspondence phenomena. PMID:29507834

  3. Conflict control in task conflict and response conflict.

    PubMed

    Braverman, Ami; Meiran, Nachshon

    2015-03-01

Studies have suggested that conflict control can modulate conflict effects in response to differing levels of conflict context. The current study probed, in two proportion-congruence experiments, the relevance of both task conflict (between a currently relevant task and irrelevant task alternatives) and response conflict (between a currently relevant response and irrelevant response alternatives) to conflict control. In Experiment 1, proportion congruence was manipulated between blocks, and in Experiment 2, it was manipulated between items. The response conflict effect was smaller when the proportion of incongruence was high, regardless of whether task conflict or response conflict proportions were manipulated. These findings suggest that both task conflict and response conflict are monitored, but that only response conflict is influenced by this monitoring process. Theoretical implications are discussed.

  4. Suppression and Working Memory in Auditory Comprehension of L2 Narratives: Evidence from Cross-Modal Priming

    ERIC Educational Resources Information Center

    Wu, Shiyu; Ma, Zheng

    2016-01-01

    Using a cross-modal priming task, the present study explores whether Chinese-English bilinguals process goal related information during auditory comprehension of English narratives like native speakers. Results indicate that English native speakers adopted both mechanisms of suppression and enhancement to modulate the activation of goals and keep…

  5. Coherent emotional perception from body expressions and the voice.

    PubMed

    Yeh, Pei-Wen; Geangu, Elena; Reid, Vincent

    2016-10-01

    Perceiving emotion from multiple modalities enhances the perceptual sensitivity of an individual. This allows more accurate judgments of others' emotional states, which is crucial to appropriate social interactions. It is known that body expressions effectively convey emotional messages, although fewer studies have examined how this information is combined with the auditory cues. The present study used event-related potentials (ERP) to investigate the interaction between emotional body expressions and vocalizations. We also examined emotional congruency between auditory and visual information to determine how preceding visual context influences later auditory processing. Consistent with prior findings, a reduced N1 amplitude was observed in the audiovisual condition compared to an auditory-only condition. While this component was not sensitive to the modality congruency, the P2 was sensitive to the emotionally incompatible audiovisual pairs. Further, the direction of these congruency effects was different in terms of facilitation or suppression based on the preceding contexts. Overall, the results indicate a functionally dissociated mechanism underlying two stages of emotional processing whereby N1 is involved in cross-modal processing, whereas P2 is related to assessing a unifying perceptual content. These data also indicate that emotion integration can be affected by the specific emotion that is presented. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Auditory Emotion Word Primes Influence Emotional Face Categorization in Children and Adults, but Not Vice Versa.

    PubMed

    Vesker, Michael; Bahn, Daniela; Kauschke, Christina; Tschense, Monika; Degé, Franziska; Schwarzer, Gudrun

    2018-01-01

    In order to assess how the perception of audible speech and facial expressions influence one another for the perception of emotions, and how this influence might change over the course of development, we conducted two cross-modal priming experiments with three age groups of children (6-, 9-, and 12-years old), as well as college-aged adults. In Experiment 1, 74 children and 24 adult participants were tasked with categorizing photographs of emotional faces as positive or negative as quickly as possible after being primed with emotion words presented via audio in valence-congruent and valence-incongruent trials. In Experiment 2, 67 children and 24 adult participants carried out a similar categorization task, but with faces acting as visual primes, and emotion words acting as auditory targets. The results of Experiment 1 showed that participants made more errors when categorizing positive faces primed by negative words versus positive words, and that 6-year-old children are particularly sensitive to positive word primes, giving faster correct responses regardless of target valence. Meanwhile, the results of Experiment 2 did not show any congruency effects for priming by facial expressions. Thus, audible emotion words seem to exert an influence on the emotional categorization of faces, while faces do not seem to influence the categorization of emotion words in a significant way.

  7. Modality-specific effects on crosstalk in task switching: evidence from modality compatibility using bimodal stimulation.

    PubMed

    Stephan, Denise Nadine; Koch, Iring

    2016-11-01

    The present study was aimed at examining modality-specific influences in task switching. To this end, participants switched either between modality compatible tasks (auditory-vocal and visual-manual) or incompatible spatial discrimination tasks (auditory-manual and visual-vocal). In addition, auditory and visual stimuli were presented simultaneously (i.e., bimodally) in each trial, so that selective attention was required to process the task-relevant stimulus. The inclusion of bimodal stimuli enabled us to assess congruence effects as a converging measure of increased between-task interference. The tasks followed a pre-instructed sequence of double alternations (AABB), so that no explicit task cues were required. The results show that switching between two modality incompatible tasks increases both switch costs and congruence effects compared to switching between two modality compatible tasks. The finding of increased congruence effects in modality incompatible tasks supports our explanation in terms of ideomotor "backward" linkages between anticipated response effects and the stimuli that called for this response in the first place. According to this generalized ideomotor idea, the modality match between response effects and stimuli would prime selection of a response in the compatible modality. This priming would cause increased difficulties to ignore the competing stimulus and hence increases the congruence effect. Moreover, performance would be hindered when switching between modality incompatible tasks and facilitated when switching between modality compatible tasks.

  8. Integrating conceptual knowledge within and across representational modalities.

    PubMed

    McNorgan, Chris; Reid, Jackie; McRae, Ken

    2011-02-01

    Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, communication within- and between-modality is accomplished using either direct connectivity, or a central semantic hub. In deep models, modalities are connected via cascading integration sites with successively wider receptive fields. Four experiments provide the first direct behavioral tests of these models using speeded tasks involving feature inference and concept activation. Shallow models predict no within-modal versus cross-modal difference in either task, whereas deep models predict a within-modal advantage for feature inference, but a cross-modal advantage for concept activation. Experiments 1 and 2 used relatedness judgments to tap participants' knowledge of relations for within- and cross-modal feature pairs. Experiments 3 and 4 used a dual-feature verification task. The pattern of decision latencies across Experiments 1-4 is consistent with a deep integration hierarchy. Copyright © 2010 Elsevier B.V. All rights reserved.

  9. Cross-Modal Attention Effects in the Vestibular Cortex during Attentive Tracking of Moving Objects.

    PubMed

    Frank, Sebastian M; Sun, Liwei; Forster, Lisa; Tse, Peter U; Greenlee, Mark W

    2016-12-14

    The midposterior fundus of the Sylvian fissure in the human brain is central to the cortical processing of vestibular cues. At least two vestibular areas are located at this site: the parietoinsular vestibular cortex (PIVC) and the posterior insular cortex (PIC). It is now well established that activity in sensory systems is subject to cross-modal attention effects. Attending to a stimulus in one sensory modality enhances activity in the corresponding cortical sensory system, but simultaneously suppresses activity in other sensory systems. Here, we wanted to probe whether such cross-modal attention effects also target the vestibular system. To this end, we used a visual multiple-object tracking task. By parametrically varying the number of tracked targets, we could measure the effect of attentional load on the PIVC and the PIC while holding the perceptual load constant. Participants performed the tracking task during functional magnetic resonance imaging. Results show that, compared with passive viewing of object motion, activity during object tracking was suppressed in the PIVC and enhanced in the PIC. Greater attentional load, induced by increasing the number of tracked targets, was associated with a corresponding increase in the suppression of activity in the PIVC. Activity in the anterior part of the PIC decreased with increasing load, whereas load effects were absent in the posterior PIC. Results of a control experiment show that attention-induced suppression in the PIVC is stronger than any suppression evoked by the visual stimulus per se. Overall, our results suggest that attention has a cross-modal modulatory effect on the vestibular cortex during visual object tracking. In this study we investigate cross-modal attention effects in the human vestibular cortex. We applied the visual multiple-object tracking task because it is known to evoke attentional load effects on neural activity in visual motion-processing and attention-processing areas. Here we demonstrate a load-dependent effect of attention on the activation in the vestibular cortex, despite constant visual motion stimulation. We find that activity in the parietoinsular vestibular cortex is more strongly suppressed the greater the attentional load on the visual tracking task. These findings suggest cross-modal attentional modulation in the vestibular cortex. Copyright © 2016 the authors 0270-6474/16/3612720-09$15.00/0.

  10. Cross-modal attention influences auditory contrast sensitivity: Decreasing visual load improves auditory thresholds for amplitude- and frequency-modulated sounds.

    PubMed

    Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G

    2017-03-01

We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower, that is, auditory sensitivity was improved, for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.

  11. Cross-Modal Retrieval With CNN Visual Features: A New Baseline.

    PubMed

    Wei, Yunchao; Zhao, Yao; Lu, Canyi; Wei, Shikui; Liu, Luoqi; Zhu, Zhenfeng; Yan, Shuicheng

    2017-02-01

    Recently, convolutional neural network (CNN) visual features have demonstrated their powerful ability as a universal representation for various recognition tasks. In this paper, cross-modal retrieval with CNN visual features is implemented with several classic methods. Specifically, off-the-shelf CNN visual features are extracted from the CNN model, which is pretrained on ImageNet with more than one million images from 1000 object categories, as a generic image representation to tackle cross-modal retrieval. To further enhance the representational ability of CNN visual features, based on the pretrained CNN model on ImageNet, a fine-tuning step is performed by using the open source Caffe CNN library for each target data set. Besides, we propose a deep semantic matching method to address the cross-modal retrieval problem with respect to samples which are annotated with one or multiple labels. Extensive experiments on five popular publicly available data sets well demonstrate the superiority of CNN visual features for cross-modal retrieval.
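    As a concrete illustration of the off-the-shelf-feature idea, the sketch below extracts pooled CNN features from a pretrained ImageNet model and ranks candidate items from another modality by cosine similarity. It is a simplified stand-in written with PyTorch/torchvision rather than the Caffe pipeline described above; it omits the fine-tuning and deep semantic matching steps, and the text-side embeddings are simply assumed to have been projected into the same space.

```python
# Hedged sketch: generic CNN image features for cross-modal retrieval.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained CNN as a generic feature extractor (final classifier removed).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def image_embedding(path: str) -> torch.Tensor:
    """Return an L2-normalized pooled feature vector for one image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return torch.nn.functional.normalize(backbone(x), dim=1).squeeze(0)

def retrieve(query_vec: torch.Tensor, candidate_vecs: torch.Tensor, k: int = 5):
    """Rank candidate embeddings (assumed to live in the same space) by cosine similarity."""
    scores = candidate_vecs @ query_vec          # unit vectors -> dot product = cosine
    return torch.topk(scores, k).indices
```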

  12. How children use emotional prosody: Crossmodal emotional integration?

    PubMed

    Gil, Sandrine; Hattouti, Jamila; Laval, Virginie

    2016-07-01

    A crossmodal effect has been observed in the processing of facial and vocal emotion in adults and infants. For the first time, we assessed whether this effect is present in childhood by administering a crossmodal task similar to those used in seminal studies featuring emotional faces (i.e., a continuum of emotional expressions running from happiness to sadness: 90% happy, 60% happy, 30% happy, neutral, 30% sad, 60% sad, 90% sad) and emotional prosody (i.e., sad vs. happy). Participants were 5-, 7-, and 9-year-old children and a control group of adult students. The children had a different pattern of results from the adults, with only the 9-year-olds exhibiting the crossmodal effect whatever the emotional condition. These results advance our understanding of emotional prosody processing and the efficiency of crossmodal integration in children and are discussed in terms of a developmental trajectory and factors that may modulate the efficiency of this effect in children. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. Increasing Working Memory Load Reduces Processing of Cross-Modal Task-Irrelevant Stimuli Even after Controlling for Task Difficulty and Executive Capacity

    PubMed Central

    Simon, Sharon S.; Tusch, Erich S.; Holcomb, Phillip J.; Daffner, Kirk R.

    2016-01-01

    The classic account of the load theory (LT) of attention suggests that increasing cognitive load leads to greater processing of task-irrelevant stimuli due to competition for limited executive resources, which reduces the ability to actively maintain current processing priorities. Studies testing this hypothesis have yielded widely divergent outcomes. The inconsistent results may, in part, be related to variability in executive capacity (EC) and task difficulty across subjects in different studies. Here, we used a cross-modal paradigm to investigate whether augmented working memory (WM) load leads to increased early distracter processing, and controlled for the potential confounders of EC and task difficulty. Twenty-three young subjects were engaged in a primary visual WM task, under high and low load conditions, while instructed to ignore irrelevant auditory stimuli. Demands of the high load condition were individually titrated to make task difficulty comparable across subjects with differing EC. Event-related potentials (ERPs) were used to measure neural activity in response to stimuli presented in both the task-relevant modality (visual) and the task-irrelevant modality (auditory). Behavioral results indicate that the load manipulation and titration procedure of the primary visual task were successful. ERPs demonstrated that, in response to visual target stimuli, there was a load-related increase in the posterior slow wave, an index of sustained attention and effort. Importantly, under high load, there was a decrease of the auditory N1 in response to distracters, a marker of early auditory processing. These results suggest that increased WM load is associated with enhanced attentional engagement and protection from distraction in a cross-modal setting, even after controlling for task difficulty and EC. Our findings challenge the classic LT and offer support for alternative models. PMID:27536226

  14. Increasing Working Memory Load Reduces Processing of Cross-Modal Task-Irrelevant Stimuli Even after Controlling for Task Difficulty and Executive Capacity.

    PubMed

    Simon, Sharon S; Tusch, Erich S; Holcomb, Phillip J; Daffner, Kirk R

    2016-01-01

    The classic account of the load theory (LT) of attention suggests that increasing cognitive load leads to greater processing of task-irrelevant stimuli due to competition for limited executive resources, which reduces the ability to actively maintain current processing priorities. Studies testing this hypothesis have yielded widely divergent outcomes. The inconsistent results may, in part, be related to variability in executive capacity (EC) and task difficulty across subjects in different studies. Here, we used a cross-modal paradigm to investigate whether augmented working memory (WM) load leads to increased early distracter processing, and controlled for the potential confounders of EC and task difficulty. Twenty-three young subjects were engaged in a primary visual WM task, under high and low load conditions, while instructed to ignore irrelevant auditory stimuli. Demands of the high load condition were individually titrated to make task difficulty comparable across subjects with differing EC. Event-related potentials (ERPs) were used to measure neural activity in response to stimuli presented in both the task-relevant modality (visual) and the task-irrelevant modality (auditory). Behavioral results indicate that the load manipulation and titration procedure of the primary visual task were successful. ERPs demonstrated that, in response to visual target stimuli, there was a load-related increase in the posterior slow wave, an index of sustained attention and effort. Importantly, under high load, there was a decrease of the auditory N1 in response to distracters, a marker of early auditory processing. These results suggest that increased WM load is associated with enhanced attentional engagement and protection from distraction in a cross-modal setting, even after controlling for task difficulty and EC. Our findings challenge the classic LT and offer support for alternative models.
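
    The "individually titrated" high-load condition mentioned above is not described in detail in this record. Such titration is often done with an adaptive staircase, so the sketch below is a purely hypothetical two-correct-up / one-wrong-down staircase; all parameter values and the simulated-subject model are assumptions, not the study's protocol.

```python
# Hypothetical adaptive staircase for titrating task load toward ~71% accuracy
# (illustrative only; the study's actual titration procedure is not described).
import random

def titrate(n_trials=60, start_load=4, step=1,
            p_correct_at=lambda load: max(0.05, 1.0 - 0.12 * load)):
    """Load increases after two consecutive correct trials and decreases after
    each error, which converges near ~71% correct (transformed up-down rule)."""
    load, correct_streak, history = start_load, 0, []
    for _ in range(n_trials):
        correct = random.random() < p_correct_at(load)  # simulated subject
        history.append(load)
        if correct:
            correct_streak += 1
            if correct_streak == 2:
                load += step
                correct_streak = 0
        else:
            load = max(1, load - step)
            correct_streak = 0
    return sum(history[-20:]) / 20  # estimate of the titrated load level

print(f"Titrated load estimate: {titrate():.1f} items")
```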

  15. Distinct Olfactory Cross-Modal Effects on the Human Motor System

    PubMed Central

    Rossi, Simone; De Capua, Alberto; Pasqualetti, Patrizio; Ulivelli, Monica; Falzarano, Vincenzo; Bartalini, Sabina; Passero, Stefano; Nuti, Daniele

    2008-01-01

    Background: Converging evidence indicates that action observation and action-related sounds cross-modally activate the human motor system. Since olfaction, the most ancestral sense, may have behavioural consequences on human activities, we causally investigated by transcranial magnetic stimulation (TMS) whether food odour could additionally facilitate the human motor system during the observation of grasping objects with alimentary valence, and the degree of specificity of these effects. Methodology/Principal Findings: In a repeated-measures block design, carried out on 24 healthy individuals participating in three different experiments, we show that sniffing alimentary odorants immediately increases the motor potentials evoked in hand muscles by TMS of the motor cortex. This effect was odorant-specific and was absent when subjects were presented with odorants including a potentially noxious trigeminal component. The smell-induced corticospinal facilitation of hand muscles during observation of grasping was an additive effect superimposed on that induced by the mere observation of grasping actions for food or non-food objects. The odour-induced motor facilitation took place only in the case of congruence between the sniffed odour and the observed grasped food, and specifically involved the muscle acting as prime mover for hand/finger shaping in the observed action. Conclusions/Significance: Complex olfactory cross-modal effects on the human corticospinal system are physiologically demonstrable. They are odorant-specific and, depending on the experimental context, muscle- and action-specific as well. This finding implies potential new diagnostic and rehabilitative applications. PMID:18301777

  16. [Ventriloquism and audio-visual integration of voice and face].

    PubMed

    Yokosawa, Kazuhiko; Kanaya, Shoko

    2012-07-01

    Presenting synchronous auditory and visual stimuli in separate locations creates the illusion that the sound originates from the direction of the visual stimulus. Participants' auditory localization bias, called the ventriloquism effect, has revealed factors affecting the perceptual integration of audio-visual stimuli. However, many studies on audio-visual processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. These results cannot necessarily explain our perceptual behavior in natural scenes, where various signals exist within a single sensory modality. In the present study we report the contributions of a cognitive factor, that is, the audio-visual congruency of speech, although this factor has often been underestimated in previous ventriloquism research. Thus, we investigated the contribution of speech congruency on the ventriloquism effect using a spoken utterance and two videos of a talking face. The salience of facial movements was also manipulated. As a result, when bilateral visual stimuli are presented in synchrony with a single voice, cross-modal speech congruency was found to have a significant impact on the ventriloquism effect. This result also indicated that more salient visual utterances attracted participants' auditory localization. The congruent pairing of audio-visual utterances elicited greater localization bias than did incongruent pairing, whereas previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference to auditory localization. This suggests that a greater flexibility in responding to multi-sensory environments exists than has been previously considered.

  17. Content congruency and its interplay with temporal synchrony modulate integration between rhythmic audiovisual streams.

    PubMed

    Su, Yi-Huang

    2014-01-01

    Both lower-level stimulus factors (e.g., temporal proximity) and higher-level cognitive factors (e.g., content congruency) are known to influence multisensory integration. The former can direct attention in a converging manner, and the latter can indicate whether information from the two modalities belongs together. The present research investigated whether and how these two factors interacted in the perception of rhythmic, audiovisual (AV) streams derived from a human movement scenario. Congruency here was based on sensorimotor correspondence pertaining to rhythm perception. Participants attended to bimodal stimuli consisting of a humanlike figure moving regularly to a sequence of auditory beat, and detected a possible auditory temporal deviant. The figure moved either downwards (congruently) or upwards (incongruently) to the downbeat, while in both situations the movement was either synchronous with the beat, or lagging behind it. Greater cross-modal binding was expected to hinder deviant detection. Results revealed poorer detection for congruent than for incongruent streams, suggesting stronger integration in the former. False alarms increased in asynchronous stimuli only for congruent streams, indicating greater tendency for deviant report due to visual capture of asynchronous auditory events. In addition, a greater increase in perceived synchrony was associated with a greater reduction in false alarms for congruent streams, while the pattern was reversed for incongruent ones. These results demonstrate that content congruency as a top-down factor not only promotes integration, but also modulates bottom-up effects of synchrony. Results are also discussed regarding how theories of integration and attentional entrainment may be combined in the context of rhythmic multisensory stimuli.

  18. Neural differences between the processing of musical meaning conveyed by direction of pitch change and natural music in congenital amusia.

    PubMed

    Zhou, Linshu; Liu, Fang; Jing, Xiaoyi; Jiang, Cunmei

    2017-02-01

    Music is a unique communication system for human beings. Iconic musical meaning is one dimension of musical meaning, which emerges from musical information resembling sounds of objects, qualities of objects, or qualities of abstract concepts. The present study investigated whether congenital amusia, a disorder of musical pitch perception, impacts the processing of iconic musical meaning. With a cross-modal semantic priming paradigm, target images were primed by semantically congruent or incongruent musical excerpts, which were characterized by direction (upward or downward) of pitch change (Experiment 1), or were selected from natural music (Experiment 2). Twelve Mandarin-speaking amusics and 12 controls performed a recognition (implicit) and a semantic congruency judgment (explicit) task while their EEG waveforms were recorded. Unlike controls, amusics failed to elicit an N400 effect when musical meaning was represented by direction of pitch change, regardless of the nature of the tasks (implicit versus explicit). However, the N400 effect in response to musical meaning in natural musical excerpts was observed for both the groups in both types of tasks. These results indicate that amusics are able to process iconic musical meaning through multiple acoustic cues in natural musical excerpts, but not through the direction of pitch change. This is the first study to investigate the processing of musical meaning in congenital amusia, providing evidence in support of the "melodic contour deafness hypothesis" with regard to iconic musical meaning processing in this disorder. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Reading in the dark: neural correlates and cross-modal plasticity for learning to read entire words without visual experience.

    PubMed

    Sigalov, Nadine; Maidenbaum, Shachar; Amedi, Amir

    2016-03-01

    Cognitive neuroscience has long attempted to determine the ways in which cortical selectivity develops, and the impact of nature vs. nurture on it. Congenital blindness (CB) offers a unique opportunity to test this question as the brains of blind individuals develop without visual experience. Here we approach this question through the reading network. Several areas in the visual cortex have been implicated as part of the reading network, and one of the main ones among them is the VWFA, which is selective to the form of letters and words. But what happens in the CB brain? On the one hand, it has been shown that cross-modal plasticity leads to the recruitment of occipital areas, including the VWFA, for linguistic tasks. On the other hand, we have recently demonstrated VWFA activity for letters in contrast to other visual categories when the information is provided via other senses such as touch or audition. Which of these tasks is more dominant? By which mechanism does the CB brain process reading? Using fMRI and visual-to-auditory sensory substitution which transfers the topographical features of the letters we compare reading with semantic and scrambled conditions in a group of CB. We found activation in early auditory and visual cortices during the early processing phase (letter), while the later phase (word) showed VWFA and bilateral dorsal-intraparietal activations for words. This further supports the notion that many visual regions in general, even early visual areas, also maintain a predilection for task processing even when the modality is variable and in spite of putative lifelong linguistic cross-modal plasticity. Furthermore, we find that the VWFA is recruited preferentially for letter and word form, while it was not recruited, and even exhibited deactivation, for an immediately subsequent semantic task suggesting that despite only short sensory substitution experience orthographic task processing can dominate semantic processing in the VWFA. On a wider scope, this implies that at least in some cases cross-modal plasticity which enables the recruitment of areas for new tasks may be dominated by sensory independent task specific activation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. The precedence of topological change over top-down attention in masked priming.

    PubMed

    Huang, Yan; Zhou, Tiangang; Chen, Lin

    2011-10-14

    Recent data indicate that unconscious masked priming can be mediated by top-down attentional set, so that priming effects of congruence between a masked prime and a subsequent probe vanish when the congruence ceases to be task relevant. Here, we show that, while the attentional set determines masked priming for color and orientation features, it does not fully determine priming based on the topological properties of stimuli. Specifically, across a series of different choice-RT tasks, we find that topological congruence between prime and probe stimuli affects RTs for the probes even when other stimulus information (e.g., color or orientation) is required for the response, whereas congruence priming effects of color or orientation occur only when these features are response relevant. Our results suggest that changes in topological properties take precedence over task-directed top-down attentional modulation in masked priming.

  1. Linguistic and Perceptual Mapping in Spatial Representations: An Attentional Account.

    PubMed

    Valdés-Conroy, Berenice; Hinojosa, José A; Román, Francisco J; Romero-Ferreiro, Verónica

    2018-03-01

    Building on evidence for embodied representations, we investigated whether Spanish spatial terms map onto the NEAR/FAR perceptual division of space. Using a long horizontal display, we measured congruency effects during the processing of spatial terms presented in NEAR or FAR space. Across three experiments, we manipulated the task demands in order to investigate the role of endogenous attention in linguistic and perceptual space mapping. We predicted congruency effects only when spatial properties were relevant for the task (reaching estimation task, Experiment 1) but not when attention was allocated to other features (lexical decision, Experiment 2; and color, Experiment 3). Results showed faster responses for words presented in Near-space in all experiments. Consistent with our hypothesis, congruency effects were observed only when a reaching estimate was requested. Our results add important evidence for the role of top-down processing in congruency effects from embodied representations of spatial terms. Copyright © 2017 Cognitive Science Society, Inc.

  2. Pain Anxiety and Its Association With Pain Congruence Trajectories During the Cold Pressor Task.

    PubMed

    Clark, Shannon M; Cano, Annmarie; Goubert, Liesbet; Vlaeyen, Johan W S; Wurm, Lee H; Corley, Angelia M

    2017-04-01

    Incongruence of pain severity ratings between people experiencing pain and their observers has been linked to psychological distress. Previous studies have measured pain rating congruence through static self-report, involving a single rating of pain; however, this method does not capture changes in ratings over time. The present study examined the extent to which partners were congruent on multiple ratings of a participant's pain severity during the cold pressor task. Furthermore, 2 components of pain anxiety (pain catastrophizing and perceived threat) were examined as predictors of pain congruence. Undergraduate couples in a romantic relationship (N = 127 dyads) participated in this study. Both partners completed measures of pain catastrophizing and perceived threat before randomization to their cold pressor participant or observer roles. Participants and observers rated the participant's pain in writing several times over the course of the task. On average, observers rated participants' pain as less severe than participants rated their own pain. In addition, congruence between partners increased over time because observers' ratings became more similar to participants' ratings. Finally, pain catastrophizing and perceived threat independently and jointly influenced the degree to which partners similarly rated the participant's pain. This article presents a novel application of the cold pressor task to show that pain rating congruence among romantic partners changes over time. These findings indicate that pain congruence is not static and is subject to pain anxiety in both partners. Copyright © 2016 American Pain Society. Published by Elsevier Inc. All rights reserved.

  3. Does working memory capacity predict cross-modally induced failures of awareness?

    PubMed

    Kreitz, Carina; Furley, Philip; Simons, Daniel J; Memmert, Daniel

    2016-01-01

    People often fail to notice unexpected stimuli when they are focusing attention on another task. Most studies of this phenomenon address visual failures induced by visual attention tasks (inattentional blindness). Yet, such failures also occur within audition (inattentional deafness), and people can even miss unexpected events in one sensory modality when focusing attention on tasks in another modality. Such cross-modal failures are revealing because they suggest the existence of a common, central resource limitation. And, such central limits might be predicted from individual differences in cognitive capacity. We replicated earlier evidence, establishing substantial rates of inattentional deafness during a visual task and inattentional blindness during an auditory task. However, neither individual working memory capacity nor the ability to perform the primary task predicted noticing in either modality. Thus, individual differences in cognitive capacity did not predict failures of awareness even though the failures presumably resulted from central resource limitations. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Effects of spatial congruency on saccade and visual discrimination performance in a dual-task paradigm.

    PubMed

    Moehler, Tobias; Fiehler, Katja

    2014-12-01

    The present study investigated the coupling of selection-for-perception and selection-for-action during saccadic eye movement planning in three dual-task experiments. We focused on the effects of spatial congruency of saccade target (ST) location and discrimination target (DT) location and the time between ST-cue and Go-signal (SOA) on saccadic eye movement performance. In two experiments, participants performed a visual discrimination task at a cued location while programming a saccadic eye movement to a cued location. In the third experiment, the discrimination task was not cued and appeared at a random location. Spatial congruency of ST-location and DT-location resulted in enhanced perceptual performance irrespective of SOA. Perceptual performance in spatially incongruent trials was above chance, but only when the DT-location was cued. Saccade accuracy and precision were also affected by spatial congruency showing superior performance when the ST- and DT-location coincided. Saccade latency was only affected by spatial congruency when the DT-cue was predictive of the ST-location. Moreover, saccades consistently curved away from the incongruent DT-locations. Importantly, the effects of spatial congruency on saccade parameters only occurred when the DT-location was cued; therefore, results from experiments 1 and 2 are due to the endogenous allocation of attention to the DT-location and not caused by the salience of the probe. The SOA affected saccade latency showing decreasing latencies with increasing SOA. In conclusion, our results demonstrate that visuospatial attention can be voluntarily distributed upon spatially distinct perceptual and motor goals in dual-task situations, resulting in a decline of visual discrimination and saccade performance.

  5. Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans.

    PubMed

    Putzar, Lisa; Gondan, Matthias; Röder, Brigitte

    2012-01-01

    People treated for bilateral congenital cataracts offer a model to study the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating an integrated processing of modality-specific information. This finding is in contrast with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple, short stimuli do not depend on the availability of visual and/or crossmodal input from birth.

  6. Different Levels of Learning Interact to Shape the Congruency Sequence Effect

    ERIC Educational Resources Information Center

    Weissman, Daniel H.; Hawks, Zoë W.; Egner, Tobias

    2016-01-01

    The congruency effect in distracter interference tasks is often reduced after incongruent relative to congruent trials. Moreover, this "congruency sequence effect" (CSE) is influenced by learning related to concrete stimulus and response features as well as by learning related to abstract cognitive control processes. There is an ongoing…

  7. Attention Modulation by Proportion Congruency: The Asymmetrical List Shifting Effect

    ERIC Educational Resources Information Center

    Abrahamse, Elger L.; Duthoo, Wout; Notebaert, Wim; Risko, Evan F.

    2013-01-01

    Proportion congruency effects represent hallmark phenomena in current theorizing about cognitive control. This is based on the notion that proportion congruency determines the relative levels of attention to relevant and irrelevant information in conflict tasks. However, little empirical evidence exists that uniquely supports such an attention…

  8. Arousal Rules: An Empirical Investigation into the Aesthetic Experience of Cross-Modal Perception with Emotional Visual Music

    PubMed Central

    Lee, Irene Eunyoung; Latchoumane, Charles-Francois V.; Jeong, Jaeseung

    2017-01-01

    Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible underlying mechanisms of affective evaluation mechanisms in relation to audio and visual integration conditions between modalities (namely congruent, complementation, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (−) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted in subjects' emotional responses) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects. PMID:28421007

  9. Arousal Rules: An Empirical Investigation into the Aesthetic Experience of Cross-Modal Perception with Emotional Visual Music.

    PubMed

    Lee, Irene Eunyoung; Latchoumane, Charles-Francois V; Jeong, Jaeseung

    2017-01-01

    Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible underlying mechanisms of affective evaluation mechanisms in relation to audio and visual integration conditions between modalities (namely congruent, complementation, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (-) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted in subjects' emotional responses) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects.
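
    The frontal alpha power asymmetry used above as a proposed valence index is, in the EEG literature, typically computed as the difference of log alpha power between homologous right and left frontal electrodes. The snippet below is a minimal illustration of that standard computation only; the channel names, sampling rate, band limits and random stand-in signals are assumptions, not the study's recording or analysis parameters.

```python
# Hypothetical sketch: frontal alpha-power asymmetry as a coarse valence index
# (illustrative only; channel names, sampling rate and band limits are assumed).
import numpy as np
from scipy.signal import welch

fs = 250.0                       # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
f3 = np.random.randn(t.size)     # stand-in left frontal channel (F3)
f4 = np.random.randn(t.size)     # stand-in right frontal channel (F4)

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Mean power spectral density in the alpha band via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Asymmetry index: ln(right alpha) - ln(left alpha); higher values are often
# read in the asymmetry literature as relatively greater approach/positive affect.
asymmetry = np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))
print(f"Frontal alpha asymmetry (F4 - F3, log power): {asymmetry:.3f}")
```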

  10. Functionally segregated neural substrates for arbitrary audiovisual paired-association learning.

    PubMed

    Tanabe, Hiroki C; Honda, Manabu; Sadato, Norihiro

    2005-07-06

    To clarify the neural substrates and their dynamics during crossmodal association learning, we conducted functional magnetic resonance imaging (MRI) during audiovisual paired-association learning of delayed matching-to-sample tasks. Thirty subjects were involved in the study; 15 performed an audiovisual paired-association learning task, and the remainder completed a control visuo-visual task. Each trial consisted of the successive presentation of a pair of stimuli. Subjects were asked to identify predefined audiovisual or visuo-visual pairs by trial and error. Feedback for each trial was given regardless of whether the response was correct or incorrect. During the delay period, several areas showed an increase in the MRI signal as learning proceeded: crossmodal activity increased in unimodal areas corresponding to visual or auditory areas, and polymodal responses increased in the occipitotemporal junction and parahippocampal gyrus. This pattern was not observed in the visuo-visual intramodal paired-association learning task, suggesting that crossmodal associations might be formed by binding unimodal sensory areas via polymodal regions. In both the audiovisual and visuo-visual tasks, the MRI signal in the superior temporal sulcus (STS) in response to the second stimulus and feedback peaked during the early phase of learning and then decreased, indicating that the STS might be key to the creation of paired associations, regardless of stimulus type. In contrast to the activity changes in the regions discussed above, there was constant activity in the frontoparietal circuit during the delay period in both tasks, implying that the neural substrates for the formation and storage of paired associates are distinct from working memory circuits.

  11. Congruency sequence effect without feature integration and contingency learning.

    PubMed

    Kim, Sanga; Cho, Yang Seok

    2014-06-01

    The magnitude of congruency effects, such as the flanker-compatibility effects, has been found to vary as a function of the congruency of the previous trial. Some studies have suggested that this congruency sequence effect is attributable to stimulus and/or response priming, and/or contingency learning, whereas other studies have suggested that the control process triggered by conflict modulates the congruency effect. The present study examined whether sequential modulation can occur without stimulus and response repetitions and contingency learning. Participants were asked to perform two color flanker-compatibility tasks alternately in a trial-by-trial manner, with four fingers of one hand in Experiment 1 and with the index and middle fingers of two hands in Experiment 2, to avoid stimulus and response repetitions and contingency learning. A significant congruency sequence effect was obtained between the congruencies of the two tasks in Experiment 1 but not in Experiment 2. These results provide evidence for the idea that the sequential modulation is, at least in part, an outcome of the top-down control process triggered by conflict, which is specific to response mode. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Categorization difficulty modulates the mediated route for response selection in task switching.

    PubMed

    Schneider, Darryl W

    2017-12-22

    Conflict during response selection in task switching is indicated by the response congruency effect: worse performance for incongruent targets (requiring different responses across tasks) than for congruent targets (requiring the same response). The effect can be explained by dual-task processing in a mediated route for response selection, whereby targets are categorized with respect to both tasks. In the present study, the author tested predictions for the modulation of response congruency effects by categorization difficulty derived from a relative-speed-of-processing hypothesis. Categorization difficulty was manipulated for the relevant and irrelevant task dimensions in a novel spatial task-switching paradigm that involved judging the locations of target dots in a grid, without repetition of dot configurations. Response congruency effects were observed and they varied systematically with categorization difficulty (e.g., being larger when irrelevant categorization was easy than when it was hard). These results are consistent with the relative-speed-of-processing hypothesis and suggest that task-switching models that implement variations of the mediated route for response selection need to address the time course of categorization.

  13. Between- and within-Ear Congruency and Laterality Effects in an Auditory Semantic/Emotional Prosody Conflict Task

    ERIC Educational Resources Information Center

    Techentin, Cheryl; Voyer, Daniel; Klein, Raymond M.

    2009-01-01

    The present study investigated the influence of within- and between-ear congruency on interference and laterality effects in an auditory semantic/prosodic conflict task. Participants were presented dichotically with words (e.g., mad, sad, glad) pronounced in either congruent or incongruent emotional tones (e.g., angry, happy, or sad) and…

  14. Conditional automaticity in subliminal morphosyntactic priming.

    PubMed

    Ansorge, Ulrich; Reynvoet, Bert; Hendler, Jessica; Oettl, Lennart; Evert, Stefan

    2013-07-01

    We used a gender-classification task to test the principles of subliminal morphosyntactic priming. In Experiment 1, masked, subliminal feminine or masculine articles were used as primes. They preceded a visible target noun. Subliminal articles either had a morphosyntactically congruent or incongruent gender with the targets. In a gender-classification task of the target nouns, subliminal articles primed the responses: responses were faster in congruent than incongruent conditions (Experiment 1). In Experiment 2, we tested whether this congruence effect depended on gender relevance. In line with a relevance-dependence, the congruence effect only occurred in a gender-classification task but was absent in another categorical discrimination of the target nouns (Experiment 2). The congruence effect also depended on correct word order. It was diminished when nouns preceded articles (Experiment 3). Finally, the congruence effect was replicated with a larger set of targets but only for masculine targets (Experiment 4). Results are discussed in light of theories of subliminal priming in general and of subliminal syntactic priming in particular.

  15. Revealing List-Level Control in the Stroop Task by Uncovering Its Benefits and a Cost

    PubMed Central

    Bugg, Julie M.; McDaniel, Mark A.; Scullin, Michael K.; Braver, Todd S.

    2012-01-01

    Interference is reduced in mostly incongruent relative to mostly congruent lists. Classic accounts of this list-wide proportion congruence effect assume that list-level control processes strategically modulate word reading. Contemporary accounts posit that reliance on the word is modulated poststimulus onset by item-specific information (e.g., proportion congruency of the word). To adjudicate between these accounts, we used novel designs featuring neutral trials. In two experiments, we showed that the list-wide proportion congruence effect is accompanied by a change in neutral trial color-naming performance. Because neutral words have no item-specific bias, this pattern can be attributed to list-level control. Additionally, we showed that list-level attenuation of word reading led to a cost to performance on a secondary prospective memory task but only when that task required processing of the irrelevant, neutral word. These findings indicate that the list-wide proportion congruence effect at least partially reflects list-level control and challenge purely item-specific accounts of this effect. PMID:21767049

  16. Behold the voice of wrath: cross-modal modulation of visual attention by anger prosody.

    PubMed

    Brosch, Tobias; Grandjean, Didier; Sander, David; Scherer, Klaus R

    2008-03-01

    Emotionally relevant stimuli are prioritized in human information processing. It has repeatedly been shown that selective spatial attention is modulated by the emotional content of a stimulus. Until now, studies investigating this phenomenon have only examined within-modality effects, most frequently using pictures of emotional stimuli to modulate visual attention. In this study, we used simultaneously presented utterances with emotional and neutral prosody as cues for a visually presented target in a cross-modal dot probe task. Response times towards targets were faster when they appeared at the location of the source of the emotional prosody. Our results show for the first time a cross-modal attentional modulation of visual attention by auditory affective prosody.

  17. The time course of episodic associative retrieval: electrophysiological correlates of cued recall of unimodal and crossmodal pair-associate learning.

    PubMed

    Tibon, Roni; Levy, Daniel A

    2014-03-01

    Little is known about the time course of processes supporting episodic cued recall. To examine these processes, we recorded event-related scalp electrical potentials during episodic cued recall following pair-associate learning of unimodal object-picture pairs and crossmodal object-picture and sound pairs. Successful cued recall of unimodal associates was characterized by markedly early scalp potential differences over frontal areas, while cued recall of both unimodal and crossmodal associates were reflected by subsequent differences recorded over frontal and parietal areas. Notably, unimodal cued recall success divergences over frontal areas were apparent in a time window generally assumed to reflect the operation of familiarity but not recollection processes, raising the possibility that retrieval success effects in that temporal window may reflect additional mnemonic processes beyond familiarity. Furthermore, parietal scalp potential recall success differences, which did not distinguish between crossmodal and unimodal tasks, seemingly support attentional or buffer accounts of posterior parietal mnemonic function but appear to constrain signal accumulation, expectation, or representational accounts.

  18. Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness.

    PubMed

    Ding, Hao; Qin, Wen; Liang, Meng; Ming, Dong; Wan, Baikun; Li, Qiang; Yu, Chunshui

    2015-09-01

    Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and that activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness < 2 years) and 40 age- and gender-matched hearing controls underwent functional magnetic resonance imaging during a visuo-spatial delayed recognition task that consisted of encoding, maintenance and recognition stages. The early deaf subjects exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects. These areas are involved in visuo-spatial working memory. Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and were negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may inhibit cross-modal reorganization in early deaf subjects. Granger causality analysis revealed that, compared to the hearing controls, the deaf subjects had an enhanced net causal flow from the frontal eye field to the superior temporal gyrus. These findings indicate that a top-down mechanism may better account for the cross-modal activation of auditory regions in early deaf subjects. See MacSweeney and Cardin (doi:10.1093/brain/awv197) for a scientific commentary on this article. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
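
    The "net causal flow" reported in this record rests on Granger causality, which asks whether the past of one region's time series improves prediction of another's beyond its own past. As a generic, hypothetical illustration only (synthetic signals and an arbitrary lag order, not the study's fMRI analysis), a bivariate test can be run as follows:

```python
# Hypothetical sketch: bivariate Granger causality between two ROI time series
# (synthetic data and an arbitrary lag order; not the study's actual analysis).
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 500
fef = rng.standard_normal(n)          # stand-in frontal eye field time series
stg = np.zeros(n)
for i in range(2, n):                 # STG weakly driven by lagged FEF values
    stg[i] = 0.4 * stg[i - 1] + 0.3 * fef[i - 2] + rng.standard_normal()

# Column order matters: the test asks whether the second column (FEF)
# Granger-causes the first column (STG).
data = np.column_stack([stg, fef])
results = grangercausalitytests(data, maxlag=3)
# results[lag][0]['ssr_ftest'] holds (F, p-value, df_denom, df_num) per lag.
```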

  19. Modality independence of order coding in working memory: Evidence from cross-modal order interference at recall.

    PubMed

    Vandierendonck, André

    2016-01-01

    Working memory researchers do not agree on whether order in serial recall is encoded by dedicated modality-specific systems or by a more general modality-independent system. Although previous research supports the existence of autonomous modality-specific systems, it has been shown that serial recognition memory is prone to cross-modal order interference by concurrent tasks. The present study used a serial recall task, which was performed in a single-task condition and in a dual-task condition with an embedded memory task in the retention interval. The modality of the serial task was either verbal or visuospatial, and the embedded tasks were in the other modality and required either serial or item recall. Care was taken to avoid modality overlaps during presentation and recall. In Experiment 1, visuospatial but not verbal serial recall was more impaired when the embedded task was an order than when it was an item task. Using a more difficult verbal serial recall task, verbal serial recall was also more impaired by another order recall task in Experiment 2. These findings are consistent with the hypothesis of modality-independent order coding. The implications for views on short-term recall and the multicomponent view of working memory are discussed.

  20. Visual cortex activation in late-onset, Braille naive blind individuals: an fMRI study during semantic and phonological tasks with heard words.

    PubMed

    Burton, Harold; McLaren, Donald G

    2006-01-09

    Visual cortex activity in the blind has been shown in Braille literate people, which raises the question of whether Braille literacy influences cross-modal reorganization. We used fMRI to examine visual cortex activation during semantic and phonological tasks with auditory presentation of words in two late-onset blind individuals who lacked Braille literacy. Multiple visual cortical regions were activated in the Braille naive individuals. Positive BOLD responses were noted in lower tier visuotopic (e.g., V1, V2, VP, and V3) and several higher tier visual areas (e.g., V4v, V8, and BA 37). Activity was more extensive and cross-correlation magnitudes were greater during the semantic compared to the phonological task. These results with Braille naive individuals plausibly suggest that visual deprivation alone induces visual cortex reorganization. Cross-modal reorganization of lower tier visual areas may be recruited by developing skills in attending to selected non-visual inputs (e.g., Braille literacy, enhanced auditory skills). Such learning might strengthen remote connections with multisensory cortical areas. Of necessity, the Braille naive participants must attend to auditory stimulation for language. We hypothesize that learning to attend to non-visual inputs probably strengthens the remaining active synapses following visual deprivation and thereby increases cross-modal activation of lower tier visual areas when performing highly demanding non-visual tasks, of which reading Braille is just one example.

  1. Visual cortex activation in late-onset, Braille naive blind individuals: An fMRI study during semantic and phonological tasks with heard words

    PubMed Central

    Burton, Harold; McLaren, Donald G.

    2013-01-01

    Visual cortex activity in the blind has been shown in Braille literate people, which raises the question of whether Braille literacy influences cross-modal reorganization. We used fMRI to examine visual cortex activation during semantic and phonological tasks with auditory presentation of words in two late-onset blind individuals who lacked Braille literacy. Multiple visual cortical regions were activated in the Braille naive individuals. Positive BOLD responses were noted in lower tier visuotopic (e.g., V1, V2, VP, and V3) and several higher tier visual areas (e.g., V4v, V8, and BA 37). Activity was more extensive and cross-correlation magnitudes were greater during the semantic compared to the phonological task. These results with Braille naive individuals plausibly suggest that visual deprivation alone induces visual cortex reorganization. Cross-modal reorganization of lower tier visual areas may be recruited by developing skills in attending to selected non-visual inputs (e.g., Braille literacy, enhanced auditory skills). Such learning might strengthen remote connections with multisensory cortical areas. Of necessity, the Braille naive participants must attend to auditory stimulation for language. We hypothesize that learning to attend to non-visual inputs probably strengthens the remaining active synapses following visual deprivation and thereby increases cross-modal activation of lower tier visual areas when performing highly demanding non-visual tasks, of which reading Braille is just one example. PMID:16198053

  2. Assessing of imagined and real expanded Timed Up and Go tests in patients with chronic stroke: A case-control study.

    PubMed

    Geiger, Maxime; Bonnyaud, Céline; Bussel, Bernard; Roche, Nicolas

    2018-05-08

    To assess temporal congruence (the difference between executed performance time and imagined time) between the sub-tasks of the Expanded Timed Up and Go (ETUG) and imagined ETUG (iETUG) tests in patients with hemiparesis following unilateral hemispheric stroke, and to compare the results with those of healthy subjects. Case-control study. Subjects/patients: Twenty patients with chronic stroke and 20 healthy subjects. TUG, ETUG and iETUG test performance times were recorded for all participants. Temporal congruence was calculated with the following formula: (ETUG-iETUG)/[(ETUG+iETUG)/2]*100. Patients' performances were slower than those of healthy subjects for all 5 sub-tasks of the TUG, ETUG and iETUG tests. However, there was no significant difference in temporal congruence between healthy subjects and patients. Intragroup analysis showed significant differences between the executed and the imagined conditions for both groups for the "walking", "turn around" and "sitting" phases (healthy subjects p = 0.01, p = 0.03, p = 0.03, and patients p = 0.01, p = 0.003, p = 0.003, respectively). Temporal congruence was similar for healthy subjects and patients for all sub-tasks of the ETUG test. Moreover, temporal congruence was reduced for the same sub-tasks of the ETUG test in patients and healthy subjects. This suggests that motor imagery involved the same cerebral structures in both groups, probably including the cerebellum, since it was intact in all patients.
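
    The temporal congruence index above is simply the executed-imagined time difference expressed as a percentage of the mean of the two times. With made-up durations (not values from the study), the computation works out as follows:

```python
# Worked example of the temporal congruence index (made-up durations, in seconds).
def temporal_congruence(etug, ietug):
    """(ETUG - iETUG) / [(ETUG + iETUG) / 2] * 100, per the formula above."""
    return (etug - ietug) / ((etug + ietug) / 2) * 100

# Executed sub-task took 14 s, imagined version 12 s: index is about +15.4%,
# i.e., execution was slower than imagination for this (hypothetical) sub-task.
print(f"{temporal_congruence(etug=14.0, ietug=12.0):.1f}%")
```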

  3. Effect of Syllable Congruency in Sixth Graders in the Lexical Decision Task with Masked Priming

    ERIC Educational Resources Information Center

    Chetail, Fabienne; Mathey, Stephanie

    2012-01-01

    The aim of this study was to investigate the role of the syllable in visual recognition of French words in Grade 6. To do so, the syllabic congruency effect was examined in the lexical decision task combined with masked priming. Target words were preceded by pseudoword primes sharing the first letters that either corresponded to the syllable…

  4. When congruence breeds preference: the influence of selective attention processes on evaluative conditioning.

    PubMed

    Blask, Katarina; Walther, Eva; Frings, Christian

    2017-09-01

    We investigated in two experiments whether selective attention processes modulate evaluative conditioning (EC). Based on the fact that the typical stimuli in an EC paradigm involve an affect-laden unconditioned stimulus (US) and a neutral conditioned stimulus (CS), we started from the assumption that learning might depend in part upon selective attention to the US. Attention to the US was manipulated by including a variant of the Eriksen flanker task in the EC paradigm. Similarly to the original Flanker paradigm, we implemented a target-distracter logic by introducing the CS as the task-relevant stimulus (i.e. the target) to which the participants had to respond and the US as a task-irrelevant distracter. Experiment 1 showed that CS-US congruence modulated EC if the CS had to be selected against the US. Specifically, EC was more pronounced for congruent CS-US pairs as compared to incongruent CS-US pairs. Experiment 2 disentangled CS-US congruence and CS-US compatibility and suggested that it is indeed CS-US stimulus congruence rather than CS-US response compatibility that modulates EC.

  5. The neural basis of visual dominance in the context of audio-visual object processing.

    PubMed

    Schmid, Carmen; Büchel, Christian; Rose, Michael

    2011-03-01

    Visual dominance refers to the observation that in bimodal environments vision often has an advantage over other senses in humans. Therefore, a better memory performance for visual compared to, e.g., auditory material is assumed. However, the reason for this preferential processing and its relation to memory formation are largely unknown. In this fMRI experiment, we manipulated cross-modal competition and attention, two factors that both modulate bimodal stimulus processing and can affect memory formation. Pictures and sounds of objects were presented simultaneously in two levels of recognisability, thus manipulating the amount of cross-modal competition. Attention was manipulated via task instruction and directed either to the visual or the auditory modality. The factorial design allowed a direct comparison of the effects between both modalities. The resulting memory performance showed that visual dominance was limited to a distinct task setting. Visual object memory was superior to auditory object memory only when attention was allocated towards the competing modality. During encoding, cross-modal competition and attention towards the opponent domain reduced fMRI signals in both neural systems, but cross-modal competition was more pronounced in the auditory system, and only in auditory cortex was this competition further modulated by attention. Furthermore, neural activity reduction in auditory cortex during encoding was closely related to the behavioural auditory memory impairment. These results indicate that visual dominance emerges from a less pronounced vulnerability of the visual system against competition from the auditory domain. Copyright © 2010 Elsevier Inc. All rights reserved.

  6. Audio-Visual, Visuo-Tactile and Audio-Tactile Correspondences in Preschoolers.

    PubMed

    Nava, Elena; Grassi, Massimo; Turati, Chiara

    2016-01-01

    Interest in crossmodal correspondences has recently seen a renaissance thanks to numerous studies in human adults. Yet, still very little is known about crossmodal correspondences in children, particularly in sensory pairings other than audition and vision. In the current study, we investigated whether 4-5-year-old children match auditory pitch to the spatial motion of visual objects (audio-visual condition). In addition, we investigated whether this correspondence extends to touch, i.e., whether children also match auditory pitch to the spatial motion of touch (audio-tactile condition) and the spatial motion of visual objects to touch (visuo-tactile condition). In two experiments, two different groups of children were asked to indicate which of two stimuli fitted best with a centrally located third stimulus (Experiment 1), or to report whether two presented stimuli fitted together well (Experiment 2). We found sensitivity to the congruency of all of the sensory pairings only in Experiment 2, suggesting that only under specific circumstances can these correspondences be observed. Our results suggest that pitch-height correspondences for audio-visual and audio-tactile combinations may still be weak in preschool children, and speculate that this could be due to immature linguistic and auditory cues that are still developing at age five.

  7. The congruency sequence effect 3.0: a critical test of conflict adaptation.

    PubMed

    Duthoo, Wout; Abrahamse, Elger L; Braem, Senne; Boehler, C Nico; Notebaert, Wim

    2014-01-01

    Over the last two decades, the congruency sequence effect (CSE), the finding of a reduced congruency effect following incongruent trials in conflict tasks, has played a central role in advancing research on cognitive control. According to the influential conflict-monitoring account, the CSE reflects adjustments in selective attention that enhance task focus when needed, often termed conflict adaptation. However, this dominant interpretation of the CSE has been called into question by several alternative accounts that stress the role of episodic memory processes: feature binding and (stimulus-response) contingency learning. To evaluate the notion of conflict adaptation in accounting for the CSE, we constructed versions of three widely used experimental paradigms (the colour-word Stroop, picture-word Stroop and flanker task) that effectively control for feature binding and contingency learning. Results revealed that a CSE can emerge in all three tasks. This strongly suggests a contribution of attentional control to the CSE and highlights the potential of these unprecedentedly clean paradigms for further examining cognitive control.

  8. Cross-modal interaction between visual and olfactory learning in Apis cerana.

    PubMed

    Zhang, Li-Zhen; Zhang, Shao-Wu; Wang, Zi-Long; Yan, Wei-Yu; Zeng, Zhi-Jiang

    2014-10-01

    The power of the small honeybee brain in carrying out behavioral and cognitive tasks has been shown repeatedly to be highly impressive. The present study investigates, for the first time, the cross-modal interaction between visual and olfactory learning in Apis cerana. To explore the role and molecular mechanisms of cross-modal learning in A. cerana, the honeybees were trained and tested in a modified Y-maze with seven visual and five olfactory stimuli, where a robust visual threshold for a black/white grating (period of 2.8°-3.8°) and a relative olfactory threshold (concentration of 50-25%) were obtained. Meanwhile, the expression levels of five genes (AcCREB, Acdop1, Acdop2, Acdop3, Actyr1) related to learning and memory were analyzed under different training conditions by real-time RT-PCR. The experimental results indicate that A. cerana could exhibit cross-modal interactions between visual and olfactory learning by reducing the threshold level of the conditioning stimuli, and that these genes may play important roles in the learning process of honeybees.

  9. Effects of expectation congruency on event-related potentials (ERPs) to facial expressions depend on cognitive load during the expectation phase.

    PubMed

    Lin, Huiyan; Schulz, Claudia; Straube, Thomas

    2016-10-01

    Previous studies have shown that event-related potentials (ERPs) to facial expressions are modulated by expectation (congruency) and that the ERP effects of expectation congruency are altered by cognitive tasks during the expectation phase. However, it is as yet unknown whether the congruency ERP effects can be modulated by the amount of cognitive load during the expectation phase. To address this question, electroencephalogram (EEG) was acquired while participants viewed fearful and neutral facial expressions. Before the presentation of each facial expression, a cue indicating the expression of the upcoming face was presented, followed by an expectation interval without any cues. Facial expressions were congruent with the cues in 75% of all trials. During the expectation interval, participants had to solve a cognitive task, in which several letters were presented for target letter detection. The letters were all the same under low load, but differed under high load. ERP results showed that the amount of cognitive load during the expectation phase altered the congruency effect in N2 and EPN amplitudes for fearful faces. Congruent as compared to incongruent fearful expressions elicited larger N2 and smaller EPN amplitudes under low load, but these congruency effects were not observed under high load. For neutral faces, a congruency effect in late positive potential (LPP) amplitudes was modulated by cognitive load during the expectation phase. The LPP was more positive for incongruent as compared to congruent faces under low load, but the congruency effect was not evident under high load. The findings indicate that congruency effects on ERPs are modulated by the amount of cognitive load during the expectation phase and that this modulation is altered by facial expression. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Cross-modal cueing effects of visuospatial attention on conscious somatosensory perception.

    PubMed

    Doruk, Deniz; Chanes, Lorena; Malavera, Alejandra; Merabet, Lotfi B; Valero-Cabré, Antoni; Fregni, Felipe

    2018-04-01

    The impact of visuospatial attention on perception with supraliminal stimuli and stimuli at the threshold of conscious perception has been previously investigated. In this study, we assess the cross-modal effects of visuospatial attention on conscious perception for near-threshold somatosensory stimuli applied to the face. Fifteen healthy participants completed two sessions of a near-threshold cross-modality cue-target discrimination/conscious detection paradigm. Each trial began with an endogenous visuospatial cue that predicted the location of a weak near-threshold electrical pulse delivered to the right or left cheek with high probability (∼75%). Participants then completed two tasks: first, a forced-choice somatosensory discrimination task (felt once or twice?) and then, a somatosensory conscious detection task (did you feel the stimulus and, if yes, where (left/right)?). Somatosensory discrimination was evaluated with the reaction times of correctly detected targets, whereas the somatosensory conscious detection was quantified using perceptual sensitivity (d') and response bias (beta). A 2 × 2 repeated measures ANOVA was used for statistical analysis. In the somatosensory discrimination task (1st task), participants were significantly faster in responding to correctly detected targets preceded by spatially valid cues (p < 0.001). In the somatosensory conscious detection task (2nd task), a significant effect of visuospatial attention on response bias (p = 0.008) was observed, suggesting that participants had a less strict criterion for stimuli preceded by spatially valid than invalid visuospatial cues. We showed that spatial attention has the potential to modulate the discrimination and the conscious detection of near-threshold somatosensory stimuli as measured, respectively, by a reduction of reaction times and a shift in response bias toward less conservative responses when the cue predicted stimulus location. A shift in response bias indicates possible effects of spatial attention on internal decision processes. The lack of significant results in perceptual sensitivity (d') could be due to weaker effects of endogenous attention on perception.
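
    For readers unfamiliar with the signal-detection measures used above, the hedged sketch below shows how perceptual sensitivity (d') and response bias (beta) can be computed from hit and false-alarm rates; the example rates are hypothetical and this is not the authors' analysis code.

        # Hedged sketch: d' and beta from hit and false-alarm rates (hypothetical values).
        from math import exp
        from scipy.stats import norm

        def dprime_and_beta(hit_rate, fa_rate):
            """Perceptual sensitivity (d') and response bias (beta) via z-transformed rates."""
            z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
            d_prime = z_hit - z_fa
            beta = exp((z_fa ** 2 - z_hit ** 2) / 2.0)   # likelihood ratio at the criterion
            return d_prime, beta

        # Hypothetical pattern: valid cues lower the criterion (smaller beta)
        # without necessarily changing sensitivity.
        print(dprime_and_beta(hit_rate=0.70, fa_rate=0.30))   # e.g. validly cued trials
        print(dprime_and_beta(hit_rate=0.60, fa_rate=0.20))   # e.g. invalidly cued trials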

  11. Unimodal and crossmodal working memory representations of visual and kinesthetic movement trajectories.

    PubMed

    Seemüller, Anna; Fiehler, Katja; Rösler, Frank

    2011-01-01

    The present study investigated whether visual and kinesthetic stimuli are stored as multisensory or modality-specific representations in unimodal and crossmodal working memory tasks. To this end, angle-shaped movement trajectories were presented to 16 subjects in delayed matching-to-sample tasks either visually or kinesthetically during encoding and recognition. During the retention interval, a secondary visual or kinesthetic interference task was inserted either immediately or with a delay after encoding. The modality of the interference task interacted significantly with the encoding modality. After visual encoding, memory was more impaired by a visual than by a kinesthetic secondary task, while after kinesthetic encoding the pattern was reversed. The time at which the secondary task had to be performed interacted with the encoding modality as well. For visual encoding, memory was more impaired when the secondary task had to be performed at the beginning of the retention interval. In contrast, memory after kinesthetic encoding was more affected when the secondary task was introduced later in the retention interval. The findings suggest that working memory traces are maintained in a modality-specific format characterized by distinct consolidation processes that take longer after kinesthetic than after visual encoding. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. Is there a general task switching ability?

    PubMed

    Yehene, Einat; Meiran, Nachshon

    2007-11-01

    Participants were tested on two analogous task switching paradigms involving Shape/Size tasks and Vertical/Horizontal tasks, respectively, and three measures of psychometric intelligence, tapping fluid, crystallized and perceptual speed abilities. The paradigms produced similar patterns of group mean reaction times (RTs) and the vast majority of the participants showed switching cost (switch RT minus repeat RT), mixing cost (repeat RT minus single-task RT) and congruency effects. The shared intra-individual variance across paradigms and with psychometric intelligence served as criteria for general ability. Structural equations modeling indicated that switching cost with ample preparation ("residual cost") and mixing cost met these criteria. However, switching cost with little preparation and congruency effects were predominantly paradigm specific.

  13. Are we on the same page? The performance effects of congruence between supervisor and group trust.

    PubMed

    Carter, Min Z; Mossholder, Kevin W

    2015-09-01

    Taking a multiple-stakeholder perspective, we examined the effects of supervisor-work group trust congruence on groups' task and contextual performance using a polynomial regression and response surface analytical framework. We expected motivation experienced by work groups to mediate the positive influence of trust congruence on performance. Although hypothesized congruence effects on performance were more strongly supported for affective rather than for cognitive trust, we found significant indirect effects on performance (via work group motivation) for both types of trust. We discuss the performance effects of trust congruence and incongruence between supervisors and work groups, as well as implications for practice and future research. (c) 2015 APA, all rights reserved.
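
    As a rough illustration of the polynomial regression and response surface approach mentioned above, the hedged sketch below regresses a performance measure on supervisor trust, group trust, and their second-order terms, then reads off slope and curvature along the congruence and incongruence lines. All variable names, data, and coefficients are hypothetical, and the sketch is not the authors' analysis.

        # Hedged sketch: second-order polynomial regression with response-surface
        # indices; all data and names are hypothetical.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 120
        S = rng.normal(0, 1, n)                      # supervisor trust (centered)
        G = rng.normal(0, 1, n)                      # work-group trust (centered)
        performance = 0.4 * S + 0.4 * G - 0.3 * (S - G) ** 2 + rng.normal(0, 0.5, n)

        X = np.column_stack([np.ones(n), S, G, S ** 2, S * G, G ** 2])
        b0, b1, b2, b3, b4, b5 = np.linalg.lstsq(X, performance, rcond=None)[0]

        # Slope and curvature along the congruence (S = G) and incongruence (S = -G) lines.
        print("congruence line:   slope =", round(b1 + b2, 2), " curvature =", round(b3 + b4 + b5, 2))
        print("incongruence line: slope =", round(b1 - b2, 2), " curvature =", round(b3 - b4 + b5, 2))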

  14. Schizophrenia patients show task switching deficits consistent with N-methyl-d-aspartate system dysfunction but not global executive deficits: implications for pathophysiology of executive dysfunction in schizophrenia.

    PubMed

    Wylie, Glenn R; Clark, E A; Butler, P D; Javitt, D C

    2010-05-01

    Schizophrenia is associated with cognitive processing deficits, including deficits in executive processing, that represent a core component of the disorder. In the Task Switching Test, subjects view ambiguous stimuli and must alternate between competing rules to generate correct responses. Subjects show worse performance (prolonged response time and/or increased error rates) on the first response after a switch than on subsequent responses ("switch costs"), as well as performing worse when stimuli are incongruent as opposed to congruent ("congruence costs"). Finally, subjects show worse performance in the dual vs single task condition ("mixing costs"). In monkeys, the N-methyl-D-aspartate (NMDA) antagonist ketamine has been shown to increase congruence but not switch costs. Here, subjects viewed colored letters and had to respond alternately based upon letter (X vs O) or color (red vs blue). Switch, congruence and mixing costs were calculated. Patients with schizophrenia (n = 16) and controls (n = 17) showed similar switch costs, consistent with prior literature. Patients nevertheless showed increased congruence and mixing costs. In addition, relative to controls, patients showed worse performance across conditions in the letter vs color tasks, suggesting deficits in form vs color processing. Overall, while confirming executive dysfunction in schizophrenia, this study indicates that not all aspects of executive control are impaired and that the task switching paradigm may be useful for evaluating neurochemical vs neuroanatomic hypotheses of schizophrenia.

  15. Processing of task-irrelevant emotional faces impacted by implicit sequence learning.

    PubMed

    Peng, Ming; Cai, Mengfei; Zhou, Renlai

    2015-12-02

    Attentional load may be increased by task-relevant attention, such as the difficulty of the task, or by task-irrelevant attention, such as an unexpected light-spot on the screen. Several studies have focused on the influence of task-relevant attentional load on task-irrelevant emotion processing. In this study, we used event-related potentials to examine the impact of task-irrelevant attentional load on task-irrelevant expression processing. Eighteen participants identified the color of a word (i.e. the color Stroop task) while a picture of a fearful or a neutral face was shown in the background. The task-irrelevant attentional load was increased by regularly presented congruence trials (congruence between the color and the meaning of the word) in the regular condition because implicit sequence learning was induced. We compared the task-irrelevant expression processing between the regular condition and the random condition (in which the congruence and incongruence trials were presented randomly). Behaviorally, reaction times were faster for fearful faces than for neutral faces in the random condition, whereas no significant difference was found in the regular condition. The event-related potential results indicated enhanced P2, N2, and P3 amplitudes for fearful relative to neutral faces in the random condition. In comparison, only the P2 differed significantly for the two types of expressions in the regular condition. The study showed that the attentional load increase induced by implicit sequence learning influenced the late processing of task-irrelevant expressions.

  16. Performance of normal adults and children on central auditory diagnostic tests and their corresponding visual analogs.

    PubMed

    Bellis, Teri James; Ross, Jody

    2011-09-01

    It has been suggested that, in order to validate a diagnosis of (C)APD (central auditory processing disorder), testing using direct cross-modal analogs should be performed to demonstrate that deficits exist solely or primarily in the auditory modality (McFarland and Cacace, 1995; Cacace and McFarland, 2005). This modality-specific viewpoint is controversial and not universally accepted (American Speech-Language-Hearing Association [ASHA], 2005; Musiek et al, 2005). Further, no such analogs have been developed to date, and neither the feasibility of such testing in normally functioning individuals nor the concurrent validity of cross-modal analogs has been established. The purpose of this study was to investigate the feasibility of cross-modal testing by examining the performance of normal adults and children on four tests of central auditory function and their corresponding visual analogs. In addition, this study investigated the degree to which concurrent validity of auditory and visual versions of these tests could be demonstrated. An experimental repeated measures design was employed. Participants consisted of two groups (adults, n=10; children, n=10) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of auditory/otologic, language, learning, neurologic, or related disorders. Visual analogs of four tests in common clinical use for the diagnosis of (C)APD were developed (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; Duration Patterns [Pinheiro and Musiek, 1985]; and the Random Gap Detection Test [RGDT; Keith, 2000]). Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. ANOVAs (analyses of variance) were used to examine effects of group, modality, and laterality (for the Dichotic/Dichoptic Digits tests) or response condition (for the auditory and visual Frequency Patterns and Duration Patterns tests). Pearson product-moment correlations were used to investigate relationships between auditory and visual performance. Adults performed significantly better than children on the Dichotic/Dichoptic Digits tests. Results also revealed a significant effect of modality, with auditory better than visual, and a significant modality×laterality interaction, with a right-ear advantage seen for the auditory task and a left-visual-field advantage seen for the visual task. For the Frequency Patterns test and its visual analog, results revealed a significant modality×response condition interaction, with humming better than labeling for the auditory version but the reversed effect for the visual version. For Duration Patterns testing, visual performance was significantly poorer than auditory performance. Due to poor test-retest reliability and ceiling effects for the auditory and visual gap-detection tasks, analyses could not be performed. No cross-modal correlations were observed for any test. Results demonstrated that cross-modal testing is at least feasible using easily accessible computer hardware and software. The lack of any cross-modal correlations suggests independent processing mechanisms for auditory and visual versions of each task. Examination of performance in individuals with central auditory and pan-sensory disorders is needed to determine the utility of cross-modal analogs in the differential diagnosis of (C)APD. American Academy of Audiology.

  17. Cross-modal perceptual load: the impact of modality and individual differences.

    PubMed

    Sandhu, Rajwant; Dyson, Benjamin James

    2016-05-01

    Visual distractor processing tends to be more pronounced when the perceptual load (PL) of a task is low compared to when it is high [perceptual load theory (PLT); Lavie in J Exp Psychol Hum Percept Perform 21(3):451-468, 1995]. While PLT is well established in the visual domain, application to cross-modal processing has produced mixed results, and the current study was designed in an attempt to improve previous methodologies. First, we assessed PLT using response competition, a typical metric from the uni-modal domain. Second, we looked at the impact of auditory load on visual distractors, and of visual load on auditory distractors, within the same individual. Third, we compared individual uni- and cross-modal selective attention abilities, by correlating performance with the visual Attentional Network Test (ANT). Fourth, we obtained a measure of the relative processing efficiency between vision and audition, to investigate whether processing ease influences the extent of distractor processing. Although distractor processing was evident during both attend-auditory and attend-visual conditions, we found that PL did not modulate processing of either visual or auditory distractors. We also found support for a correlation between the uni-modal (visual) ANT and our cross-modal task, but only when the distractors were visual. Finally, although auditory processing was more impacted by visual distractors, our measure of processing efficiency only accounted for this asymmetry in the auditory high-load condition. The results are discussed with respect to the continued debate regarding the shared or separate nature of processing resources across modalities.

  18. Thumb carpometacarpal joint congruence during functional tasks and thumb range-of-motion activities

    PubMed Central

    Halilaj, Eni; Moore, Douglas C; Patel, Tarpit K; Laidlaw, David H; Ladd, Amy L; Weiss, Arnold-Peter C; Crisco, Joseph J

    2017-01-01

    Joint incongruity is often cited as a possible etiological factor for the high incidence of thumb carpometacarpal (CMC) joint osteoarthritis (OA) in older women. There is evidence suggesting that biomechanics plays a role in CMC OA progression, but little is known about how CMC joint congruence, specifically, differs among different cohorts. The purpose of this in vivo study was to determine if CMC joint congruence differs with sex, age, and early stage OA for different thumb positions. Using CT data from 155 subjects and a congruence metric that is based on both articular morphology and joint posture, we did not find any differences in CMC joint congruence with sex or age group, but found that patients in the early stages of OA exhibit lower congruence than healthy subjects of the same age group. PMID:25570956

  19. Thumb carpometacarpal joint congruence during functional tasks and thumb range-of-motion activities.

    PubMed

    Halilaj, Eni; Moore, Douglas C; Patel, Tarpit K; Laidlaw, David H; Ladd, Amy L; Weiss, Arnold-Peter C; Crisco, Joseph J

    2014-01-01

    Joint incongruity is often cited as a possible etiological factor for the high incidence of thumb carpometacarpal (CMC) joint osteoarthritis (OA) in older women. There is evidence suggesting that biomechanics plays a role in CMC OA progression, but little is known about how CMC joint congruence, specifically, differs among different cohorts. The purpose of this in vivo study was to determine if CMC joint congruence differs with sex, age, and early stage OA for different thumb positions. Using CT data from 155 subjects and a congruence metric that is based on both articular morphology and joint posture, we did not find any differences in CMC joint congruence with sex or age group, but found that patients in the early stages of OA exhibit lower congruence than healthy subjects of the same age group.

  20. Preservation of crossmodal selective attention in healthy aging

    PubMed Central

    Hugenschmidt, Christina E.; Peiffer, Ann M.; McCoy, Thomas P.; Hayasaka, Satoru; Laurienti, Paul J.

    2010-01-01

    The goal of the present study was to determine if older adults benefited from attention to a specific sensory modality in a voluntary attention task and evidenced changes in voluntary or involuntary attention when compared to younger adults. Suppressing and enhancing effects of voluntary attention were assessed using two cued forced-choice tasks, one that asked participants to localize and one that asked them to categorize visual and auditory targets. Involuntary attention was assessed using the same tasks, but with no attentional cues. The effects of attention were evaluated using traditional comparisons of means and Cox proportional hazards models. All analyses showed that older adults benefited behaviorally from selective attention in both visual and auditory conditions, including robust suppressive effects of attention. Of note, the performance of the older adults was commensurate with that of younger adults in almost all analyses, suggesting that older adults can successfully engage crossmodal attention processes. Thus, age-related increases in distractibility across sensory modalities are likely due to mechanisms other than deficits in attentional processing. PMID:19404621
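
    The Cox proportional hazards approach mentioned above treats each reaction time as a time-to-event outcome, so that a hazard ratio above 1 for a condition indicates faster responses in that condition. The hedged sketch below, which uses the lifelines package and entirely hypothetical data, illustrates the idea only and is not the authors' analysis.

        # Hedged sketch: reaction times analysed as time-to-event data with a Cox model
        # (lifelines package); data and effect size are hypothetical.
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(2)
        n = 200
        cued = rng.integers(0, 2, n)                                   # 1 = target in attended modality
        rt = rng.exponential(scale=np.where(cued == 1, 450.0, 520.0))  # cued responses tend to be faster
        df = pd.DataFrame({"rt": rt, "responded": 1, "cued": cued})

        cph = CoxPHFitter()
        cph.fit(df, duration_col="rt", event_col="responded")
        cph.print_summary()   # a hazard ratio > 1 for 'cued' corresponds to faster responses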

  1. Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration.

    PubMed

    Ikumi, Nara; Soto-Faraco, Salvador

    2016-01-01

    Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping than segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands.

  2. Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration

    PubMed Central

    Ikumi, Nara; Soto-Faraco, Salvador

    2017-01-01

    Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping than segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands. PMID:28154529

  3. Neuromagnetic brain activities associated with perceptual categorization and sound-content incongruency: a comparison between monosyllabic words and pitch names

    PubMed Central

    Tsai, Chen-Gia; Chen, Chien-Chung; Wen, Ya-Chien; Chou, Tai-Li

    2015-01-01

    In human cultures, the perceptual categorization of musical pitches relies on pitch-naming systems. A sung pitch name concurrently holds the information of fundamental frequency and pitch name. These two aspects may be either congruent or incongruent with regard to pitch categorization. The present study aimed to compare the neuromagnetic responses to musical and verbal stimuli during congruency judgments, for example judging whether the pitch C4 and the pitch name do sung on it form a congruent pair in a C-major context (the pitch-semantic task), or whether the meaning of a word matches the speaker’s identity (the voice-semantic task). Both the behavioral data and neuromagnetic data showed that congruency detection of the speaker’s identity and word meaning was slower than that of the pitch and pitch name. Congruency effects of musical stimuli revealed that pitch categorization and semantic processing of pitch information were associated with P2m and N400m, respectively. For verbal stimuli, P2m and N400m did not show any congruency effect. In both the pitch-semantic task and the voice-semantic task, we found that incongruent stimuli evoked stronger slow waves with a latency of 500–600 ms than congruent stimuli. These findings shed new light on the neural mechanisms underlying pitch-naming processes. PMID:26347638

  4. Enhanced tactile encoding and memory recognition in congenital blindness.

    PubMed

    D'Angiulli, Amedeo; Waraich, Paul

    2002-06-01

    Several behavioural studies have shown that early-blind persons possess superior tactile skills. Since neurophysiological data show that early-blind persons recruit visual as well as somatosensory cortex to carry out tactile processing (cross-modal plasticity), blind persons' sharper tactile skills may be related to cortical re-organisation resulting from loss of vision early in their life. To examine the nature of blind individuals' tactile superiority and its implications for cross-modal plasticity, we compared the tactile performance of congenitally totally blind, low-vision and sighted children on a raised-line picture identification test and re-test, assessing effects of task familiarity, exploratory strategy and memory recognition. What distinguished the blind from the other children was higher memory recognition and higher tactile encoding associated with efficient exploration. These results suggest that enhanced perceptual encoding and recognition memory may be two cognitive correlates of cross-modal plasticity in congenital blindness.

  5. Is cross-modal integration of emotional expressions independent of attentional resources?

    PubMed

    Vroomen, J; Driver, J; de Gelder, B

    2001-12-01

    In this study, we examined whether integration of visual and auditory information about emotions requires limited attentional resources. Subjects judged whether a voice expressed happiness or fear, while trying to ignore a concurrently presented static facial expression. As an additional task, the subjects had to add two numbers together rapidly (Experiment 1), count the occurrences of a target digit in a rapid serial visual presentation (Experiment 2), or judge the pitch of a tone as high or low (Experiment 3). The visible face had an impact on judgments of the emotion of the heard voice in all the experiments. This cross-modal effect was independent of whether or not the subjects performed a demanding additional task. This suggests that integration of visual and auditory information about emotions may be a mandatory process, unconstrained by attentional resources.

  6. Use a rabbit or a rhino to sell a carrot? The effect of character-product congruence on children's liking of healthy foods.

    PubMed

    de Droog, Simone M; Buijzen, Moniek; Valkenburg, Patti M

    2012-01-01

    This study investigated whether unfamiliar characters are as effective as familiar characters in stimulating children's affective responses toward healthy foods. In particular, the authors investigated whether an unfamiliar character which is congruent with a product can be as effective as a familiar character. The authors tested 2 types of character-product congruence: conceptual congruence (on the basis of a familiar link), and perceptual congruence (on the basis of color similarity). In a repeated measures design, 166 children (4-6 years old) were exposed to a picture of a carrot combined randomly with 5 different types of character: an (incongruent) familiar character and four unfamiliar characters varying in character-product congruence (i.e., both conceptually and perceptually congruent, conceptual only, perceptual only, and incongruent). The authors measured children's automatic affective responses toward these character-product combinations using a time-constrained task, and elaborate affective responses using a nonconstrained task. Results revealed that the conceptually congruent unfamiliar characters were just as effective as the familiar character in increasing children's automatic affective responses. However, the familiar character triggered the most positive elaborate affective responses. Results are explained in light of processing fluency and parasocial relationship theories.

  7. Working memory capacity predicts conflict-task performance.

    PubMed

    Gulbinaite, Rasa; Johnson, Addie

    2014-01-01

    The relationship between the ability to maintain task goals and working memory capacity (WMC) is firmly established, but evidence for WMC-related differences in conflict processing is mixed. We investigated whether WMC (measured using two complex-span tasks) mediates differences in adjustments of cognitive control in response to conflict. Participants performed a Simon task in which congruent and incongruent trials were equiprobable, but in which the proportion of congruency repetitions (congruent trials followed by congruent trials or incongruent trials followed by incongruent trials) and thus the need for trial-by-trial adjustments in cognitive control varied by block. The overall Simon effect did not depend on WMC. However, for the low-WMC participants the Simon effect decreased as the proportion of congruency repetitions decreased, whereas for the high- and average-WMC participants it was relatively constant across conditions. Distribution analysis of the Simon effect showed more evidence for the inhibition of stimulus location in the low- than in the high-WMC participants, especially when the proportion of congruency repetitions was low. We hypothesize that low-WMC individuals exhibit more interference from task-irrelevant information due to weaker preparatory control prior to stimulus presentation and, thus, stronger reliance on reactive recruitment of cognitive control.
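
    The distribution analysis mentioned above is typically a delta-plot analysis: the Simon effect is computed at several reaction-time quantiles, and a shrinking effect in the slow quantiles is usually read as evidence for inhibition of the location-based response. The hedged sketch below uses hypothetical reaction times and is not the authors' analysis code.

        # Hedged sketch: Simon effect per reaction-time quantile (a delta plot); data are hypothetical.
        import numpy as np

        rng = np.random.default_rng(3)
        rt_congruent = rng.normal(480, 70, 500)
        rt_incongruent = rng.normal(510, 70, 500)

        quantiles = [0.1, 0.3, 0.5, 0.7, 0.9]
        qc = np.quantile(rt_congruent, quantiles)
        qi = np.quantile(rt_incongruent, quantiles)
        delta = qi - qc            # Simon effect at each quantile
        mean_rt = (qi + qc) / 2    # x-axis of the delta plot

        for q, m, d in zip(quantiles, mean_rt, delta):
            print(f"quantile {q:.1f}: mean RT {m:6.1f} ms, Simon effect {d:5.1f} ms")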

  8. Grammatical Gender Inhibition in Bilinguals

    PubMed Central

    Morales, Luis; Paolieri, Daniela; Bajo, Teresa

    2011-01-01

    Inhibitory control processes have been recently considered to be involved in interference resolution in bilinguals at the phonological level. In this study we explored if interference resolution is also carried out by this inhibitory mechanism at the grammatical level. Thirty-two bilinguals (Italian-L1 and Spanish-L2) participated. All of them completed two tasks. In the first one they had to name pictures in L2. We manipulated gender congruency between the two languages and the number of presentations of the pictures (1 and 5). Results showed a gender congruency effect with slower naming latencies in the incongruent condition. In the second task, participants were presented with the pictures practiced during the first naming task, but now they were asked to produce the L1 article. Results showed a grammatical gender congruency effect in L1 that increased for those words practiced five times in L2. Our conclusion is that an inhibitory mechanism was involved in the suppression of the native language during a picture naming task. Furthermore, this inhibitory process was also involved in suppressing grammatical gender when it was a source of competition between the languages. PMID:22046168

  9. Relation between brain activation and lexical performance.

    PubMed

    Booth, James R; Burman, Douglas D; Meyer, Joel R; Gitelman, Darren R; Parrish, Todd B; Mesulam, M Marsel

    2003-07-01

    Functional magnetic resonance imaging (fMRI) was used to determine whether performance on lexical tasks was correlated with cerebral activation patterns. We found that such relationships did exist and that their anatomical distribution reflected the neurocognitive processing routes required by the task. Better performance on intramodal tasks (determining if visual words were spelled the same or if auditory words rhymed) was correlated with more activation in unimodal regions corresponding to the modality of sensory input, namely the fusiform gyrus (BA 37) for written words and the superior temporal gyrus (BA 22) for spoken words. Better performance in tasks requiring cross-modal conversions (determining if auditory words were spelled the same or if visual words rhymed), on the other hand, was correlated with more activation in posterior heteromodal regions, including the supramarginal gyrus (BA 40) and the angular gyrus (BA 39). Better performance in these cross-modal tasks was also correlated with greater activation in unimodal regions corresponding to the target modality of the conversion process (i.e., fusiform gyrus for auditory spelling and superior temporal gyrus for visual rhyming). In contrast, performance on the auditory spelling task was inversely correlated with activation in the superior temporal gyrus possibly reflecting a greater emphasis on the properties of the perceptual input rather than on the relevant transmodal conversions. Copyright 2003 Wiley-Liss, Inc.

  10. Crossmodal interactions during non-linguistic auditory processing in cochlear-implanted deaf patients.

    PubMed

    Barone, Pascal; Chambaudie, Laure; Strelnikov, Kuzma; Fraysse, Bernard; Marx, Mathieu; Belin, Pascal; Deguine, Olivier

    2016-10-01

    Due to signal distortion, speech comprehension in cochlear-implanted (CI) patients relies strongly on visual information, a compensatory strategy supported by important cortical crossmodal reorganisations. Though crossmodal interactions are evident for speech processing, it is unclear whether a visual influence is observed in CI patients during non-linguistic visual-auditory processing, such as face-voice interactions, which are important in social communication. We analyse and compare visual-auditory interactions in CI patients and normal-hearing subjects (NHS) at equivalent auditory performance levels. Proficient CI patients and NHS performed a voice-gender categorisation in the visual-auditory modality from a morphing-generated voice continuum between male and female speakers, while ignoring the presentation of a male or female visual face. Our data show that during the face-voice interaction, CI deaf patients are strongly influenced by visual information when performing an auditory gender categorisation task, in spite of maximum recovery of auditory speech. No such effect is observed in NHS, even in situations of CI simulation. Our hypothesis is that the functional crossmodal reorganisation that occurs in deafness could influence nonverbal processing, such as face-voice interaction, which may be important for the patient's internal supramodal representation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Dissociable early attentional control mechanisms underlying cognitive and affective conflicts

    PubMed Central

    Chen, Taolin; Kendrick, Keith M.; Feng, Chunliang; Sun, Shiyue; Yang, Xun; Wang, Xiaogang; Luo, Wenbo; Yang, Suyong; Huang, Xiaoqi; Valdés-Sosa, Pedro A.; Gong, Qiyong; Fan, Jin; Luo, Yue-Jia

    2016-01-01

    It has been well documented that cognitive conflict is sensitive to the relative proportion of congruent and incongruent trials. However, few studies have examined whether affective conflict processing is modulated as a function of proportion congruency (PC). To address this question we recorded event-related potentials (ERP) while subjects performed both cognitive and affective face-word Stroop tasks. By varying the proportion of congruent and incongruent trials in each block, we examined the extent to which PC impacts both cognitive and affective conflict control at different temporal stages. Results showed that in the cognitive task an anteriorly localized early N2 component occurred predominantly in the low proportion congruency context, whereas in the affective task it was found to occur in the high proportion congruency one. The N2 effects across the two tasks were localized to the dorsolateral prefrontal cortex, where responses were increased in the cognitive task but decreased in the affective one. Furthermore, high proportions of congruent items produced both larger amplitude of a posteriorly localized sustained potential component and a larger behavioral Stroop effect in cognitive and affective tasks. Our findings suggest that cognitive and affective conflicts engage early dissociable attentional control mechanisms and a later common conflict response system. PMID:27892513

  12. Examining Lateralized Lexical Ambiguity Processing Using Dichotic and Cross-Modal Tasks

    ERIC Educational Resources Information Center

    Atchley, Ruth Ann; Grimshaw, Gina; Schuster, Jonathan; Gibson, Linzi

    2011-01-01

    The individual roles played by the cerebral hemispheres during the process of language comprehension have been extensively studied in tasks that require individuals to read text (for review see Jung-Beeman, 2005). However, it is not clear whether or not some aspects of the theorized laterality models of semantic comprehension are a result of the…

  13. ERP Evidence of Early Cross-Modal Links between Auditory Selective Attention and Visuo-Spatial Memory

    ERIC Educational Resources Information Center

    Bomba, Marie D.; Singhal, Anthony

    2010-01-01

    Previous dual-task research pairing complex visual tasks involving non-spatial cognitive processes during dichotic listening has shown effects on the late component (Ndl) of the negative difference selective attention waveform but no effects on the early (Nde) response, suggesting that the Ndl, but not the Nde, is affected by non-spatial…

  14. Musicians are more consistent: Gestural cross-modal mappings of pitch, loudness and tempo in real-time

    PubMed Central

    Küssner, Mats B.; Tidhar, Dan; Prior, Helen M.; Leech-Wilkinson, Daniel

    2014-01-01

    Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded classification and forced-choice matching tasks, investigations of representations of concurrently varied sound features (e.g., pitch, loudness and tempo) with overt gestures—accounting for the intrinsic link between movement and sound—are scant. To explore the role of bodily gestures in cross-modal mappings of auditory stimuli we asked 64 musically trained and untrained participants to represent pure tones—continually sounding and concurrently varied in pitch, loudness and tempo—with gestures while the sound stimuli were played. We hypothesized musical training to lead to more consistent mappings between pitch and height, loudness and distance/height, and tempo and speed of hand movement and muscular energy. Our results corroborate previously reported pitch vs. height (higher pitch leading to higher elevation in space) and tempo vs. speed (increasing tempo leading to increasing speed of hand movement) associations, but also reveal novel findings pertaining to musical training which influenced consistency of pitch mappings, annulling a commonly observed bias for convex (i.e., rising–falling) pitch contours. Moreover, we reveal effects of interactions between musical parameters on cross-modal mappings (e.g., pitch and loudness on speed of hand movement), highlighting the importance of studying auditory stimuli concurrently varied in different musical parameters. Results are discussed in light of cross-modal cognition, with particular emphasis on studies within (embodied) music cognition. Implications for theoretical refinements and potential clinical applications are provided. PMID:25120506

  15. Attentional Factors in Conceptual Congruency

    ERIC Educational Resources Information Center

    Santiago, Julio; Ouellet, Marc; Roman, Antonio; Valenzuela, Javier

    2012-01-01

    Conceptual congruency effects are biases induced by an irrelevant conceptual dimension of a task (e.g., location in vertical space) on the processing of another, relevant dimension (e.g., judging words' emotional evaluation). Such effects are a central empirical pillar for recent views about how the mind/brain represents concepts. In the present…

  16. The Congruency Sequence Effect 3.0: A Critical Test of Conflict Adaptation

    PubMed Central

    Duthoo, Wout; Abrahamse, Elger L.; Braem, Senne; Boehler, C. Nico; Notebaert, Wim

    2014-01-01

    Over the last two decades, the congruency sequence effect (CSE), the finding of a reduced congruency effect following incongruent trials in conflict tasks, has played a central role in advancing research on cognitive control. According to the influential conflict-monitoring account, the CSE reflects adjustments in selective attention that enhance task focus when needed, often termed conflict adaptation. However, this dominant interpretation of the CSE has been called into question by several alternative accounts that stress the role of episodic memory processes: feature binding and (stimulus-response) contingency learning. To evaluate the notion of conflict adaptation in accounting for the CSE, we constructed versions of three widely used experimental paradigms (the colour-word Stroop, picture-word Stroop and flanker task) that effectively control for feature binding and contingency learning. Results revealed that a CSE can emerge in all three tasks. This strongly suggests a contribution of attentional control to the CSE and highlights the potential of these unprecedentedly clean paradigms for further examining cognitive control. PMID:25340396

  17. Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias.

    PubMed

    Invitto, Sara; Calcagnì, Antonio; Mignozzi, Arianna; Scardino, Rosanna; Piraino, Giulia; Turchi, Daniele; De Feudis, Irio; Brunetti, Antonio; Bevilacqua, Vitoantonio; de Tommaso, Marina

    2017-01-01

    Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition and how potential modulation of this processing induced by music could be influenced by the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 Event-Related Potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings of the study were that musicians' behavioral responses and N170 components were more affected by the emotional value of the music administered in the emotional go/no-go task, and that this bias was also apparent in responses to the non-target emotional faces. This suggests that emotional information, coming from multiple sensory channels, activates a crossmodal integration process that depends upon the emotional salience of the stimuli and the listener's appraisal.
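
    As a rough illustration of the linear mixed-effects step mentioned above, the hedged sketch below fits a model of N170 amplitude with fixed effects for musicianship and the emotional value of the music and a random intercept per participant, using statsmodels. The formula, factor names, effect sizes, and data are all hypothetical; this is not the authors' model.

        # Hedged sketch: linear mixed-effects model of N170 amplitude with a random
        # intercept per participant (statsmodels); formula, factors, and data are hypothetical.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        subject = np.repeat(np.arange(24), 20)                # 24 hypothetical participants
        musician = (subject < 12).astype(int)                 # 12 musicians, 12 non-musicians
        emotional_music = rng.integers(0, 2, subject.size)    # music rated emotionally salient or not
        amplitude = (-4.0 - 0.8 * emotional_music * musician  # assumed interaction effect
                     + rng.normal(0, 1, subject.size)
                     + rng.normal(0, 0.5, 24)[subject])       # per-subject random intercept

        data = pd.DataFrame({"amplitude": amplitude, "musician": musician,
                             "emotional_music": emotional_music, "subject": subject})
        model = smf.mixedlm("amplitude ~ emotional_music * musician", data, groups=data["subject"])
        print(model.fit().summary())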

  18. Kinesthetic alexia due to left parietal lobe lesions.

    PubMed

    Ihori, Nami; Kawamura, Mitsuru; Araki, Shigeo; Kawachi, Juro

    2002-01-01

    To investigate the neuropsychological mechanisms of kinesthetic alexia, we asked 7 patients who showed kinesthetic alexia with preserved visual reading after damage to the left parietal region to perform tasks consisting of kinesthetic written reproduction (writing down the same letter as the kinesthetic stimulus), kinesthetic reading aloud, visual written reproduction (copying letters), and visual reading aloud of hiragana (Japanese phonograms). We compared the performance in these tasks and the lesion sites in each patient. The results suggested that deficits in any one of the following functions might cause kinesthetic alexia: (1) the retrieval of kinesthetic images (motor engrams) of characters from kinesthetic stimuli, (2) kinesthetic images themselves, (3) access to cross-modal association from kinesthetic images, and (4) cross-modal association itself (retrieval of auditory and visual images from kinesthetic images of characters). Each of these factors seemed to be related to different lesion sites in the left parietal lobe. Copyright 2002 S. Karger AG, Basel

  19. What is the link between synaesthesia and sound symbolism?

    PubMed Central

    Bankieris, Kaitlyn; Simner, Julia

    2015-01-01

    Sound symbolism is a property of certain words which have a direct link between their phonological form and their semantic meaning. In certain instances, sound symbolism can allow non-native speakers to understand the meanings of etymologically unfamiliar foreign words, although the mechanisms driving this are not well understood. We examined whether sound symbolism might be mediated by the same types of cross-modal processes that typify synaesthetic experiences. Synaesthesia is an inherited condition in which sensory or cognitive stimuli (e.g., sounds, words) cause additional, unusual cross-modal percepts (e.g., sounds trigger colours, words trigger tastes). Synaesthesia may be an exaggeration of normal cross-modal processing, and if so, there may be a link between synaesthesia and the type of cross-modality inherent in sound symbolism. To test this we predicted that synaesthetes would have superior understanding of unfamiliar (sound symbolic) foreign words. In our study, 19 grapheme-colour synaesthetes and 57 non-synaesthete controls were presented with 400 adjectives from 10 unfamiliar languages and were asked to guess the meaning of each word in a two-alternative forced-choice task. Both groups showed superior understanding compared to chance levels, but synaesthetes significantly outperformed controls. This heightened ability suggests that sound symbolism may rely on the types of cross-modal integration that drive synaesthetes’ unusual experiences. It also suggests that synaesthesia endows or co-occurs with heightened multi-modal skills, and that this can arise in domains unrelated to the specific form of synaesthesia. PMID:25498744

  20. Priming within and across modalities: exploring the nature of rCBF increases and decreases.

    PubMed

    Badgaiyan, R D; Schacter, D L; Alpert, N M

    2001-02-01

    Neuroimaging studies suggest that within-modality priming is associated with reduced regional cerebral blood flow (rCBF) in the extrastriate area, whereas cross-modality priming is associated with increased rCBF in prefrontal cortex. To characterize the nature of rCBF changes in within- and cross-modality priming, we conducted two neuroimaging experiments using positron emission tomography (PET). In experiment 1, rCBF changes in within-modality auditory priming on a word stem completion task were observed under same- and different-voice conditions. Both conditions were associated with decreased rCBF in extrastriate cortex. In the different-voice condition there were additional rCBF changes in the middle temporal gyrus and prefrontal cortex. Results suggest that the extrastriate involvement in within-modality priming is sensitive to a change in sensory modality of target stimuli between study and test, but not to a change in the feature of a stimulus within the same modality. In experiment 2, we studied cross-modality priming on a visual stem completion test after encoding under full- and divided-attention conditions. Increased rCBF in the anterior prefrontal cortex was observed in the full- but not in the divided-attention condition. Because explicit retrieval is compromised after encoding under the divided-attention condition, prefrontal involvement in cross-modality priming indicates recruitment of an aspect of explicit retrieval mechanism. The aspect of explicit retrieval that is most likely to be involved in cross-modality priming is the familiarity effect. Copyright 2001 Academic Press.

  1. Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices.

    PubMed

    Krebber, Martin; Harwood, James; Spitzer, Bernhard; Keil, Julian; Senkowski, Daniel

    2015-08-15

    When touching and viewing a moving surface our visual and somatosensory systems receive congruent spatiotemporal input. Behavioral studies have shown that motion congruence facilitates interplay between visual and tactile stimuli, but the neural mechanisms underlying this interplay are not well understood. Neural oscillations play a role in motion processing and multisensory integration. They may also be crucial for visuotactile motion processing. In this electroencephalography study, we applied linear beamforming to examine the impact of visuotactile motion congruence on beta and gamma band activity (GBA) in visual and somatosensory cortices. Visual and tactile inputs consisted of gratings that moved either in the same or in different directions. Participants performed a target detection task that was unrelated to motion congruence. While there were no effects in the beta band (13-21Hz), the power of GBA (50-80Hz) in visual and somatosensory cortices was larger for congruent compared with incongruent motion stimuli. This suggests enhanced bottom-up multisensory processing when visual and tactile gratings moved in the same direction. Supporting its behavioral relevance, GBA was correlated with shorter reaction times in the target detection task. We conclude that motion congruence plays an important role in the integrative processing of visuotactile stimuli in sensory cortices, as reflected by oscillatory responses in the gamma band. Copyright © 2015 Elsevier Inc. All rights reserved.
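
    To make the band-power measure concrete, the hedged sketch below estimates 50-80 Hz power for a single synthetic EEG channel with Welch's method. It only illustrates band-power estimation; it does not reproduce the linear beamforming source analysis used in the study, and the sampling rate and signal are assumptions.

        # Hedged sketch: 50-80 Hz band power from one synthetic EEG channel via Welch's method.
        import numpy as np
        from scipy.signal import welch

        fs = 500.0                                    # assumed sampling rate (Hz)
        rng = np.random.default_rng(6)
        t = np.arange(0, 2.0, 1.0 / fs)
        eeg = np.sin(2 * np.pi * 65 * t) + rng.normal(0, 1, t.size)   # 65 Hz "gamma" plus noise

        freqs, psd = welch(eeg, fs=fs, nperseg=256)
        band = (freqs >= 50) & (freqs <= 80)
        gamma_power = psd[band].mean()                # mean spectral power within 50-80 Hz
        print(f"gamma-band power: {gamma_power:.3f}")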

  2. Hand Movement Deviations in a Visual Search Task with Cross Modal Cuing

    ERIC Educational Resources Information Center

    Aslan, Asli; Aslan, Hurol

    2007-01-01

    The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants'…

  3. Color of scents: chromatic stimuli modulate odor responses in the human brain.

    PubMed

    Osterbauer, Robert A; Matthews, Paul M; Jenkinson, Mark; Beckmann, Christian F; Hansen, Peter C; Calvert, Gemma A

    2005-06-01

    Color has a profound effect on the perception of odors. For example, strawberry-flavored drinks smell more pleasant when colored red than green and descriptions of the "nose" of a wine are dramatically influenced by its color. Using functional magnetic resonance imaging, we demonstrate a neurophysiological correlate of these cross-modal visual influences on olfactory perception. Subjects were scanned while exposed either to odors or colors in isolation or to color-odor combinations that were rated on the basis of how well they were perceived to match. Activity in caudal regions of the orbitofrontal cortex and in the insular cortex increased progressively with the perceived congruency of the odor-color pairs. These findings demonstrate the neuronal correlates of olfactory response modulation by color cues in brain areas previously identified as encoding the hedonic value of smells.

  4. Interpersonal Congruence, Transactive Memory, and Feedback Processes: An Integrative Model of Group Learning

    ERIC Educational Resources Information Center

    London, Manuel; Polzer, Jeffrey T.; Omoregie, Heather

    2005-01-01

    This article presents a multilevel model of group learning that focuses on antecedents and consequences of interpersonal congruence, transactive memory, and feedback processes. The model holds that members' self-verification motives and situational conditions (e.g., member diversity and task demands) give rise to identity negotiation behaviors…

  5. Practice and Colour-Word Integration in Stroop Interference

    ERIC Educational Resources Information Center

    Gul, Amara; Humphreys, Glyn W.

    2015-01-01

    Congruency effects were examined using a manual response version of the Stroop task in which the relationship between the colour word and its hue on incongruent trials was either kept constant or varied randomly across different pairings within the stimulus set. Congruency effects were increased in the condition where the incongruent hue-word…

  6. Spatial Attention Effects during Conscious and Nonconscious Processing of Visual Features and Objects

    ERIC Educational Resources Information Center

    Tapia, Evelina; Breitmeyer, Bruno G.; Jacob, Jane; Broyles, Elizabeth C.

    2013-01-01

    Flanker congruency effects were measured in a masked flanker task to assess the properties of spatial attention during conscious and nonconscious processing of form, color, and conjunctions of these features. We found that (1) consciously and nonconsciously processed colored shape distractors (i.e., flankers) produce flanker congruency effects;…

  7. Interpreting instructional cues in task switching procedures: the role of mediator retrieval.

    PubMed

    Logan, Gordon D; Schneider, Darryl W

    2006-03-01

    In 3 experiments the role of mediators in task switching with transparent and nontransparent cues was examined. Subjects switched between magnitude (greater or less than 5) and parity (odd or even) judgments of single digits. A cue-target congruency effect indicated mediator use: subjects responded faster to congruent cue-target combinations (e.g., ODD-3) than to incongruent cue-target combinations (e.g., ODD-4). Experiment 1 revealed significant congruency effects with transparent word cues (ODD, EVEN, HIGH, and LOW) and with relatively transparent letter cues (O, E, H, and L) but not with nontransparent letter cues (D, V, G, and W). Experiment 2 revealed significant congruency effects after subjects who were trained with nontransparent letter cues were informed of the relations between cues and word mediators halfway through the experiment. Experiment 3 showed that congruency effects with relatively transparent letter cues diminished over 10 sessions of practice, suggesting that subjects used mediators less as practice progressed. The results are discussed in terms of the role of mediators in interpreting instructional cues.

  8. Fast parallel DNA-based algorithms for molecular computation: quadratic congruence and factoring integers.

    PubMed

    Chang, Weng-Long

    2012-03-01

    Assume that n is a positive integer. If there is an integer M such that M² ≡ C (mod n), i.e., the congruence has a solution, then C is said to be a quadratic congruence (mod n). If the congruence does not have a solution, then C is said to be a quadratic noncongruence (mod n). The task of solving this problem is central to many important applications, the most obvious being cryptography. In this article, we describe a DNA-based algorithm for solving quadratic congruences and factoring integers. In addition to this novel contribution, we also show the utility of our encoding scheme and of the algorithm's submodules. We demonstrate how a variety of arithmetic, shift, and comparison operations, namely bitwise and full addition, subtraction, left shifting, and comparison, can be performed using strands of DNA.
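
    The decision problem described above can be illustrated with a conventional brute-force check (not the DNA-based algorithm itself): C is a quadratic congruence (mod n) exactly when some M in 0..n-1 satisfies M² ≡ C (mod n).

        # Hedged sketch: brute-force test of the quadratic-congruence definition (not the DNA algorithm).
        def is_quadratic_congruence(C, n):
            """Return True if M**2 ≡ C (mod n) has a solution M."""
            return any(pow(M, 2, n) == C % n for M in range(n))

        print(is_quadratic_congruence(2, 7))   # True:  3**2 = 9 ≡ 2 (mod 7)
        print(is_quadratic_congruence(3, 7))   # False: 3 is a quadratic noncongruence mod 7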

  9. Cognitive Control Acts Locally

    ERIC Educational Resources Information Center

    Notebaert, Wim; Verguts, Tom

    2008-01-01

    Cognitive control adjusts information processing to momentary needs and task requirements. We investigated conflict adaptation when participants are performing two tasks, a Simon task and a SNARC task. The results indicated that one congruency effect (e.g., Simon) was reduced after conflict in the other task (e.g., SNARC), but only when both tasks…

  10. Learning to perceive differences in solid shape through vision and touch.

    PubMed

    Norman, J Farley; Clayton, Anna Marie; Norman, Hideko F; Crabtree, Charles E

    2008-01-01

    A single experiment was designed to investigate perceptual learning and the discrimination of 3-D object shape. Ninety-six observers were presented with naturally shaped solid objects either visually, haptically, or across the modalities of vision and touch. The observers' task was to judge whether the two sequentially presented objects on any given trial possessed the same or different 3-D shapes. The results of the experiment revealed that significant perceptual learning occurred in all modality conditions, both unimodal and cross-modal. The amount of the observers' perceptual learning, as indexed by increases in hit rate and d', was similar for all of the modality conditions. The observers' hit rates were highest for the unimodal conditions and lowest in the cross-modal conditions. Lengthening the inter-stimulus interval from 3 to 15 s led to increases in hit rates and decreases in response bias. The results also revealed the existence of an asymmetry between two otherwise equivalent cross-modal conditions: in particular, the observers' perceptual sensitivity was higher for the vision-haptic condition and lower for the haptic-vision condition. In general, the results indicate that effective cross-modal shape comparisons can be made between the modalities of vision and active touch, but that complete information transfer does not occur.
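    The hit rate and d' mentioned above are standard signal-detection indices. As a hedged illustration, d' can be computed as the difference between the z-transformed hit and false-alarm rates; the small correction for extreme proportions below is an assumption, not a detail taken from the study:

```python
from scipy.stats import norm


def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Compute d' = z(hit rate) - z(false-alarm rate).

    A small correction (+0.5 to counts, +1 to totals) keeps both rates away
    from 0 and 1 so the z-transform stays finite; this particular correction
    is an assumption, not a detail reported in the study.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)


# Example: 40 hits / 10 misses on "different" trials and 15 false alarms /
# 35 correct rejections on "same" trials gives d' of roughly 1.3.
print(round(d_prime(40, 10, 15, 35), 2))
```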

  11. Interaction between Phonemic Abilities and Syllable Congruency Effect in Young Readers

    ERIC Educational Resources Information Center

    Chetail, Fabienne; Mathey, Stephanie

    2013-01-01

    This study investigated whether and to what extent phonemic abilities of young readers (Grade 5) influence syllabic effects in reading. More precisely, the syllable congruency effect was tested in the lexical decision task combined with masked priming in eleven-year-old children. Target words were preceded by a pseudo-word prime sharing the first…

  12. Coupling between Theta Oscillations and Cognitive Control Network during Cross-Modal Visual and Auditory Attention: Supramodal vs Modality-Specific Mechanisms.

    PubMed

    Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T

    2016-01-01

    Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of different cognitive control operations in visual attention paradigms. However, the synchronization source of these signals, as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Theta amplitude correlated negatively with BOLD activity in cortical regions associated with the default mode network (DMN) and positively in ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in fronto-parietal areas that was not observed in the auditory condition. During auditory attention, a positive correlation of theta and BOLD activity was observed in auditory cortex and a negative correlation in visual cortex. The data support a supramodal interaction of theta activity with DMN function, and modality-associated processes within fronto-parietal networks related to top-down, theta-related cognitive control in cross-modal visual attention. In sensory cortices, by contrast, theta activity has opposing effects during cross-modal auditory attention.

  13. Evolution of crossmodal reorganization of the voice area in cochlear-implanted deaf patients.

    PubMed

    Rouger, Julien; Lagleyre, Sébastien; Démonet, Jean-François; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal

    2012-08-01

    Psychophysical and neuroimaging studies in both animal and human subjects have clearly demonstrated that cortical plasticity following sensory deprivation leads to a brain functional reorganization that favors the spared modalities. In postlingually deaf patients, the use of a cochlear implant (CI) allows a recovery of the auditory function, which will probably counteract the cortical crossmodal reorganization induced by hearing loss. To study the dynamics of such reversed crossmodal plasticity, we designed a longitudinal neuroimaging study involving the follow-up of 10 postlingually deaf adult CI users engaged in a visual speechreading task. While speechreading activates Broca's area in normally hearing subjects (NHS), the activity level elicited in this region in CI patients is abnormally low and increases progressively with post-implantation time. Furthermore, speechreading in CI patients induces abnormal crossmodal activations in right anterior regions of the superior temporal cortex normally devoted to processing human voice stimuli (temporal voice-sensitive areas-TVA). These abnormal activity levels diminish with post-implantation time and tend towards the levels observed in NHS. First, our study revealed that the neuroplasticity after cochlear implantation involves not only auditory but also visual and audiovisual speech processing networks. Second, our results suggest that during deafness, the functional links between cortical regions specialized in face and voice processing are reallocated to support speech-related visual processing through cross-modal reorganization. Such reorganization allows a more efficient audiovisual integration of speech after cochlear implantation. These compensatory sensory strategies are later completed by the progressive restoration of the visuo-audio-motor speech processing loop, including Broca's area. Copyright © 2011 Wiley Periodicals, Inc.

  14. Clinical judgment research on economic topics: Role of congruence of tasks in clinical practice.

    PubMed

    Huttin, Christine C

    2017-01-01

    This paper discusses what can ensure the performance of judgment studies with an information design that integrates the economics of medical systems, in the context of the digitalization of healthcare. It is part of a series of 5 methodological papers on statistical procedures and problems in implementing judgment research designs and decision models, especially to address cost of care and ways to measure conversations about cost of care between physicians and patients, using unstructured data such as economic narratives to complement billing and financial information (e.g. cost cognitive cues in conjoint or reversed conjoint designs). The paper discusses how congruence of tasks can increase the reliability of data. It uses results from two meta-reviews of judgment studies in different fields of application: psychology, business, medical sciences and education. It compares tests for congruence in judgment studies with efficiency tests in econometric studies.

  15. Comparison of Congruence Judgment and Auditory Localization Tasks for Assessing the Spatial Limits of Visual Capture

    PubMed Central

    Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.

    2016-01-01

    Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small, visual capture is likely to occur, and when disparity is large, it is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur was narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
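    The abstract does not spell out the Bayesian inference model, so the following is only a minimal causal-inference style sketch, assuming Gaussian sensory noise on each cue, a prior probability that the two targets share a source, and a uniform alternative over the display span (all parameter names and values are illustrative assumptions):

```python
import numpy as np


def p_common_source(disparity_deg: float,
                    sigma_a: float = 8.0,   # assumed auditory localization noise (deg)
                    sigma_v: float = 2.0,   # assumed visual localization noise (deg)
                    p_prior: float = 0.5,   # assumed prior that the cues share a source
                    span_deg: float = 60.0  # assumed range of locations under independence
                    ) -> float:
    """Posterior probability that the auditory and visual targets shared a location.

    Under a common source the disparity is Gaussian with variance
    sigma_a^2 + sigma_v^2; under independent sources it is treated as uniform
    over the display span. Visual capture is likely when this posterior is high,
    and a task could shift p_prior, as the abstract suggests subjects did.
    """
    sigma = np.hypot(sigma_a, sigma_v)
    like_common = np.exp(-0.5 * (disparity_deg / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    like_separate = 1.0 / span_deg
    numerator = p_prior * like_common
    return float(numerator / (numerator + (1 - p_prior) * like_separate))


# Small disparities favor a common source; large disparities do not.
print(p_common_source(2.0), p_common_source(25.0))
```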

  16. Cooperative processing in primary somatosensory cortex and posterior parietal cortex during tactile working memory.

    PubMed

    Ku, Yixuan; Zhao, Di; Bodner, Mark; Zhou, Yong-Di

    2015-08-01

    In the present study, causal roles of both the primary somatosensory cortex (SI) and the posterior parietal cortex (PPC) were investigated in a tactile unimodal working memory (WM) task. Individual magnetic resonance imaging-based single-pulse transcranial magnetic stimulation (spTMS) was applied, respectively, to the left SI (ipsilateral to tactile stimuli), right SI (contralateral to tactile stimuli) and right PPC (contralateral to tactile stimuli), while human participants were performing a tactile-tactile unimodal delayed matching-to-sample task. The time points of spTMS were 300, 600 and 900 ms after the onset of the tactile sample stimulus (duration: 200 ms). Compared with ipsilateral SI, application of spTMS over either contralateral SI or contralateral PPC at those time points significantly impaired the accuracy of task performance. Meanwhile, the deterioration in accuracy did not vary with the stimulating time points. Together, these results indicate that the tactile information is processed cooperatively by SI and PPC in the same hemisphere, starting from the early delay of the tactile unimodal WM task. This pattern of processing of tactile information is different from the pattern in tactile-visual cross-modal WM. In a tactile-visual cross-modal WM task, SI and PPC contribute to the processing sequentially, suggesting a process of sensory information transfer during the early delay between modalities. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  17. Response Activation in Overlapping Tasks and the Response-Selection Bottleneck

    ERIC Educational Resources Information Center

    Schubert, Torsten; Fischer, Rico; Stelzel, Christine

    2008-01-01

    The authors investigated the impact of response activation on dual-task performance by presenting a subliminal prime before the stimulus in Task 2 (S2) of a psychological refractory period (PRP) task. Congruence between prime and S2 modulated the reaction times in Task 2 at short stimulus onset asynchrony despite a PRP effect. This Task 2…

  18. The Dynamic Multisensory Engram: Neural Circuitry Underlying Crossmodal Object Recognition in Rats Changes with the Nature of Object Experience.

    PubMed

    Jacklin, Derek L; Cloke, Jacob M; Potvin, Alphonse; Garrett, Inara; Winters, Boyer D

    2016-01-27

    Rats, humans, and monkeys demonstrate robust crossmodal object recognition (CMOR), identifying objects across sensory modalities. We have shown that rats' performance of a spontaneous tactile-to-visual CMOR task requires functional integration of perirhinal (PRh) and posterior parietal (PPC) cortices, which seemingly provide visual and tactile object feature processing, respectively. However, research with primates has suggested that PRh is sufficient for multisensory object representation. We tested this hypothesis in rats using a modification of the CMOR task in which multimodal preexposure to the to-be-remembered objects significantly facilitates performance. In the original CMOR task, with no preexposure, reversible lesions of PRh or PPC produced patterns of impairment consistent with modality-specific contributions. Conversely, in the CMOR task with preexposure, PPC lesions had no effect, whereas PRh involvement was robust, proving necessary for phases of the task that did not require PRh activity when rats did not have preexposure; this pattern was supported by results from c-fos imaging. We suggest that multimodal preexposure alters the circuitry responsible for object recognition, in this case obviating the need for PPC contributions and expanding PRh involvement, consistent with the polymodal nature of PRh connections and results from primates indicating a key role for PRh in multisensory object representation. These findings have significant implications for our understanding of multisensory information processing, suggesting that the nature of an individual's past experience with an object strongly determines the brain circuitry involved in representing that object's multisensory features in memory. The ability to integrate information from multiple sensory modalities is crucial to the survival of organisms living in complex environments. Appropriate responses to behaviorally relevant objects are informed by integration of multisensory object features. We used crossmodal object recognition tasks in rats to study the neurobiological basis of multisensory object representation. When rats had no prior exposure to the to-be-remembered objects, the spontaneous ability to recognize objects across sensory modalities relied on functional interaction between multiple cortical regions. However, prior multisensory exploration of the task-relevant objects remapped cortical contributions, negating the involvement of one region and significantly expanding the role of another. This finding emphasizes the dynamic nature of cortical representation of objects in relation to past experience. Copyright © 2016 the authors 0270-6474/16/361273-17$15.00/0.

  19. Plasticity of attentional functions in older adults after non-action video game training: a randomized controlled trial.

    PubMed

    Mayas, Julia; Parmentier, Fabrice B R; Andrés, Pilar; Ballesteros, Soledad

    2014-01-01

    A major goal of recent research in aging has been to examine cognitive plasticity in older adults and its capacity to counteract cognitive decline. The aim of the present study was to investigate whether older adults could benefit from brain training with video games in a cross-modal oddball task designed to assess distraction and alertness. Twenty-seven healthy older adults participated in the study (15 in the experimental group, 12 in the control group). The experimental group received 20 1-hr video game training sessions using a commercially available brain-training package (Lumosity) involving problem solving, mental calculation, working memory and attention tasks. The control group did not practice this package and, instead, attended meetings with the other members of the study several times over the course of the study. Both groups were evaluated before and after the intervention using a cross-modal oddball task measuring alertness and distraction. The results showed a significant reduction of distraction and an increase of alertness in the experimental group and no variation in the control group. These results suggest neurocognitive plasticity in the old human brain as training enhanced cognitive performance on attentional functions. ClinicalTrials.gov NCT02007616.

  20. Visual and cross-modal cues increase the identification of overlapping visual stimuli in Balint's syndrome.

    PubMed

    D'Imperio, Daniela; Scandola, Michele; Gobbetto, Valeria; Bulgarelli, Cristina; Salgarello, Matteo; Avesani, Renato; Moro, Valentina

    2017-10-01

    Cross-modal interactions improve the processing of external stimuli, particularly when an isolated sensory modality is impaired. When information from different modalities is integrated, object recognition is facilitated, probably as a result of bottom-up and top-down processes. The aim of this study was to investigate the potential effects of cross-modal stimulation in a case of simultanagnosia. We report a detailed analysis of clinical symptoms and an 18F-fluorodeoxyglucose (FDG) brain positron emission tomography/computed tomography (PET/CT) study of a patient affected by Balint's syndrome, a rare and invasive visual-spatial disorder following bilateral parieto-occipital lesions. An experiment was conducted to investigate the effects of visual and nonvisual cues on performance in tasks involving the recognition of overlapping pictures. Four modalities of sensory cues were used: visual, tactile, olfactory, and auditory. Data from neuropsychological tests showed the presence of ocular apraxia, optic ataxia, and simultanagnosia. The results of the experiment indicate a positive effect of the cues on the recognition of overlapping pictures, not only in the identification of the congruent valid-cued stimulus (target) but also in the identification of the other, noncued stimuli. All the sensory modalities analyzed (except the auditory stimulus) were efficacious in terms of increasing visual recognition. Cross-modal integration improved the patient's ability to recognize overlapping figures. However, while in the visual unimodal modality both bottom-up (priming, familiarity effect, disengagement of attention) and top-down processes (mental representation and short-term memory, the endogenous orientation of attention) are involved, in the cross-modal integration it is semantic representations that mainly activate visual recognition processes. These results are potentially useful for the design of rehabilitation training for attentional and visual-perceptual deficits.

  1. THE REELIN RECEPTORS VLDLR AND ApoER2 REGULATE SENSORIMOTOR GATING IN MICE

    PubMed Central

    Barr, Alasdair M.; Fish, Kenneth N.; Markou, Athina

    2007-01-01

    Postmortem brain loss of reelin is noted in schizophrenia patients. Accordingly, heterozygous reeler mutant mice have been proposed as a putative model of this disorder. Little is known, however, about the involvement of the two receptors for reelin, Very-Low-Density Lipoprotein Receptor (VLDLR) and Apolipoprotein E Receptor 2 (ApoER2), in pre-cognitive processes of relevance to deficits seen in schizophrenia. Thus, we evaluated sensorimotor gating in mutant mice heterozygous or homozygous for the two reelin receptors. Mutant mice lacking one of these reelin receptors were tested for prepulse inhibition (PPI) of the acoustic startle reflex prior to and following puberty, and on a crossmodal PPI task, involving the presentation of acoustic and tactile stimuli. Furthermore, because schizophrenia patients show increased sensitivity to N-methyl-D-aspartate (NMDA) receptor blockade, we assessed the sensitivity of these mice to the PPI-disruptive effects of the NMDA receptor antagonist phencyclidine. The results demonstrated that acoustic PPI did not differ between mutant and wildtype mice. However, VLDLR homozygous mice displayed significant deficits in crossmodal PPI, while ApoER2 heterozygous and homozygous mice displayed significantly increased crossmodal PPI. Both ApoER2 and VLDLR heterozygous and homozygous mice exhibited greater sensitivity to the PPI-disruptive effects of phencyclidine than wildtype mice. These results indicate that partial or complete loss of either one of the reelin receptors results in a complex pattern of alterations in PPI function that include alterations in crossmodal, but not acoustic, PPI and increased sensitivity to NMDA receptor blockade. Thus, reelin receptor function appears to be critically involved in crossmodal PPI and the modulation of the PPI response by NMDA receptors. These findings have relevance to a range of neuropsychiatric disorders that involve sensorimotor gating deficits, including schizophrenia. PMID:17261317

  2. Is Phonological Encoding in Naming Influenced by Literacy?

    ERIC Educational Resources Information Center

    Ventura, Paulo; Kolinsky, Regine; Querido, Jose-Luis; Fernandes, Sandra; Morais, Jose

    2007-01-01

    We examined phonological priming in illiterate adults, using a cross-modal picture-word interference task. Participants named pictures while hearing distractor words at different Stimulus Onset Asynchronies (SOAs). Ex-illiterates and university students were also tested. We specifically assessed the ability of the three populations to use…

  3. Phonological encoding in speech-sound disorder: evidence from a cross-modal priming experiment.

    PubMed

    Munson, Benjamin; Krause, Miriam O P

    2017-05-01

    Psycholinguistic models of language production provide a framework for determining the locus of language breakdown that leads to speech-sound disorder (SSD) in children. The aim was to examine whether children with SSD differ from their age-matched peers with typical speech and language development (TD) in the ability to phonologically encode lexical items that have been accessed from memory. Thirty-six children (18 with TD, 18 with SSD) viewed pictures while listening to interfering words (IW) or a non-linguistic auditory stimulus presented over headphones either 150 ms before, concurrent with, or 150 ms after picture presentation. The phonological similarity of the IW and the pictures' names varied. Picture-naming latency, accuracy and duration were tallied. All children named pictures more quickly in the presence of an IW identical to the picture's name than in the other conditions. At the +150 ms stimulus onset asynchrony, pictures were named more quickly when the IW shared phonemes with the picture's name than when they were phonologically unrelated to the picture's name. The size of this effect was similar for children with SSD and children with TD. Variation in the magnitude of inhibition and facilitation on cross-modal priming tasks across children was more strongly affected by the size of the expressive and receptive lexicons than by speech-production accuracy. Results suggest that SSD is not associated with reduced phonological encoding ability, at least as it is reflected by cross-modal naming tasks. © 2016 Royal College of Speech and Language Therapists.

  4. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a role of modulation in the audiovisual integration. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  5. Dorsolateral prefrontal cortex bridges bilateral primary somatosensory cortices during cross-modal working memory.

    PubMed

    Zhao, Di; Ku, Yixuan

    2018-05-01

    Neural activity in the dorsolateral prefrontal cortex (DLPFC) has been suggested to integrate information from distinct sensory areas. However, how the DLPFC interacts with the bilateral primary somatosensory cortices (SIs) in tactile-visual cross-modal working memory has not yet been established. In the present study, we applied single-pulse transcranial magnetic stimulation (sp-TMS) over the contralateral DLPFC and bilateral SIs of human participants at various time points, while they performed a tactile-visual delayed matching-to-sample task with a 2-second delay. sp-TMS over the contralateral DLPFC or the contralateral SI at either a sensory encoding stage [i.e. 100 ms after the onset of a vibrotactile sample stimulus (200-ms duration)] or an early maintenance stage (i.e. 300 ms after the onset) significantly impaired the accuracy of task performance; sp-TMS over the contralateral DLPFC or the ipsilateral SI at a late maintenance stage (1600 ms and 1900 ms) also significantly disrupted the performance. Furthermore, at 300 ms after the onset of the vibrotactile sample stimulus, there was a significant correlation between the deteriorating effects of sp-TMS over the contralateral SI and the contralateral DLPFC. These results imply that the DLPFC and the bilateral SIs play causal roles at distinct stages during cross-modal working memory: the contralateral DLPFC communicates with the contralateral SI in the early delay and cooperates with the ipsilateral SI in the late delay. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Re-Examining the Automaticity and Directionality of the Activation of the Spatial-Valence "Good is Up" Metaphoric Association

    PubMed Central

    Huang, Yanli; Tse, Chi-Shing

    2015-01-01

    According to the Conceptual Metaphor Theory, people understand abstract concepts depending on the activation of more concrete concepts, but not vice versa. The present research aims to investigate the role of directionality and automaticity regarding the activation of the conceptual metaphor “good is up”. Experiment 1 tested the automaticity of the spatial-to-valence metaphoric congruency effect by having participants judge the valence of a positive or negative word that appeared either at the top or at the bottom of the screen. They performed the task concurrently with a 6-digit verbal rehearsal task in the working-memory-load (WML) blocks and without this task in the non-WML blocks. The spatial-to-valence metaphoric congruency effect occurred for the positive words in the non-WML blocks (i.e., positive words are judged more quickly when they appeared at the top than at the bottom of the screen), but not in the WML blocks, suggesting that this metaphoric association might not be activated automatically. Experiments 2-6 investigated the valence-to-spatial metaphoric association and its automaticity. Participants processed a positive or negative prime, which appeared at the center of the screen, and then identified a letter (p/q) that subsequently appeared at the top or bottom of the screen. The valence-to-spatial metaphoric congruency effect did not occur in the WML (6-digit verbal rehearsal) or non-WML blocks, whether response modality to the prime was key-press or vocal, or whether the prime was a word or a picture. The effect only unexpectedly occurred when the task was simultaneously performed with a 4-dot-position visuospatial rehearsal task. Nevertheless, the data collapsed across multiple experiments showed a null valence-to-spatial metaphoric congruency effect, suggesting the absence of the valence-to-spatial metaphoric association in general. The implications of the current findings for the Conceptual Metaphor Theory and its alternatives are discussed. PMID:25867748

  7. Does incongruence of lexicosemantic and prosodic information cause discernible cognitive conflict?

    PubMed

    Mitchell, Rachel L C

    2006-12-01

    We are often required to interpret discordant emotional signals. Whereas equivalent cognitive paradigms cause noticeable conflict via their behavioral and psychophysiological effects, the same may not necessarily be true for discordant emotions. Skin conductance responses (SCRs) and heart rates (HRs) were measured during a classic Stroop task and one in which the emotions conveyed by lexicosemantic content and prosody were congruent or incongruent. The participants' task was to identify the emotion conveyed by lexicosemantic content or prosody. No relationship was observed between HR and congruence. SCR was higher during incongruent than during congruent conditions of the experimental task (as well as in the classic Stroop task), but no difference in SCR was observed in a comparison between congruence effects during lexicosemantic emotion identification and those during prosodic emotion identification. It is concluded that incongruence between lexicosemantic and prosodic emotion does cause notable cognitive conflict. Functional neuroanatomic implications are discussed.

  8. How holistic processing of faces relates to cognitive control and intelligence.

    PubMed

    Gauthier, Isabel; Chua, Kao-Wei; Richler, Jennifer J

    2018-04-16

    The Vanderbilt Holistic Processing Test for faces (VHPT-F) is the first standard test designed to measure individual differences in holistic processing. The test measures failures of selective attention to face parts through congruency effects, an operational definition of holistic processing. However, this conception of holistic processing has been challenged by the suggestion that it may tap into the same selective attention or cognitive control mechanisms that yield congruency effects in Stroop and Flanker paradigms. Here, we report data from 130 subjects on the VHPT-F, several versions of Stroop and Flanker tasks, as well as fluid IQ. Results suggested a small degree of shared variance in Stroop and Flanker congruency effects, which did not relate to congruency effects on the VHPT-F. Variability on the VHPT-F was also not correlated with Fluid IQ. In sum, we find no evidence that holistic face processing as measured by congruency in the VHPT-F is accounted for by domain-general control mechanisms.

  9. Parent and Adolescent Perceptions of Adolescent Career Development Tasks and Vocational Identity

    ERIC Educational Resources Information Center

    Rogers, Mary E.; Creed, Peter A.; Praskova, Anna

    2018-01-01

    We surveyed Australian adolescents and parents to test differences and congruence in perceptions of adolescent career development tasks (career planning, exploration, certainty, and world-of-work knowledge) and vocational identity. We found that, for adolescents (N = 415), career development tasks (not career exploration) explained 48% of the…

  10. Correlates of stimulus-response congruence in the posterior parietal cortex.

    PubMed

    Stoet, Gijsbert; Snyder, Lawrence H

    2007-02-01

    Primate behavior is flexible: The response to a stimulus often depends on the task in which it occurs. Here we study how single neurons in the posterior parietal cortex (PPC) respond to stimuli which are associated with different responses in different tasks. Two rhesus monkeys performed a task-switching paradigm. Each trial started with a task cue instructing which of two tasks to perform, followed by a stimulus requiring a left or right button press. For half the stimuli, the associated responses were different in the two tasks, meaning that the task context was necessary to disambiguate the incongruent stimuli. The other half of stimuli required the same response irrespective of task context (congruent). Using this paradigm, we previously showed that behavioral responses to incongruent stimuli are significantly slower than to congruent stimuli. We now demonstrate a neural correlate in the PPC of the additional processing time required for incongruent stimuli. Furthermore, we previously found that 29% of parietal neurons encode the task being performed (task-selective cells). We now report differences in neuronal timing related to congruency in task-selective versus task nonselective cells. These differences in timing suggest that the activity in task nonselective cells reflects a motor command, whereas activity in task-selective cells reflects a decision process.

  11. Attentional load attenuates synaesthetic priming effects in grapheme-colour synaesthesia.

    PubMed

    Mattingley, Jason B; Payne, Jonathan M; Rich, Anina N

    2006-02-01

    One of the hallmarks of grapheme-colour synaesthesia is that colours induced by letters, digits and words tend to interfere with the identification of coloured targets when the two colours are different, i.e., when they are incongruent. In a previous investigation (Mattingley et al., 2001) we found that this synaesthetic congruency effect occurs when an achromatic-letter prime precedes a coloured target, but that the effect disappears when the letter is pattern masked to prevent conscious recognition of its identity. Here we investigated whether selective attention modulates the synaesthetic congruency effect in a letter-priming task. Fourteen grapheme-colour synaesthetes and 14 matched, non-synaesthetic controls participated. The amount of selective attention available to process the letter-prime was limited by having participants perform a secondary visual task that involved discriminating pairs of gaps in adjacent limbs of a diamond surrounding the prime. In separate blocks of trials the attentional load of the secondary task was systematically varied to yield 'low load' and 'high load' conditions. We found a significant congruency effect for synaesthetes, but not for controls, when they performed a secondary attention-demanding task during presentation of the letter prime. Crucially, however, the magnitude of this priming was significantly reduced under conditions of high-load relative to low-load, indicating that attention plays an important role in modulating synaesthesia. Our findings help to explain the observation that synaesthetic colour experiences are often weak or absent during attention-demanding tasks.

  12. The effect of visual parameters on neural activation during nonsymbolic number comparison and its relation to math competency.

    PubMed

    Wilkey, Eric D; Barone, Jordan C; Mazzocco, Michèle M M; Vogel, Stephan E; Price, Gavin R

    2017-10-01

    Nonsymbolic numerical comparison task performance (whereby a participant judges which of two groups of objects is numerically larger) is thought to index the efficiency of neural systems supporting numerical magnitude perception, and performance on such tasks has been related to individual differences in math competency. However, a growing body of research suggests task performance is heavily influenced by visual parameters of the stimuli (e.g. surface area and dot size of object sets) such that the correlation with math is driven by performance on trials in which number is incongruent with visual cues. Almost nothing is currently known about whether the neural correlates of nonsymbolic magnitude comparison are also affected by visual congruency. To investigate this issue, we used functional magnetic resonance imaging (fMRI) to analyze neural activity during a nonsymbolic comparison task as a function of visual congruency in a sample of typically developing high school students (n = 36). Further, we investigated the relation to math competency as measured by the preliminary scholastic aptitude test (PSAT) in 10th grade. Our results indicate that neural activity was modulated by the ratio of the dot sets being compared in brain regions previously shown to exhibit an effect of ratio (i.e. left anterior cingulate, left precentral gyrus, left intraparietal sulcus, and right superior parietal lobe) when calculated from the average of congruent and incongruent trials, as it is in most studies, and that the effect of ratio within those regions did not differ as a function of congruency condition. However, there were significant differences in other regions in overall task-related activation, as opposed to the neural ratio effect, when congruent and incongruent conditions were contrasted at the whole-brain level. Math competency negatively correlated with ratio-dependent neural response in the left insula across congruency conditions and showed distinct correlations when split across conditions. There was a positive correlation with math competency in the right supramarginal gyrus during congruent trials and a negative correlation in the left angular gyrus during incongruent trials. Together, these findings support the idea that performance on the nonsymbolic comparison task relates to math competency and that ratio-dependent neural activity does not differ by congruency condition. With regard to math competency, congruent and incongruent trials showed distinct relations between math competency and individual differences in ratio-dependent neural activity. Copyright © 2017 Elsevier Inc. All rights reserved.
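    For readers unfamiliar with the congruency manipulation in such dot-comparison tasks, a trial is conventionally labeled congruent when the numerically larger set also carries the larger visual magnitude. The sketch below illustrates that convention (the field names and the surface-area criterion are assumptions, not the study's stimulus code):

```python
def label_trial(n_left: int, n_right: int, area_left: float, area_right: float) -> dict:
    """Label a dot-comparison trial by numerical ratio and visual congruency.

    A trial is labeled 'congruent' when the side with more dots also has the
    larger cumulative surface area, and 'incongruent' otherwise; this is the
    usual convention, assumed here rather than taken from the article.
    """
    ratio = min(n_left, n_right) / max(n_left, n_right)
    larger_number_side = 'left' if n_left > n_right else 'right'
    larger_area_side = 'left' if area_left > area_right else 'right'
    congruency = 'congruent' if larger_number_side == larger_area_side else 'incongruent'
    return {'ratio': round(ratio, 2), 'congruency': congruency}


# Example: 10 vs. 8 dots, but the 8-dot side covers more total area -> incongruent.
print(label_trial(10, 8, area_left=120.0, area_right=150.0))
```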

  13. An investigation of the time course of category congruence and priming distance effects in number classification tasks.

    PubMed

    Perry, Jason R; Lupker, Stephen J

    2012-09-01

    The issue investigated in the present research is the nature of the information that is responsible for producing masked priming effects (e.g., semantic information or stimulus-response [S-R] associations) when responding to number stimuli. This issue was addressed by assessing both the magnitude of the category congruence (priming) effect and the nature of the priming distance effect across trials using single-digit primes and targets. Participants made either magnitude (i.e., whether the number presented was larger or smaller than 5) or identification (i.e., press the left button if the number was either a 1, 2, 3, or 4 or the right button if the number was either a 6, 7, 8, or 9) judgments. The results indicated that, regardless of task instruction, there was a clear priming distance effect and a significantly increasing category congruence effect. These results indicated that both semantic activation and S-R associations play important roles in producing masked priming effects.

  14. Mood-specific effects in the allocation of attention across time.

    PubMed

    Rokke, Paul D; Lystad, Chad M

    2015-01-01

    Participants completed single and dual rapid serial visual presentation (RSVP) tasks. Across five experiments, either the mood of the participant or valence of the target was manipulated to create pairings in which the critical target was either mood congruent or mood noncongruent. When the second target (T2) in an RSVP stream was congruent with the participant's mood, performance was enhanced. This was true for happy and sad moods and in single- and dual-task conditions. In contrast, the effects of congruence varied when the focus was on the first target (T1). When in a sad mood and having attended to a sad T1, detection of a neutral T2 was impaired, resulting in a stronger attentional blink (AB). There was no effect of stimulus-mood congruence for T1 when in a happy mood. It was concluded that mood-congruence is important for stimulus detection, but that sadness uniquely influences post-identification processing when attention is first focused on mood-congruent information.

  15. Alertness and cognitive control: Toward a spatial grouping hypothesis.

    PubMed

    Schneider, Darryl W

    2018-05-01

    A puzzling interaction involving alertness and cognitive control is indicated by the finding of faster performance but larger congruency effects on alert trials (on which alerting cues are presented before the task stimuli) than on no-alert trials in selective attention tasks. In the present study, the author conducted four experiments to test hypotheses about the interaction. Manipulation of stimulus spacing revealed a difference in congruency effects between alert and no-alert trials for narrowly spaced stimuli but not for widely spaced stimuli, inconsistent with the hypothesis that increased alertness is associated with more diffuse attention. Manipulation of color grouping revealed similar differences in congruency effects between alert and no-alert trials for same-color and different-color groupings of targets and distractors, inconsistent with the general hypothesis that increased alertness is associated with more perceptual grouping. To explain the results, the author proposes that increased alertness is associated specifically with more spatial grouping of stimuli, possibly by modulating the threshold for parsing stimulus displays into distinct objects.

  16. Evaluative stimulus (in)congruency impacts performance in an unrelated task: evidence for a resource-based account of evaluative priming.

    PubMed

    Gast, Anne; Werner, Benedikt; Heitmann, Christina; Spruyt, Adriaan; Rothermund, Klaus

    2014-01-01

    In two experiments, we assessed evaluative priming effects in a task that was unrelated to the congruent or incongruent stimulus pairs. In each trial, participants saw two valent (positive or negative) pictures that formed evaluatively congruent or incongruent stimulus pairs and a letter that was superimposed on the second picture. Different from typical evaluative priming studies, participants were not required to respond to the second of the valent stimuli, but asked to categorize the letter that was superimposed on the second picture. We assessed the impact of the evaluative (in)congruency of the two pictures on the performance in responding to the letter. In addition, we manipulated attention to the evaluative dimension by asking participants in one experimental group to respond to the valence of the pictures on a subset of trials (evaluative task condition). In both experiments, we found evaluative priming effects in letter categorization responses: Participants categorized the letter faster (and sometimes more correctly) in trials with congruent picture-pairs. These effects were present only in the evaluative task condition. These findings can be explained with different resource-based accounts of evaluative priming and the additional assumption that attention to valence is necessary for evaluative congruency to affect processing resources. According to resource-based accounts valence-incongruent trials require more cognitive resources than valence-congruent trials (e.g., Hermans, Van den Broeck, & Eelen, 1998).

  17. Differential occipital responses in early- and late-blind individuals during a sound-source discrimination task.

    PubMed

    Voss, Patrice; Gougoux, Frederic; Zatorre, Robert J; Lassonde, Maryse; Lepore, Franco

    2008-04-01

    Blind individuals do not necessarily receive more auditory stimulation than sighted individuals. However, to interact effectively with their environment, they have to rely on non-visual cues (in particular auditory) to a greater extent. Often benefiting from cerebral reorganization, they not only learn to rely more on such cues but also may process them better and, as a result, demonstrate exceptional abilities in auditory spatial tasks. Here we examine the effects of blindness on brain activity, using positron emission tomography (PET), during a sound-source discrimination task (SSDT) in both early- and late-onset blind individuals. This should not only provide an answer to the question of whether the blind manifest changes in brain activity but also allow a direct comparison of the two subgroups performing an auditory spatial task. The task was presented under two listening conditions: one binaural and one monaural. The binaural task did not show any significant behavioural differences between groups, but it demonstrated striate and extrastriate activation in the early-blind groups. A subgroup of early-blind individuals, on the other hand, performed significantly better than all the other groups during the monaural task, and these enhanced skills were correlated with elevated activity within the left dorsal extrastriate cortex. Surprisingly, activation of the right ventral visual pathway, which was significantly activated in the late-blind individuals during the monaural task, was negatively correlated with performance. This suggests the possibility that not all cross-modal plasticity is beneficial. Overall, our results not only support previous findings showing that occipital cortex of early-blind individuals is functionally engaged in spatial auditory processing but also shed light on the impact the age of onset of blindness can have on the ensuing cross-modal plasticity.

  18. Automatic selective attention as a function of sensory modality in aging.

    PubMed

    Guerreiro, Maria J S; Adam, Jos J; Van Gerven, Pascal W M

    2012-03-01

    It was recently hypothesized that age-related differences in selective attention depend on sensory modality (Guerreiro, M. J. S., Murphy, D. R., & Van Gerven, P. W. M. (2010). The role of sensory modality in age-related distraction: A critical review and a renewed view. Psychological Bulletin, 136, 975-1022. doi:10.1037/a0020731). So far, this hypothesis has not been tested in automatic selective attention. The current study addressed this issue by investigating age-related differences in automatic spatial cueing effects (i.e., facilitation and inhibition of return [IOR]) across sensory modalities. Thirty younger (mean age = 22.4 years) and 25 older adults (mean age = 68.8 years) performed 4 left-right target localization tasks, involving all combinations of visual and auditory cues and targets. We used stimulus onset asynchronies (SOAs) of 100, 500, 1,000, and 1,500 ms between cue and target. The results showed facilitation (shorter reaction times with valid relative to invalid cues at shorter SOAs) in the unimodal auditory and in both cross-modal tasks but not in the unimodal visual task. In contrast, there was IOR (longer reaction times with valid relative to invalid cues at longer SOAs) in both unimodal tasks but not in either of the cross-modal tasks. Most important, these spatial cueing effects were independent of age. The results suggest that the modality hypothesis of age-related differences in selective attention does not extend into the realm of automatic selective attention.

  19. Feature Integration and Task Switching: Diminished Switch Costs after Controlling for Stimulus, Response, and Cue Repetitions

    PubMed Central

    Schmidt, James R.; Liefooghe, Baptist

    2016-01-01

    This report presents data from two versions of the task switching procedure in which the separate influence of stimulus repetitions, response key repetitions, conceptual response repetitions, cue repetitions, task repetitions, and congruency are considered. Experiment 1 used a simple alternating runs procedure with parity judgments of digits and consonant/vowel decisions of letters as the two tasks. Results revealed sizable effects of stimulus and response repetitions, and controlling for these effects reduced the switch cost. Experiment 2 was a cued version of the task switch paradigm with parity and magnitude judgments of digits as the two tasks. Results again revealed large effects of stimulus and response repetitions, in addition to cue repetition effects. Controlling for these effects again reduced the switch cost. Congruency did not interact with our novel “unbiased” measure of switch costs. We discuss how the task switch paradigm might be thought of as a more complex version of the feature integration paradigm and propose an episodic learning account of the effect. We further consider to what extent appeals to higher-order control processes might be unnecessary and propose that controls for feature integration biases should be standard practice in task switching experiments. PMID:26964102

  20. Severe Cross-Modal Object Recognition Deficits in Rats Treated Sub-Chronically with NMDA Receptor Antagonists are Reversed by Systemic Nicotine: Implications for Abnormal Multisensory Integration in Schizophrenia

    PubMed Central

    Jacklin, Derek L; Goel, Amit; Clementino, Kyle J; Hall, Alexander W M; Talpos, John C; Winters, Boyer D

    2012-01-01

    Schizophrenia is a complex and debilitating disorder, characterized by positive, negative, and cognitive symptoms. Among the cognitive deficits observed in patients with schizophrenia, recent work has indicated abnormalities in multisensory integration, a process that is important for the formation of comprehensive environmental percepts and for the appropriate guidance of behavior. Very little is known about the neural bases of such multisensory integration deficits, partly because of the lack of viable behavioral tasks to assess this process in animal models. In this study, we used our recently developed rodent cross-modal object recognition (CMOR) task to investigate multisensory integration functions in rats treated sub-chronically with one of two N-methyl-D-aspartate receptor (NMDAR) antagonists, MK-801, or ketamine; such treatment is known to produce schizophrenia-like symptoms. Rats treated with the NMDAR antagonists were impaired on the standard spontaneous object recognition (SOR) task, unimodal (tactile or visual only) versions of SOR, and the CMOR task with intermediate to long retention delays between acquisition and testing phases, but they displayed a selective CMOR task deficit when mnemonic demand was minimized. This selective impairment in multisensory information processing was dose-dependently reversed by acute systemic administration of nicotine. These findings suggest that persistent NMDAR hypofunction may contribute to the multisensory integration deficits observed in patients with schizophrenia and highlight the valuable potential of the CMOR task to facilitate further systematic investigation of the neural bases of, and potential treatments for, this hitherto overlooked aspect of cognitive dysfunction in schizophrenia. PMID:22669170

  1. Object discrimination using optimized multi-frequency auditory cross-modal haptic feedback.

    PubMed

    Gibson, Alison; Artemiadis, Panagiotis

    2014-01-01

    As the field of brain-machine interfaces and neuro-prosthetics continues to grow, there is a high need for sensor and actuation mechanisms that can provide haptic feedback to the user. Current technologies employ expensive, invasive and often inefficient force feedback methods, resulting in an unrealistic solution for individuals who rely on these devices. This paper responds through the development, integration and analysis of a novel feedback architecture where haptic information during the neural control of a prosthetic hand is perceived through multi-frequency auditory signals. Through representing force magnitude with volume and force location with frequency, the feedback architecture can translate the haptic experiences of a robotic end effector into the alternative sensory modality of sound. Previous research with the proposed cross-modal feedback method confirmed its learnability, so the current work aimed to investigate which frequency map (i.e. frequency-specific locations on the hand) is optimal in helping users distinguish between hand-held objects and tasks associated with them. After short use with the cross-modal feedback during the electromyographic (EMG) control of a prosthetic hand, testing results show that users are able to use audial feedback alone to discriminate between everyday objects. While users showed adaptation to three different frequency maps, the simplest map containing only two frequencies was found to be the most useful in discriminating between objects. This outcome provides support for the feasibility and practicality of the cross-modal feedback method during the neural control of prosthetics.
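    The mapping described above (force magnitude to volume, contact location to frequency) can be illustrated with a simple lookup plus scaling. The finger-to-frequency table and full-scale force below are hypothetical values chosen for illustration, not the parameters of the architecture in the article:

```python
# Hypothetical mapping from a prosthetic fingertip reading to an auditory cue.
FINGER_FREQ_HZ = {  # assumed frequency map: one tone per contact site
    'thumb': 262.0, 'index': 330.0, 'middle': 392.0, 'ring': 440.0, 'little': 523.0,
}
MAX_FORCE_N = 10.0  # assumed full-scale force used for volume normalization


def force_to_audio(finger: str, force_newtons: float) -> tuple[float, float]:
    """Return (frequency_hz, volume) for a contact event.

    Contact location selects the tone's frequency; force magnitude sets the
    tone's volume, clipped to the 0-1 range.
    """
    volume = min(max(force_newtons / MAX_FORCE_N, 0.0), 1.0)
    return FINGER_FREQ_HZ[finger], volume


# Example: a firm grasp sensed at the index fingertip.
print(force_to_audio('index', 6.5))  # -> (330.0, 0.65)
```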

  2. Different levels of learning interact to shape the congruency sequence effect.

    PubMed

    Weissman, Daniel H; Hawks, Zoë W; Egner, Tobias

    2016-04-01

    The congruency effect in distracter interference tasks is often reduced after incongruent relative to congruent trials. Moreover, this congruency sequence effect (CSE) is influenced by learning related to concrete stimulus and response features as well as by learning related to abstract cognitive control processes. There is an ongoing debate, however, over whether interactions between these learning processes are best explained by an episodic retrieval account, an adaptation by binding account, or a cognitive efficiency account of the CSE. To make this distinction, we orthogonally manipulated the expression of these learning processes in a novel factorial design involving the prime-probe arrow task. In Experiment 1, these processes interacted in an over-additive fashion to influence CSE magnitude. In Experiment 2, we replicated this interaction while showing it was not driven by conditional differences in the size of the congruency effect. In Experiment 3, we ruled out an alternative account of this interaction as reflecting conditional differences in learning related to concrete stimulus and response features. These findings support an episodic retrieval account of the CSE, in which repeating a stimulus feature from the previous trial facilitates the retrieval and use of previous-trial control parameters, thereby boosting control in the current trial. In contrast, they do not fit with (a) an adaptation by binding account, in which CSE magnitude is directly related to the size of the congruency effect, or (b) a cognitive efficiency account, in which costly control processes are recruited only when behavioral adjustments cannot be mediated by low-level associative mechanisms. (c) 2016 APA, all rights reserved.

  3. Green love is ugly: emotions elicited by synesthetic grapheme-color perceptions.

    PubMed

    Callejas, Alicia; Acosta, Alberto; Lupiáñez, Juan

    2007-01-05

    Synesthetes who experience grapheme-color synesthesia often report feeling uneasy when dealing with incongruently colored graphemes although no empirical data is available to confirm this phenomenon. We studied this affective reaction related to synesthetic perceptions by means of an evaluation task. We found that the perception of an incorrectly colored word affects the judgments of emotional valence. Furthermore, this effect competed with the word's emotional valence in a categorization task thus supporting the automatic nature of this synesthetically elicited affective reaction. When manipulating word valence and word color-photism congruence, we found that responses were slower (and less accurate) for inconsistent conditions than for consistent conditions. Inconsistent conditions were defined as those where semantics and color-photism congruence did not produce a similar assessment and therefore gave rise to a negative affective reaction (i.e., positive-valence words presented in a color different from the synesthete's photism or negative-valence words presented in the photism's color). We therefore observed a modulation of the congruency effect (i.e., faster reaction times to congruently colored words than incongruently colored words). Although this congruence effect has been taken as an index of the true experience of synesthesia, we observed that it can be reversed when the experimental manipulations turn an incongruently colored word into a consistent stimulus. To our knowledge, this is the first report of an affective reaction elicited by the congruency between the synesthetically induced color of a word and the color in which the word is actually presented. The underlying neural mechanisms that might be involved in this phenomenon are discussed.

  4. Cross-Modal Attention-Switching Is Impaired in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Reed, Phil; McCarthy, Julia

    2012-01-01

    This investigation aimed to determine if children with ASD are impaired in their ability to switch attention between different tasks, and whether performance is further impaired when required to switch across two separate modalities (visual and auditory). Eighteen children with ASD (9-13 years old) were compared with 18 typically-developing…

  5. Got Rhythm...For Better and for Worse. Cross-Modal Effects of Auditory Rhythm on Visual Word Recognition

    ERIC Educational Resources Information Center

    Brochard, Renaud; Tassin, Maxime; Zagar, Daniel

    2013-01-01

    The present research aimed to investigate whether, as previously observed with pictures, background auditory rhythm would also influence visual word recognition. In a lexical decision task, participants were presented with bisyllabic visual words, segmented into two successive groups of letters, while an irrelevant strongly metric auditory…

  6. Fast transfer of crossmodal time interval training.

    PubMed

    Chen, Lihan; Zhou, Xiaolin

    2014-06-01

    Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.

  7. Sliding perspectives: dissociating ownership from self-location during full body illusions in virtual reality

    PubMed Central

    Maselli, Antonella; Slater, Mel

    2014-01-01

    Bodily illusions have been used to study bodily self-consciousness and disentangle its various components, among others the sense of ownership and self-location. Congruent multimodal correlations between the real body and a fake humanoid body can in fact trigger the illusion that the fake body is one's own and/or disrupt the unity between the perceived self-location and the position of the physical body. However, the extent to which changes in self-location entail changes in ownership is still a matter of debate. Here we address this problem with the support of immersive virtual reality. Congruent visuotactile stimulation was delivered to healthy participants to trigger full body illusions from different visual perspectives, each resulting in a different degree of overlap between the real and virtual body. Changes in ownership and self-location were measured with novel self-posture assessment tasks and with an adapted version of the cross-modal congruency task. We found that, despite their strong coupling, self-location and ownership can be selectively altered: self-location was affected when participants had a third-person perspective over the virtual body, while ownership toward the virtual body was experienced only in the conditions with total or partial overlap. Thus, when the virtual body was seen in the far extra-personal space, changes in self-location were not coupled with changes in ownership. When a partial spatial overlap was present, ownership was instead typically experienced together with a boosted change in the perceived self-location. We discuss the results in the context of current knowledge of the multisensory integration mechanisms contributing to self-body perception. We argue that changes in the perceived self-location are associated with the dynamic representation of peripersonal space encoded by visuotactile neurons. On the other hand, our results speak in favor of visuo-proprioceptive neuronal populations being a driving trigger in full body ownership illusions. PMID:25309383

  8. Plasticity of Attentional Functions in Older Adults after Non-Action Video Game Training: A Randomized Controlled Trial

    PubMed Central

    Mayas, Julia; Parmentier, Fabrice B. R.; Andrés, Pilar; Ballesteros, Soledad

    2014-01-01

    A major goal of recent research in aging has been to examine cognitive plasticity in older adults and its capacity to counteract cognitive decline. The aim of the present study was to investigate whether older adults could benefit from brain training with video games in a cross-modal oddball task designed to assess distraction and alertness. Twenty-seven healthy older adults participated in the study (15 in the experimental group, 12 in the control group). The experimental group received 20 1-hr video game training sessions using a commercially available brain-training package (Lumosity) involving problem solving, mental calculation, working memory and attention tasks. The control group did not practice this package and, instead, attended meetings with the other members of the study several times over the course of the study. Both groups were evaluated before and after the intervention using a cross-modal oddball task measuring alertness and distraction. The results showed a significant reduction of distraction and an increase of alertness in the experimental group and no variation in the control group. These results suggest neurocognitive plasticity in the old human brain, as training enhanced cognitive performance on attentional functions. Trial Registration: ClinicalTrials.gov NCT02007616 PMID:24647551

  9. Co-speech iconic gestures and visuo-spatial working memory.

    PubMed

    Wu, Ying Choon; Coulson, Seana

    2014-11-01

    Three experiments tested the role of verbal versus visuo-spatial working memory in the comprehension of co-speech iconic gestures. In Experiment 1, participants viewed congruent discourse primes in which the speaker's gestures matched the information conveyed by his speech, and incongruent ones in which the semantic content of the speaker's gestures diverged from that in his speech. Discourse primes were followed by picture probes that participants judged as being either related or unrelated to the preceding clip. Performance on this picture probe classification task was faster and more accurate after congruent than incongruent discourse primes. The effect of discourse congruency on response times was linearly related to measures of visuo-spatial, but not verbal, working memory capacity, as participants with greater visuo-spatial WM capacity benefited more from congruent gestures. In Experiments 2 and 3, participants performed the same picture probe classification task under conditions of high and low loads on concurrent visuo-spatial (Experiment 2) and verbal (Experiment 3) memory tasks. Effects of discourse congruency and verbal WM load were additive, while effects of discourse congruency and visuo-spatial WM load were interactive. Results suggest that congruent co-speech gestures facilitate multi-modal language comprehension, and indicate an important role for visuo-spatial WM in these speech-gesture integration processes. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Causal Inference for Cross-Modal Action Selection: A Computational Study in a Decision Making Framework.

    PubMed

    Daemi, Mehdi; Harris, Laurence R; Crawford, J Douglas

    2016-01-01

    Animals try to make sense of sensory information from multiple modalities by categorizing them into perceptions of individual or multiple external objects or internal concepts. For example, the brain constructs sensory, spatial representations of the locations of visual and auditory stimuli in the visual and auditory cortices based on retinal and cochlear stimulations. Currently, it is not known how the brain compares the temporal and spatial features of these sensory representations to decide whether they originate from the same or separate sources in space. Here, we propose a computational model of how the brain might solve such a task. We reduce the visual and auditory information to time-varying, finite-dimensional signals. We introduce controlled, leaky integrators as working memory that retains the sensory information for the limited time-course of task implementation. We propose our model within an evidence-based, decision-making framework, where the alternative plan units are saliency maps of space. A spatiotemporal similarity measure, computed directly from the unimodal signals, is suggested as the criterion to infer common or separate causes. We provide simulations that (1) validate our model against behavioral, experimental results in tasks where the participants were asked to report common or separate causes for cross-modal stimuli presented with arbitrary spatial and temporal disparities. (2) Predict the behavior in novel experiments where stimuli have different combinations of spatial, temporal, and reliability features. (3) Illustrate the dynamics of the proposed internal system. These results confirm our spatiotemporal similarity measure as a viable criterion for causal inference, and our decision-making framework as a viable mechanism for target selection, which may be used by the brain in cross-modal situations. Further, we suggest that a similar approach can be extended to other cognitive problems where working memory is a limiting factor, such as target selection among higher numbers of stimuli and selections among other modality combinations.
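
    The sketch below gives a minimal Python illustration of two ingredients of the model described in this abstract: a controlled leaky integrator acting as working memory for a time-varying unimodal signal, and a spatiotemporal similarity criterion computed from the integrated signals to infer a common versus separate cause. The time constant, the use of correlation as the similarity measure, and the decision threshold are illustrative assumptions, not values or choices taken from the paper.

    ```python
    import numpy as np

    def leaky_integrate(signal, dt=0.001, tau=0.2):
        """Accumulate a time-varying input with an exponential leak (time constant tau, in s)."""
        out = np.zeros_like(signal, dtype=float)
        for i in range(1, len(signal)):
            out[i] = out[i - 1] + dt * (-out[i - 1] / tau + signal[i])
        return out

    def infer_cause(visual, auditory, threshold=0.5):
        """Infer a common vs. separate cause from the correlation of the integrated signals."""
        v, a = leaky_integrate(visual), leaky_integrate(auditory)
        similarity = np.corrcoef(v, a)[0, 1]
        return "common" if similarity > threshold else "separate"

    # Example: two brief pulses, either nearly aligned in time or widely offset.
    t = np.arange(0.0, 1.0, 0.001)
    pulse = lambda onset: np.exp(-((t - onset) ** 2) / (2 * 0.01 ** 2))
    print(infer_cause(pulse(0.30), pulse(0.32)))  # small temporal disparity -> "common"
    print(infer_cause(pulse(0.30), pulse(0.70)))  # large temporal disparity -> "separate"
    ```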

  11. Perceptuo-motor compatibility governs multisensory integration in bimanual coordination dynamics.

    PubMed

    Zelic, Gregory; Mottet, Denis; Lagarde, Julien

    2016-02-01

    The brain has the remarkable ability to bind together inputs from different sensory origin into a coherent percept. Behavioral benefits can result from such ability, e.g., a person typically responds faster and more accurately to cross-modal stimuli than to unimodal stimuli. To date, it is, however, largely unknown whether such multisensory benefits, shown for discrete reactive behaviors, generalize to the continuous coordination of movements. The present study addressed multisensory integration from the perspective of bimanual coordination dynamics, where the perceptual activity no longer triggers a single response but continuously guides the motor action. The task consisted in coordinating anti-symmetrically the continuous flexion-extension of the index fingers, while synchronizing with an external pacer. Three different configurations of metronome were tested, for which we examined whether a cross-modal pacing (audio-tactile beats) improved the stability of the coordination in comparison with unimodal pacing condition (auditory or tactile beats). We found a more stable bimanual coordination for cross-modal pacing, but only when the metronome configuration directly matched the anti-symmetric coordination pattern. We conclude that multisensory integration can benefit the continuous coordination of movements; however, this is constrained by whether the perceptual and motor activities match in space and time.

  12. An Event-Related Potential Study of Cross-modal Morphological and Phonological Priming

    PubMed Central

    Justus, Timothy; Yang, Jennifer; Larsen, Jary; de Mornay Davies, Paul; Swick, Diane

    2009-01-01

    The current work investigated whether differences in phonological overlap between the past- and present-tense forms of regular and irregular verbs can account for the graded neurophysiological effects of verb regularity observed in past-tense priming designs. Event-related potentials were recorded from sixteen healthy participants who performed a lexical-decision task in which past-tense primes immediately preceded present-tense targets. To minimize intra-modal phonological priming effects, cross-modal presentation between auditory primes and visual targets was employed, and results were compared to a companion intra-modal auditory study (Justus, Larsen, de Mornay Davies, & Swick, 2008). For both regular and irregular verbs, faster response times and reduced N400 components were observed for present-tense forms when primed by the corresponding past-tense forms. Although behavioral facilitation was observed with a pseudopast phonological control condition, neither this condition nor an orthographic-phonological control produced significant N400 priming effects. Instead, these two types of priming were associated with a post-lexical anterior negativity (PLAN). Results are discussed with regard to dual- and single-system theories of inflectional morphology, as well as intra- and cross-modal prelexical priming. PMID:20160930

  13. Evaluative priming in a semantic flanker task: ERP evidence for a mutual facilitation explanation.

    PubMed

    Schmitz, Melanie; Wentura, Dirk; Brinkmann, Thorsten A

    2014-03-01

    In semantic flanker tasks, target categorization response times are affected by the semantic compatibility of the flanker and target. With positive and negative category exemplars, we investigated the influence of evaluative congruency (whether flanker and target share evaluative valence) on the flanker effect, using behavioral and electrophysiological measures. We hypothesized a moderation of the flanker effect by evaluative congruency on the basis of the assumption that evaluatively congruent concepts mutually facilitate each other's activation (see Schmitz & Wentura in Journal of Experimental Psychology: Learning, Memory, and Cognition 38:984-1000, 2012). Applying an onset delay of 50 ms for the flanker, we aimed to decrease the facilitative effect of an evaluatively congruent flanker on target encoding and, at the same time, increase the facilitative effect of an evaluatively congruent target on flanker encoding. As a consequence of increased flanker activation in the case of evaluative congruency, we expected a semantically incompatible flanker to interfere with the target categorization to a larger extent (as compared with an evaluatively incongruent pairing). Confirming our hypotheses, the flanker effect significantly depended on evaluative congruency, in both mean response times and N2 mean amplitudes. Thus, the present study provided behavioral and electrophysiological evidence for the mutual facilitation of evaluatively congruent concepts. Implications for the representation of evaluative connotations of semantic concepts are discussed.

  14. Simple real-time computerized tasks for detection of malingering among murderers with schizophrenia.

    PubMed

    Kertzman, Semion; Grinspan, Haim; Birger, Moshe; Shliapnikov, Nina; Alish, Yakov; Ben Nahum, Zeev; Mester, Roberto; Kotler, Moshe

    2006-01-01

    It is our contention that computer-based two-alternative forced choice techniques can be useful tools for the detection of patients with schizophrenia who feign acute psychotic symptoms and cognitive impairment as opposed to patients with schizophrenia with a true active psychosis. In our experiment, Visual Simple and Choice Reaction Time tasks were used. Reaction time in milliseconds was recorded and accuracy rate was obtained for all subjects' responses. Both types of task were administered to 27 patients with schizophrenia suspected of having committed murder. Patients with schizophrenia who were clinically assessed as malingerers achieved significantly fewer correct results, were significantly slower and less consistent in their reaction time. Congruence of performance between the Simple and Choice tasks was an additional parameter for the accurate diagnosis of malingering. The four parameters of both tests (accuracy of response, reaction time, standard deviation of reaction time and task congruency) are simple and constitute a user-friendly means for the detection of malingering in forensic practice. Another advantage of this procedure is that the software automatically measures and evaluates all the parameters.
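
    As a rough illustration of the four parameters named in this abstract, the snippet below computes accuracy, mean reaction time, reaction-time variability, and a Simple/Choice congruence index from invented trial logs. Operationalizing "task congruency" as the Choice-minus-Simple mean reaction-time difference is only one possible reading of the abstract, and the data and function names are hypothetical; this is not the authors' scoring procedure.

    ```python
    import statistics

    def summarize(trials):
        """trials: list of (reaction_time_ms, correct) tuples for one task."""
        rts = [rt for rt, _ in trials]
        accuracy = sum(correct for _, correct in trials) / len(trials)
        return {"accuracy": accuracy,
                "mean_rt": statistics.mean(rts),
                "sd_rt": statistics.stdev(rts)}

    # Invented example data: (reaction time in ms, correct response coded as 1/0).
    simple_task = [(312, 1), (298, 1), (505, 0), (330, 1)]
    choice_task = [(512, 1), (498, 1), (730, 0), (540, 1)]

    simple, choice = summarize(simple_task), summarize(choice_task)
    congruence_ms = choice["mean_rt"] - simple["mean_rt"]  # choice RTs are normally slower
    print(simple, choice, {"task_congruence_ms": congruence_ms})
    ```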

  15. Does sunshine prime loyal … or summer? Effects of associative relatedness on the evaluative priming effect in the valent/neutral categorisation task.

    PubMed

    Werner, Benedikt; von Ramin, Elisabeth; Spruyt, Adriaan; Rothermund, Klaus

    2018-02-01

    After 30 years of research, the mechanisms underlying the evaluative priming effect are still a topic of debate. In this study, we tested whether the evaluative priming effect can result from (uncontrolled) associative relatedness rather than evaluative congruency. Stimuli that share the same evaluative connotation are more likely to show some degree of non-evaluative associative relatedness than stimuli that have a different evaluative connotation. Therefore, unless associative relatedness is explicitly controlled for, evaluative priming effects reported in earlier research may be driven by associative relatedness instead of evaluative relatedness. To address this possibility, we performed an evaluative priming study in which evaluative congruency and associative relatedness were manipulated independently from each other. The valent/neutral categorisation task was used to ensure evaluative stimulus processing in the absence of response priming effects. Results showed an effect of associative relatedness but no (overall) effect of evaluative congruency. Our findings highlight the importance of controlling for associative relatedness when testing for evaluative priming effects.

  16. A social Bouba/Kiki effect: A bias for people whose names match their faces.

    PubMed

    Barton, David N; Halberstadt, Jamin

    2018-06-01

    The "bouba/kiki effect" is the robust tendency to associate rounded objects (vs. angular objects) with names that require rounding of the mouth to pronounce, and may reflect synesthesia-like mapping across perceptual modalities. Here we show for the first time a "social" bouba/kiki effect, such that experimental participants associate round names ("Bob," "Lou") with round-faced (vs. angular-faced) individuals. Moreover, consistent with a bias for expectancy-consistent information, we find that participants like targets with "matching" names, both when name-face fit is measured and when it is experimentally manipulated. Finally, we show that such bias could have important practical consequences: An analysis of voting data reveals that Senatorial candidates earn 10% more votes when their names fit their faces very well, versus very poorly. These and similar cross-modal congruencies suggest that social judgment involves not only amodal application of stored information (e.g., stereotypes) to new stimuli, but also integration of perceptual and bodily input.

  17. The cognitive loci of the display and task-relevant set size effects on distractor interference: Evidence from a dual-task paradigm.

    PubMed

    Park, Bo Youn; Kim, Sujin; Cho, Yang Seok

    2018-02-01

    The congruency effect of a task-irrelevant distractor has been found to be modulated by task-relevant set size and display set size. The present study used a psychological refractory period (PRP) paradigm to examine the cognitive loci of the display set size effect (dilution effect) and the task-relevant set size effect (perceptual load effect) on distractor interference. A tone discrimination task (Task 1), in which a response was made to the pitch of the target tone, was followed by a letter discrimination task (Task 2) in which different types of visual target display were used. In Experiment 1, in which display set size was manipulated to examine the nature of the display set size effect on distractor interference in Task 2, the modulation of the congruency effect by display set size was observed at both short and long stimulus-onset asynchronies (SOAs), indicating that the display set size effect occurred after the target was selected for processing in the focused attention stage. In Experiment 2, in which task-relevant set size was manipulated to examine the nature of the task-relevant set size effect on distractor interference in Task 2, the effects of task-relevant set size increased with SOA, suggesting that the target selection efficiency in the preattentive stage was impaired with increasing task-relevant set size. These results suggest that display set size and task-relevant set size modulate distractor processing in different ways.

  18. A conceptual lemon: theta burst stimulation to the left anterior temporal lobe untangles object representation and its canonical color.

    PubMed

    Chiou, Rocco; Sowman, Paul F; Etchell, Andrew C; Rich, Anina N

    2014-05-01

    Object recognition benefits greatly from our knowledge of typical color (e.g., a lemon is usually yellow). Most research on object color knowledge focuses on whether both knowledge and perception of object color recruit the well-established neural substrates of color vision (the V4 complex). Compared with the intensive investigation of the V4 complex, we know little about where and how neural mechanisms beyond V4 contribute to color knowledge. The anterior temporal lobe (ATL) is thought to act as a "hub" that supports semantic memory by integrating different modality-specific contents into a meaningful entity at a supramodal conceptual level, making it a good candidate zone for mediating the mappings between object attributes. Here, we explore whether the ATL is critical for integrating typical color with other object attributes (object shape and name), akin to its role in combining nonperceptual semantic representations. In separate experimental sessions, we applied TMS to disrupt neural processing in the left ATL and a control site (the occipital pole). Participants performed an object naming task that probes color knowledge and elicits a reliable color congruency effect as well as a control quantity naming task that also elicits a cognitive congruency effect but involves no conceptual integration. Critically, ATL stimulation eliminated the otherwise robust color congruency effect but had no impact on the numerical congruency effect, indicating a selective disruption of object color knowledge. Neither color nor numerical congruency effects were affected by stimulation at the control occipital site, ruling out nonspecific effects of cortical stimulation. Our findings suggest that the ATL is involved in the representation of object concepts that include their canonical colors.

  19. Scan Patterns Predict Sentence Production in the Cross-Modal Processing of Visual Scenes

    ERIC Educational Resources Information Center

    Coco, Moreno I.; Keller, Frank

    2012-01-01

    Most everyday tasks involve multiple modalities, which raises the question of how the processing of these modalities is coordinated by the cognitive system. In this paper, we focus on the coordination of visual attention and linguistic processing during speaking. Previous research has shown that objects in a visual scene are fixated before they…

  20. The Time-Course of Auditory and Visual Distraction Effects in a New Crossmodal Paradigm

    ERIC Educational Resources Information Center

    Bendixen, Alexandra; Grimm, Sabine; Deouell, Leon Y.; Wetzel, Nicole; Mädebach, Andreas; Schröger, Erich

    2010-01-01

    Vision often dominates audition when attentive processes are involved (e.g., the ventriloquist effect), yet little is known about the relative potential of the two modalities to initiate a "break through of the unattended". The present study was designed to systematically compare the capacity of task-irrelevant auditory and visual events to…

  1. Functionally Specific Oscillatory Activity Correlates between Visual and Auditory Cortex in the Blind

    ERIC Educational Resources Information Center

    Schepers, Inga M.; Hipp, Jörg F.; Schneider, Till R.; Röder, Brigitte; Engel, Andreas K.

    2012-01-01

    Many studies have shown that the visual cortex of blind humans is activated in non-visual tasks. However, the electrophysiological signals underlying this cross-modal plasticity are largely unknown. Here, we characterize the neuronal population activity in the visual and auditory cortex of congenitally blind humans and sighted controls in a…

  2. Does conflict help or hurt cognitive control? Initial evidence for an inverted U-shape relationship between perceived task difficulty and conflict adaptation.

    PubMed

    van Steenbergen, Henk; Band, Guido P H; Hommel, Bernhard

    2015-01-01

    Sequential modulation of congruency effects in conflict tasks indicates that cognitive control quickly adapts to changing task demands. We investigated in four experiments how this behavioral congruency-sequence effect relates to different levels of perceived task difficulty in a flanker and a Stroop task. In addition, online measures of pupil diameter were used as a physiological index of effort mobilization. Consistent with motivational accounts predicting that increased levels of perceived task difficulty will increase effort mobilization only up to a certain limit, reliable dynamic conflict-driven adjustment in cognitive control was only observed when task difficulty was relatively low. Instead, tasks tentatively associated with high levels of difficulty showed no or reversed conflict adaptation. Although the effects could not be linked consistently to effects in self-reported task difficulty in all experiments, regression analyses showed associations between perceived task difficulty and conflict adaptation in some of the experiments, which provides some initial evidence for an inverted U-shape relationship between perceived difficulty and adaptations in cognitive control. Furthermore, high levels of task difficulty were associated with a conflict-driven reduction in pupil dilation, suggesting that pupil dilation can be used as a physiological marker of mental overload. Our findings underscore the importance of developing models that are grounded in motivational accounts of cognitive control.

  3. Does conflict help or hurt cognitive control? Initial evidence for an inverted U-shape relationship between perceived task difficulty and conflict adaptation

    PubMed Central

    van Steenbergen, Henk; Band, Guido P. H.; Hommel, Bernhard

    2015-01-01

    Sequential modulation of congruency effects in conflict tasks indicates that cognitive control quickly adapts to changing task demands. We investigated in four experiments how this behavioral congruency-sequence effect relates to different levels of perceived task difficulty in a flanker and a Stroop task. In addition, online measures of pupil diameter were used as a physiological index of effort mobilization. Consistent with motivational accounts predicting that increased levels of perceived task difficulty will increase effort mobilization only up to a certain limit, reliable dynamic conflict-driven adjustment in cognitive control was only observed when task difficulty was relatively low. Instead, tasks tentatively associated with high levels of difficulty showed no or reversed conflict adaptation. Although the effects could not be linked consistently to effects in self-reported task difficulty in all experiments, regression analyses showed associations between perceived task difficulty and conflict adaptation in some of the experiments, which provides some initial evidence for an inverted U-shape relationship between perceived difficulty and adaptations in cognitive control. Furthermore, high levels of task difficulty were associated with a conflict-driven reduction in pupil dilation, suggesting that pupil dilation can be used as a physiological marker of mental overload. Our findings underscore the importance of developing models that are grounded in motivational accounts of cognitive control. PMID:26217287

  4. Cross-modal representation of spoken and written word meaning in left pars triangularis.

    PubMed

    Liuzzi, Antonietta Gabriella; Bruffaerts, Rose; Peeters, Ronald; Adamczuk, Katarzyna; Keuleers, Emmanuel; De Deyne, Simon; Storms, Gerrit; Dupont, Patrick; Vandenberghe, Rik

    2017-04-15

    The correspondence in meaning extracted from written versus spoken input remains to be fully understood neurobiologically. Here, in a total of 38 subjects, the functional anatomy of cross-modal semantic similarity for concrete words was determined based on a dual criterion: First, a voxelwise univariate analysis had to show significant activation during a semantic task (property verification) performed with written and spoken concrete words compared to the perceptually matched control condition. Second, in an independent dataset, in these clusters, the similarity in fMRI response pattern to two distinct entities, one presented as a written and the other as a spoken word, had to correlate with the similarity in meaning between these entities. The left ventral occipitotemporal transition zone and ventromedial temporal cortex, retrosplenial cortex, pars orbitalis bilaterally, and the left pars triangularis were all activated in the univariate contrast. Only the left pars triangularis showed a cross-modal semantic similarity effect. There was no effect of phonological nor orthographic similarity in this region. The cross-modal semantic similarity effect was confirmed by a secondary analysis in the cytoarchitectonically defined BA45. A semantic similarity effect was also present in the ventral occipital regions but only within the visual modality, and in the anterior superior temporal cortex only within the auditory modality. This study provides direct evidence for the coding of word meaning in BA45 and positions its contribution to semantic processing at the confluence of input-modality specific pathways that code for meaning within the respective input modalities. Copyright © 2017 Elsevier Inc. All rights reserved.
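
    The logic of the cross-modal similarity criterion described above can be sketched as a simple representational-similarity analysis: for pairs of concepts, one presented as a written and the other as a spoken word, the similarity of their fMRI response patterns is compared with an independent measure of semantic similarity. The snippet below uses random placeholder data and a rank correlation purely to make the analysis structure concrete; it is an assumption about the general approach, not the authors' pipeline.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_concepts, n_voxels = 10, 50
    written = rng.normal(size=(n_concepts, n_voxels))  # response pattern per concept, written words
    spoken = rng.normal(size=(n_concepts, n_voxels))   # response pattern per concept, spoken words
    semantic = rng.normal(size=(n_concepts, n_concepts))  # placeholder semantic similarity matrix
    semantic = (semantic + semantic.T) / 2

    # Cross-modal neural similarity: correlation of written pattern i with spoken pattern j.
    neural = np.corrcoef(written, spoken)[:n_concepts, n_concepts:]

    # Relate the two similarity structures across all cross-modal pairs of distinct concepts.
    mask = ~np.eye(n_concepts, dtype=bool)
    rho, p = spearmanr(neural[mask], semantic[mask])
    print(f"cross-modal semantic similarity effect: rho = {rho:.2f}, p = {p:.3f}")
    ```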

  5. Aging and the interaction of sensory cortical function and structure.

    PubMed

    Peiffer, Ann M; Hugenschmidt, Christina E; Maldjian, Joseph A; Casanova, Ramon; Srikanth, Ryali; Hayasaka, Satoru; Burdette, Jonathan H; Kraft, Robert A; Laurienti, Paul J

    2009-01-01

    Even the healthiest older adults experience changes in cognitive and sensory function. Studies show that older adults have reduced neural responses to sensory information. However, it is well known that sensory systems do not act in isolation but function cooperatively to either enhance or suppress neural responses to individual environmental stimuli. Very little research has been dedicated to understanding how aging affects the interactions between sensory systems, especially cross-modal deactivations or the ability of one sensory system (e.g., audition) to suppress the neural responses in another sensory system cortex (e.g., vision). Such cross-modal interactions have been implicated in attentional shifts between sensory modalities and could account for increased distractibility in older adults. To assess age-related changes in cross-modal deactivations, functional MRI studies were performed in 61 adults between 18 and 80 years old during simple auditory and visual discrimination tasks. Results within visual cortex confirmed previous findings of decreased responses to visual stimuli for older adults. Age-related changes in the visual cortical response to auditory stimuli were, however, much more complex and suggested an alteration with age in the functional interactions between the senses. Ventral visual cortical regions exhibited cross-modal deactivations in younger but not older adults, whereas more dorsal aspects of visual cortex were suppressed in older but not younger adults. These differences in deactivation also remained after adjusting for age-related reductions in brain volume of sensory cortex. Thus, functional differences in cortical activity between older and younger adults cannot solely be accounted for by differences in gray matter volume. (c) 2007 Wiley-Liss, Inc.

  6. Perceptual learning in temporal discrimination: asymmetric cross-modal transfer from audition to vision.

    PubMed

    Bratzke, Daniel; Seifried, Tanja; Ulrich, Rolf

    2012-08-01

    This study assessed possible cross-modal transfer effects of training in a temporal discrimination task from vision to audition as well as from audition to vision. We employed a pretest-training-post-test design including a control group that performed only the pretest and the post-test. Trained participants showed better discrimination performance with their trained interval than the control group. This training effect transferred to the other modality only for those participants who had been trained with auditory stimuli. The present study thus demonstrates for the first time that training on temporal discrimination within the auditory modality can transfer to the visual modality but not vice versa. This finding represents a novel illustration of auditory dominance in temporal processing and is consistent with the notion that time is primarily encoded in the auditory system.

  7. Right anterior cerebellum BOLD responses reflect age related changes in Simon task sequential effects.

    PubMed

    Aisenberg, D; Sapir, A; Close, A; Henik, A; d'Avossa, G

    2018-01-31

    Participants are slower to report a feature, such as color, when the target appears on the side opposite the instructed response than when the target appears on the same side. This finding suggests that target location, even when task-irrelevant, interferes with response selection. This effect is magnified in older adults. Lengthening the inter-trial interval, however, suffices to normalize the congruency effect in older adults, by re-establishing young-like sequential effects (Aisenberg et al., 2014). We examined the neurological correlates of age-related changes by comparing BOLD signals in young and old participants performing a visual version of the Simon task. Participants reported the color of a peripheral target by a left- or right-hand keypress. Generally, BOLD responses were greater following incongruent than congruent targets. Also, they were delayed and of smaller amplitude in old than young participants. BOLD responses in visual and motor regions were also affected by the congruency of the previous target, suggesting that sequential effects may reflect remapping of stimulus location onto the hand used to make a response. Crucially, young participants showed larger BOLD responses in right anterior cerebellum to incongruent targets when the previous target was congruent, but smaller BOLD responses to incongruent targets when the previous target was incongruent. Old participants, however, showed larger BOLD responses to congruent than incongruent targets, irrespective of the previous target congruency. We conclude that aging may interfere with the trial-by-trial updating of the mapping between the task-irrelevant target location and the response, which takes place during the inter-trial interval in the cerebellum and underlies sequential effects in a Simon task. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Contextual Congruency Effect in Natural Scene Categorization: Different Strategies in Humans and Monkeys (Macaca mulatta)

    PubMed Central

    Collet, Anne-Claire; Fize, Denis; VanRullen, Rufin

    2015-01-01

    Rapid visual categorization is a crucial ability for survival of many animal species, including monkeys and humans. In real conditions, objects (either animate or inanimate) are never isolated but embedded in a complex background made of multiple elements. It has been shown in humans and monkeys that the contextual background can either enhance or impair object categorization, depending on context/object congruency (for example, an animal in a natural vs. man-made environment). Moreover, a scene is not only a collection of objects; it also has global physical features (i.e phase and amplitude of Fourier spatial frequencies) which help define its gist. In our experiment, we aimed to explore and compare the contribution of the amplitude spectrum of scenes in the context-object congruency effect in monkeys and humans. We designed a rapid visual categorization task, Animal versus Non-Animal, using as contexts both real scenes photographs and noisy backgrounds built from the amplitude spectrum of real scenes but with randomized phase spectrum. We showed that even if the contextual congruency effect was comparable in both species when the context was a real scene, it differed when the foreground object was surrounded by a noisy background: in monkeys we found a similar congruency effect in both conditions, but in humans the congruency effect was absent (or even reversed) when the context was a noisy background. PMID:26207915

  9. Phonological Priming with Nonwords in Children with and without Specific Language Impairment

    ERIC Educational Resources Information Center

    Brooks, Patricia J.; Seiger-Gardner, Liat; Obeid, Rita; MacWhinney, Brian

    2015-01-01

    Purpose: The cross-modal picture-word interference task is used to examine contextual effects on spoken-word production. Previous work has documented lexical-phonological interference in children with specific language impairment (SLI) when a related distractor (e.g., bell) occurs prior to a picture to be named (e.g., a bed). In the current study,…

  10. The effect of unimodal affective priming on dichotic emotion recognition.

    PubMed

    Voyer, Daniel; Myles, Daniel

    2017-11-15

    The present report concerns two experiments extending to unimodal priming the cross-modal priming effects observed with auditory emotions by Harding and Voyer [(2016). Laterality effects in cross-modal affective priming. Laterality: Asymmetries of Body, Brain and Cognition, 21, 585-605]. Experiment 1 used binaural targets to establish the presence of the priming effect and Experiment 2 used dichotically presented targets to examine auditory asymmetries. In Experiment 1, 82 university students completed a task in which binaural targets consisting of one of 4 English words inflected in one of 4 emotional tones were preceded by binaural primes consisting of one of 4 Mandarin words pronounced in the same (congruent) or different (incongruent) emotional tones. Trials where the prime emotion was congruent with the target emotion showed faster responses and higher accuracy in identifying the target emotion. In Experiment 2, 60 undergraduate students participated and the target was presented dichotically instead of binaurally. Primes congruent with the left ear produced a large left ear advantage, whereas right congruent primes produced a right ear advantage. These results indicate that unimodal priming produces stronger effects than those observed under cross-modal priming. The findings suggest that priming should likely be considered a strong top-down influence on laterality effects.

  11. Size-Sensitive Perceptual Representations Underlie Visual and Haptic Object Recognition

    PubMed Central

    Craddock, Matt; Lawson, Rebecca

    2009-01-01

    A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes and were the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching and there was no interaction between the cost of size changes and direction of transfer. Together the unimodal and crossmodal matching results suggest that the same, size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations. PMID:19956685

  12. It Is Not What You Expect: Dissociating Conflict Adaptation from Expectancies in a Stroop Task

    ERIC Educational Resources Information Center

    Jimenez, Luis; Mendez, Amavia

    2013-01-01

    In conflict tasks, congruency effects are modulated by the sequence of preceding trials. This modulation effect has been interpreted as an influence of a proactive mechanism of adaptation to conflict (Botvinick, Nystrom, Fissell, Carter, & Cohen, 1999), but the possible contribution of explicit expectancies to this adaptation effect remains…

  13. Conflict Adaptation and Congruency Sequence Effects to Social-Emotional Stimuli in Individuals with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Worsham, Whitney; Gray, Whitney E.; Larson, Michael J.; South, Mikle

    2015-01-01

    Background: The modification of performance following conflict can be measured using conflict adaptation tasks thought to measure the change in the allocation of cognitive resources in order to reduce conflict interference and improve performance. While previous studies have suggested atypical processing during nonsocial cognitive control tasks,…

  14. Adaptive Competency Acquisition: Why LPN-to-ADN Career Mobility Education Programs Work.

    ERIC Educational Resources Information Center

    Coyle-Rogers, Patricia G.

    Adaptive competencies are the skills required to effectively complete a particular task and are the congruencies (balance) between personal skills and task demands. The differences between the adaptive competency acquisition of students in licensed practical nurse (LPN) programs and associate degree nurse (ADN) programs were examined in a…

  15. Evaluating the Effect of Cognitive Dysfunction on Mental Imagery in Patients with Stroke Using Temporal Congruence and the Imagined ‘Timed Up and Go’ Test (iTUG)

    PubMed Central

    Bonnyaud, Céline; Fery, Yves-André; Bussel, Bernard; Roche, Nicolas

    2017-01-01

    Background: Motor imagery (MI) capacity may be altered following stroke. MI is evaluated by measuring temporal congruence between the timed performance of an imagined and an executed task. Temporal congruence between imagined and physical gait-related activities has not been evaluated following stroke. Moreover, the effect of cognitive dysfunction on temporal congruence is not known. Objective: To assess temporal congruence between the Timed Up and Go test (TUG) and the imagined TUG (iTUG) tests in patients with stroke and to investigate the role played by cognitive dysfunctions in changes in temporal congruence. Methods: TUG and iTUG performance were recorded and compared in twenty patients with chronic stroke and 20 controls. Cognitive function was measured using the Montreal Cognitive Assessment (MOCA), the Frontal Assessment Battery at Bedside (FAB) and the Bells Test. Results: The temporal congruence of the patients with stroke was significantly altered compared to the controls, indicating a loss of MI capacity (respectively 45.11 ±35.11 vs 24.36 ±17.91, p = 0.02). Furthermore, iTUG test results were positively correlated with pathological scores on the Bells Test (r = 0.085, p = 0.013), likely suggesting that impairment of attention was a contributing factor. Conclusion: These results highlight the importance of evaluating potential attention disorder in patients with stroke to optimise the use of MI for rehabilitation and recovery. However, further study is needed to determine how MI should be used in the case of cognitive dysfunction. PMID:28125616

  16. Evaluating the Effect of Cognitive Dysfunction on Mental Imagery in Patients with Stroke Using Temporal Congruence and the Imagined 'Timed Up and Go' Test (iTUG).

    PubMed

    Geiger, Maxime; Bonnyaud, Céline; Fery, Yves-André; Bussel, Bernard; Roche, Nicolas

    2017-01-01

    Motor imagery (MI) capacity may be altered following stroke. MI is evaluated by measuring temporal congruence between the timed performance of an imagined and an executed task. Temporal congruence between imagined and physical gait-related activities has not been evaluated following stroke. Moreover, the effect of cognitive dysfunction on temporal congruence is not known. To assess temporal congruence between the Timed Up and Go test (TUG) and the imagined TUG (iTUG) tests in patients with stroke and to investigate the role played by cognitive dysfunctions in changes in temporal congruence. TUG and iTUG performance were recorded and compared in twenty patients with chronic stroke and 20 controls. Cognitive function was measured using the Montreal Cognitive Assessment (MOCA), the Frontal Assessment Battery at Bedside (FAB) and the Bells Test. The temporal congruence of the patients with stroke was significantly altered compared to the controls, indicating a loss of MI capacity (respectively 45.11 ±35.11 vs 24.36 ±17.91, p = 0.02). Furthermore, iTUG test results were positively correlated with pathological scores on the Bells Test (r = 0.085, p = 0.013), likely suggesting that impairment of attention was a contributing factor. These results highlight the importance of evaluating potential attention disorder in patients with stroke to optimise the use of MI for rehabilitation and recovery. However further study is needed to determine how MI should be used in the case of cognitive dysfunction.
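
    Neither abstract above states the exact formula used to quantify temporal congruence. One commonly used index, sketched below as an assumption rather than the authors' method, expresses the absolute discrepancy between the imagined (iTUG) and executed (TUG) durations as a percentage of the executed duration, with larger values indicating poorer motor-imagery congruence.

    ```python
    def temporal_congruence_index(tug_s, itug_s):
        """Percent discrepancy between executed (TUG) and imagined (iTUG) durations, in seconds."""
        return abs(tug_s - itug_s) / tug_s * 100

    # Hypothetical example: a 14 s executed TUG imagined as taking 10.5 s -> 25% discrepancy.
    print(temporal_congruence_index(tug_s=14.0, itug_s=10.5))
    ```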

  17. Crossmodal processing of emotions in alcohol-dependence and Korsakoff syndrome.

    PubMed

    Brion, Mélanie; D'Hondt, Fabien; Lannoy, Séverine; Pitel, Anne-Lise; Davidoff, Donald A; Maurage, Pierre

    2017-09-01

    Decoding emotional information from faces and voices is crucial for efficient interpersonal communication. Emotional decoding deficits have been found in alcohol-dependence (ALC), particularly in crossmodal situations (with simultaneous stimulations from different modalities), but are still underexplored in Korsakoff syndrome (KS). The aim of this study is to determine whether the continuity hypothesis, postulating a gradual worsening of cognitive and brain impairments from ALC to KS, is valid for emotional crossmodal processing. Sixteen KS, 17 ALC and 19 matched healthy controls (CP) had to detect the emotion (anger or happiness) displayed by auditory, visual or crossmodal auditory-visual stimuli. Crossmodal stimuli were either emotionally congruent (leading to a facilitation effect, i.e. enhanced performance for crossmodal condition compared to unimodal ones) or incongruent (leading to an interference effect, i.e. decreased performance for crossmodal condition due to discordant information across modalities). Reaction times and accuracy were recorded. Crossmodal integration for congruent information was dampened only in ALC, while both ALC and KS demonstrated, compared to CP, decreased performance for decoding emotional facial expressions in the incongruent condition. The crossmodal integration appears impaired in ALC but preserved in KS. Both alcohol-related disorders present an increased interference effect. These results show the interest of more ecological designs, using crossmodal stimuli, to explore emotional decoding in alcohol-related disorders. They also suggest that the continuum hypothesis cannot be generalised to emotional decoding abilities.

  18. Proactive and reactive control depends on emotional valence: a Stroop study with emotional expressions and words.

    PubMed

    Kar, Bhoomika Rastogi; Srinivasan, Narayanan; Nehabala, Yagyima; Nigam, Richa

    2018-03-01

    We examined proactive and reactive control effects in the context of task-relevant happy, sad, and angry facial expressions on a face-word Stroop task. Participants identified the emotion expressed by a face that contained a congruent or incongruent emotional word (happy/sad/angry). Proactive control effects were measured in terms of the reduction in Stroop interference (difference between incongruent and congruent trials) as a function of previous trial emotion and previous trial congruence. Reactive control effects were measured in terms of the reduction in Stroop interference as a function of current trial emotion and previous trial congruence. Previous trial negative emotions exert greater influence on proactive control than the positive emotion. Sad faces in the previous trial resulted in greater reduction in the Stroop interference for happy faces in the current trial. However, current trial angry faces showed stronger adaptation effects compared to happy faces. Thus, both proactive and reactive control mechanisms are dependent on emotional valence of task-relevant stimuli.

  19. Prolonged Interruption of Cognitive Control of Conflict Processing Over Human Faces by Task-Irrelevant Emotion Expression

    PubMed Central

    Kim, Jinyoung; Kang, Min-Suk; Cho, Yang Seok; Lee, Sang-Hun

    2017-01-01

    As documented by Darwin 150 years ago, emotion expressed in human faces readily draws our attention and promotes sympathetic emotional reactions. How do such reactions to the expression of emotion affect our goal-directed actions? Despite the substantial advance made in the neural mechanisms of both cognitive control and emotional processing, it is not yet known well how these two systems interact. Here, we studied how emotion expressed in human faces influences cognitive control of conflict processing, spatial selective attention and inhibitory control in particular, using the Eriksen flanker paradigm. In this task, participants viewed displays of a central target face flanked by peripheral faces and were asked to judge the gender of the target face; task-irrelevant emotion expressions were embedded in the target face, the flanking faces, or both. We also monitored how emotion expression affects gender judgment performance while varying the relative timing between the target and flanker faces. As previously reported, we found robust gender congruency effects, namely slower responses to the target faces whose gender was incongruent with that of the flanker faces, when the flankers preceded the target by 0.1 s. When the flankers further advanced the target by 0.3 s, however, the congruency effect vanished in most of the viewing conditions, except for when emotion was expressed only in the flanking faces or when congruent emotion was expressed in the target and flanking faces. These results suggest that emotional saliency can prolong a substantial degree of conflict by diverting bottom-up attention away from the target, and that inhibitory control on task-irrelevant information from flanking stimuli is deterred by the emotional congruency between target and flanking stimuli. PMID:28676780

  20. The interplay of bottom-up and top-down mechanisms in visual guidance during object naming.

    PubMed

    Coco, Moreno I; Malcolm, George L; Keller, Frank

    2014-01-01

    An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data of a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects by deviating from scene context and hence need a longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention.

  1. Learning a Nonmediated Route for Response Selection in Task Switching

    PubMed Central

    Schneider, Darryl W.; Logan, Gordon D.

    2015-01-01

    Two modes of response selection—a mediated route involving categorization and a nonmediated route involving instance-based memory retrieval—have been proposed to explain response congruency effects in task-switching situations. In the present study, we sought a better understanding of the development and characteristics of the nonmediated route. In two experiments involving training and transfer phases, we investigated practice effects at the level of individual target presentations, transfer effects associated with changing category–response mappings, target-specific effects from comparisons of old and new targets during transfer, and the percentage of early responses associated with task-nonspecific response selection (the target preceded the task cue on every trial). The training results suggested that the nonmediated route is quickly learned in the context of target–cue order and becomes increasingly involved in response selection with practice. The transfer results suggested that the target–response instances underlying the nonmediated route involve abstract response labels coding response congruency that can be rapidly remapped to alternative responses but not rewritten when category–response mappings change after practice. Implications for understanding the nonmediated route and its relationship with the mediated route are discussed. PMID:25663003

  2. Effects of arousal on cognitive control: empirical tests of the conflict-modulated Hebbian-learning hypothesis.

    PubMed

    Brown, Stephen B R E; van Steenbergen, Henk; Kedar, Tomer; Nieuwenhuis, Sander

    2014-01-01

    An increasing number of empirical phenomena that were previously interpreted as a result of cognitive control, turn out to reflect (in part) simple associative-learning effects. A prime example is the proportion congruency effect, the finding that interference effects (such as the Stroop effect) decrease as the proportion of incongruent stimuli increases. While this was previously regarded as strong evidence for a global conflict monitoring-cognitive control loop, recent evidence has shown that the proportion congruency effect is largely item-specific and hence must be due to associative learning. The goal of our research was to test a recent hypothesis about the mechanism underlying such associative-learning effects, the conflict-modulated Hebbian-learning hypothesis, which proposes that the effect of conflict on associative learning is mediated by phasic arousal responses. In Experiment 1, we examined in detail the relationship between the item-specific proportion congruency effect and an autonomic measure of phasic arousal: task-evoked pupillary responses. In Experiment 2, we used a task-irrelevant phasic arousal manipulation and examined the effect on item-specific learning of incongruent stimulus-response associations. The results provide little evidence for the conflict-modulated Hebbian-learning hypothesis, which requires additional empirical support to remain tenable.
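
    To make the hypothesis under test concrete, the sketch below shows one schematic reading of conflict-modulated Hebbian learning: the association between the active stimulus features and the executed response is strengthened Hebbianly, with the size of the update scaled by a phasic conflict/arousal signal. The outer-product update rule, the learning rate, and the conflict values are illustrative assumptions, not the implementation tested in the paper.

    ```python
    import numpy as np

    def conflict_modulated_hebbian_update(w, stimulus, response, conflict, eta=0.1):
        """Strengthen stimulus-response weights in proportion to a conflict-driven arousal signal."""
        return w + eta * conflict * np.outer(response, stimulus)

    w = np.zeros((2, 3))                    # response units x stimulus-feature units
    stimulus = np.array([1.0, 0.0, 1.0])    # stimulus features active on this trial
    response = np.array([0.0, 1.0])         # response actually given

    print(conflict_modulated_hebbian_update(w, stimulus, response, conflict=0.9))  # high conflict: strong binding
    print(conflict_modulated_hebbian_update(w, stimulus, response, conflict=0.1))  # low conflict: weak binding
    ```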

  3. Alertness and cognitive control: Testing the early onset hypothesis.

    PubMed

    Schneider, Darryl W

    2018-05-01

    Previous research has revealed a peculiar interaction between alertness and cognitive control in selective-attention tasks: Congruency effects are larger on alert trials (on which an alerting cue is presented briefly in advance of the imperative stimulus) than on no-alert trials, despite shorter response times (RTs) on alert trials. One explanation for this finding is the early onset hypothesis, which is based on the assumptions that increased alertness shortens stimulus-encoding time and that cognitive control involves gradually focusing attention during a trial. The author tested the hypothesis in 3 experiments by manipulating alertness and stimulus quality (which were intended to shorten and lengthen stimulus-encoding time, respectively) in an arrow-based flanker task involving congruent and incongruent stimuli. Replicating past findings, the alerting manipulation led to shorter RTs but larger congruency effects on alert trials than on no-alert trials. The stimulus-quality manipulation led to longer RTs and larger congruency effects for degraded stimuli than for intact stimuli. These results provide mixed support for the early onset hypothesis, but the author discusses how data and theory might be reconciled if stimulus quality affects stimulus-encoding time and the rate of evidence accumulation in the decision process. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  4. Grammatical-gender effects in noun-noun compound production: Evidence from German.

    PubMed

    Lorenz, Antje; Mädebach, Andreas; Jescheniak, Jörg D

    2018-05-01

    We examined how noun-noun compounds and their syntactic properties are lexically stored and processed in speech production. Using gender-marked determiner primes (der[masc], die[fem], das[neut] [the]) in a picture naming task, we tested for specific effects from determiners congruent with either the modifier or the head of the compound target (e.g., Tee[masc]kanne[fem] [teapot]) to examine whether the constituents are processed independently at the syntactic level. Experiment 1 assessed effects of auditory gender-marked determiner primes in bare noun picture naming, and Experiment 2 assessed effects of visual gender-marked determiner primes in determiner-noun picture naming. Three prime conditions were implemented: (a) head-congruent determiner (e.g., die[fem]), (b) modifier-congruent determiner (e.g., der[masc]), and (c) incongruent determiner (e.g., das[neut]). We observed a facilitation effect of head congruency but no effect of modifier congruency. In Experiment 3, participants produced novel noun-noun compounds in response to two pictures, demanding independent processing of head and modifier at the syntactic level. Now, head and modifier congruency effects were obtained, demonstrating the general sensitivity of our task. Our data support the notion of a single-lemma representation of lexically stored compound nouns in the German production lexicon.

  5. The influence of approach-avoidance motivational orientation on conflict adaptation.

    PubMed

    Hengstler, Maikel; Holland, Rob W; van Steenbergen, Henk; van Knippenberg, Ad

    2014-06-01

    To deal effectively with a continuously changing environment, our cognitive system adaptively regulates resource allocation. Earlier findings showed that an avoidance orientation (induced by arm extension), relative to an approach orientation (induced by arm flexion), enhanced sustained cognitive control. In avoidance conditions, performance on a cognitive control task was enhanced, as indicated by a reduced congruency effect, relative to approach conditions. Extending these findings, in the present behavioral studies we investigated dynamic adaptations in cognitive control-that is, conflict adaptation. We proposed that an avoidance state recruits more resources in response to conflicting signals, and thereby increases conflict adaptation. Conversely, in an approach state, conflict processing diminishes, which consequently weakens conflict adaptation. As predicted, approach versus avoidance arm movements affected both behavioral congruency effects and conflict adaptation: As compared to approach, avoidance movements elicited reduced congruency effects and increased conflict adaptation. These results are discussed in line with a possible underlying neuropsychological model.

  6. Auditory peripersonal space in humans.

    PubMed

    Farnè, Alessandro; Làdavas, Elisabetta

    2002-10-01

    In the present study we report neuropsychological evidence of the existence of an auditory peripersonal space representation around the head in humans and its characteristics. In a group of right brain-damaged patients with tactile extinction, we found that a sound delivered near the ipsilesional side of the head (20 cm) strongly extinguished a tactile stimulus delivered to the contralesional side of the head (cross-modal auditory-tactile extinction). By contrast, when an auditory stimulus was presented far from the head (70 cm), cross-modal extinction was dramatically reduced. This spatially specific cross-modal extinction was most consistently found (i.e., both in the front and back spaces) when a complex sound was presented, like a white noise burst. Pure tones produced spatially specific cross-modal extinction when presented in the back space, but not in the front space. In addition, the most severe cross-modal extinction emerged when sounds came from behind the head, thus showing that the back space is more sensitive than the front space to the sensory interaction of auditory-tactile inputs. Finally, when cross-modal effects were investigated by reversing the spatial arrangement of cross-modal stimuli (i.e., touch on the right and sound on the left), we found that an ipsilesional tactile stimulus, although inducing a small amount of cross-modal tactile-auditory extinction, did not produce any spatial-specific effect. Therefore, the selective aspects of cross-modal interaction found near the head cannot be explained by a competition between a damaged left spatial representation and an intact right spatial representation. Thus, consistent with neurophysiological evidence from monkeys, our findings strongly support the existence, in humans, of an integrated cross-modal system coding auditory and tactile stimuli near the body, that is, in the peripersonal space.

  7. Comprehending how visual context influences incremental sentence processing: insights from ERPs and picture-sentence verification

    PubMed Central

    Knoeferle, Pia; Urbach, Thomas P.; Kutas, Marta

    2010-01-01

    To re-establish picture-sentence verification, possibly discredited for its over-reliance on post-sentence response time (RT) measures, as a task for situated comprehension, we collected event-related brain potentials (ERPs) as participants read a subject-verb-object sentence, and RTs indicating whether or not the verb matched a previously depicted action. For mismatches (vs. matches), speeded RTs were longer, verb N400s over centro-parietal scalp larger, and ERPs to the object noun more negative. RTs (congruence effect) correlated inversely with the centro-parietal verb N400s, and positively with the object ERP congruence effects. Verb N400s, object ERPs, and verbal working memory scores predicted more variance in RT effects (50%) than N400s alone. Thus, (1) verification processing is not all post-sentence; (2) simple priming cannot account for these results; and (3) verification tasks can inform studies of situated comprehension. PMID:20701712

  8. Effects of Working Memory Span on Processing of Lexical Associations and Congruence in Spoken Discourse

    PubMed Central

    Boudewyn, Megan A.; Long, Debra L.; Swaab, Tamara Y.

    2013-01-01

    The goal of this study was to determine whether variability in working memory (WM) capacity and cognitive control affects the processing of global discourse congruence and local associations among words when participants listened to short discourse passages. The final, critical word of each passage was either associated or unassociated with a preceding prime word (e.g., “He was not prepared for the fame and fortune/praise”). These critical words were also either congruent or incongruent with respect to the preceding discourse context [e.g., a context in which a prestigious prize was won (congruent) or in which the protagonist had been arrested (incongruent)]. We used multiple regression to assess the unique contribution of suppression ability (our measure of cognitive control) and WM capacity on the amplitude of individual N400 effects of congruence and association. Our measure of suppression ability did not predict the size of the N400 effects of association or congruence. However, as expected, the results showed that high WM capacity individuals were less sensitive to the presence of lexical associations (showed smaller N400 association effects). Furthermore, differences in WM capacity were related to differences in the topographic distribution of the N400 effects of discourse congruence. The topographic differences in the global congruence effects indicate differences in the underlying neural generators of the N400 effects, as a function of WM. This suggests additional, or at a minimum, distinct, processing on the part of higher capacity individuals when tasked with integrating incoming words into the developing discourse representation. PMID:23407753

  9. Effects of working memory span on processing of lexical associations and congruence in spoken discourse.

    PubMed

    Boudewyn, Megan A; Long, Debra L; Swaab, Tamara Y

    2013-01-01

    The goal of this study was to determine whether variability in working memory (WM) capacity and cognitive control affects the processing of global discourse congruence and local associations among words when participants listened to short discourse passages. The final, critical word of each passage was either associated or unassociated with a preceding prime word (e.g., "He was not prepared for the fame and fortune/praise"). These critical words were also either congruent or incongruent with respect to the preceding discourse context [e.g., a context in which a prestigious prize was won (congruent) or in which the protagonist had been arrested (incongruent)]. We used multiple regression to assess the unique contribution of suppression ability (our measure of cognitive control) and WM capacity on the amplitude of individual N400 effects of congruence and association. Our measure of suppression ability did not predict the size of the N400 effects of association or congruence. However, as expected, the results showed that high WM capacity individuals were less sensitive to the presence of lexical associations (showed smaller N400 association effects). Furthermore, differences in WM capacity were related to differences in the topographic distribution of the N400 effects of discourse congruence. The topographic differences in the global congruence effects indicate differences in the underlying neural generators of the N400 effects, as a function of WM. This suggests additional, or at a minimum, distinct, processing on the part of higher capacity individuals when tasked with integrating incoming words into the developing discourse representation.

  10. Adaptive effort investment in cognitive and physical tasks: a neurocomputational model

    PubMed Central

    Verguts, Tom; Vassena, Eliana; Silvetti, Massimo

    2015-01-01

    Despite its importance in everyday life, the computational nature of effort investment remains poorly understood. We propose an effort model obtained from optimality considerations, and a neurocomputational approximation to the optimal model. Both are couched in the framework of reinforcement learning. It is shown that choosing when or when not to exert effort can be adaptively learned, depending on rewards, costs, and task difficulty. In the neurocomputational model, the limbic loop comprising anterior cingulate cortex (ACC) and ventral striatum in the basal ganglia allocates effort to cortical stimulus-action pathways whenever this is valuable. We demonstrate that the model approximates optimality. Next, we consider two hallmark effects from the cognitive control literature, namely proportion congruency and sequential congruency effects. It is shown that the model exerts both proactive and reactive cognitive control. Then, we simulate two physical effort tasks. In line with empirical work, impairing the model's dopaminergic pathway leads to apathetic behavior. Thus, we conceptually unify the exertion of cognitive and physical effort, studied across a variety of literatures (e.g., motivation and cognitive control) and animal species. PMID:25805978
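
    The core computational idea, learning whether exerting effort pays off given reward, effort cost, and task difficulty, can be illustrated with a toy reinforcement-learning agent. This is a schematic sketch under assumed parameter values, not the authors' neurocomputational model of the ACC-ventral striatum loop.

        import numpy as np

        rng = np.random.default_rng(1)
        REWARD, EFFORT_COST, DIFFICULTY = 1.0, 0.3, 0.7             # assumed task parameters
        P_SUCCESS = {"effort": 0.9, "no_effort": 1.0 - DIFFICULTY}  # effort raises accuracy

        q = {"effort": 0.0, "no_effort": 0.0}   # learned action values
        alpha, beta = 0.1, 5.0                  # learning rate, softmax inverse temperature

        for trial in range(2000):
            # Softmax choice between exerting and withholding effort.
            prefs = np.array([q["effort"], q["no_effort"]])
            p_effort = np.exp(beta * prefs[0]) / np.exp(beta * prefs).sum()
            action = "effort" if rng.random() < p_effort else "no_effort"

            # Payoff: task reward if the response succeeds, minus the cost of effort.
            success = rng.random() < P_SUCCESS[action]
            payoff = REWARD * success - (EFFORT_COST if action == "effort" else 0.0)

            # Delta-rule update of the chosen action's value.
            q[action] += alpha * (payoff - q[action])

        print(q)  # with these settings the agent learns that effort is worth its cost

    With the assumed numbers, expected payoff is 0.6 for effortful responding versus 0.3 for withholding effort, so the agent converges on exerting effort; lowering the reward or raising the effort cost reverses that preference.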

  11. How Task Goals Mediate the Interplay between Perception and Action

    PubMed Central

    Haazebroek, Pascal; van Dantzig, Saskia; Hommel, Bernhard

    2013-01-01

    Theories of embodied cognition suppose that perception, action, and cognition are tightly intertwined and share common representations and processes. Indeed, numerous empirical studies demonstrate interaction between stimulus perception, response planning, and response execution. In this paper, we present an experiment and a connectionist model that show how the Simon effect, a canonical example of perception–action congruency, can be moderated by the (cognitive representation of the) task instruction. To date, no representational account of this influence exists. In the experiment, a two-dimensional Simon task was used, with critical stimuli being colored arrows pointing in one of four directions (backward, forward, left, or right). Participants stood on a Wii balance board, oriented diagonally toward the screen displaying the stimuli. They were either instructed to imagine standing on a snowboard or on a pair of skis and to respond to the stimulus color by leaning toward either the left or right foot. We expected that participants in the snowboard condition would encode these movements as forward or backward, resulting in a Simon effect on this dimension. This was confirmed by the results. The left–right congruency effect was larger in the ski condition, whereas the forward–backward congruency effect appeared only in the snowboard condition. The results can be readily accounted for by HiTEC, a connectionist model that aims at capturing the interaction between perception and action at the level of representations, and the way this interaction is mediated by cognitive control. Together, the empirical work and the connectionist model contribute to a better understanding of the complex interaction between perception, cognition, and action. PMID:23675361

  12. Lady Liberty and Godfather Death as candidates for linguistic relativity? Scrutinizing the gender congruency effect on personified allegories with explicit and implicit measures.

    PubMed

    Bender, Andrea; Beller, Sieghard; Klauer, Karl Christoph

    2016-01-01

    Linguistic relativity--the idea that language affects thought by way of its grammatical categorizations--has been controversially debated for decades. One of the contested cases is the grammatical gender of nouns, which is claimed to affect how their referents are conceptualized (i.e., as rather female or male in congruence with the grammatical gender of the noun), especially when used allegorically. But is this association strong enough to be detected in implicit measures, and, if so, can we disentangle effects of grammatical gender and allegorical association? Three experiments with native speakers of German tackled these questions. They revealed a gender congruency effect on allegorically used nouns, but this effect was stronger with an explicit measure (assignment of biological sex) than with an implicit measure (Extrinsic Affective Simon Task) and disappeared in the implicit measure when grammatical gender and allegorical associations were set into contrast. Taken together, these findings indicate that the observed congruency effect was driven by the association of nouns with personifications rather than by their grammatical gender. In conclusion, we also discuss implications of these findings for linguistic relativity.

  13. Memory systems in the rat: effects of reward probability, context, and congruency between working and reference memory.

    PubMed

    Roberts, William A; Guitar, Nicole A; Marsh, Heidi L; MacDonald, Hayden

    2016-05-01

    The interaction of working and reference memory was studied in rats on an eight-arm radial maze. In two experiments, rats were trained to perform working memory and reference memory tasks. On working memory trials, they were allowed to enter four randomly chosen arms for reward in a study phase and then had to choose the unentered arms for reward in a test phase. On reference memory trials, they had to learn to visit the same four arms on the maze on every trial for reward. Retention was tested on working memory trials in which the interval between the study and test phase was 15 s, 15 min, or 30 min. At each retention interval, tests were performed in which the correct WM arms were either congruent or incongruent with the correct RM arms. Both experiments showed that congruency interacted with retention interval, yielding more forgetting at 30 min on incongruent trials than on congruent trials. The effect of reference memory strength on the congruency effect was examined in Experiment 1, and the effect of associating different contexts with working and reference memory on the congruency effect was studied in Experiment 2.

  14. How visual timing and form information affect speech and non-speech processing.

    PubMed

    Kim, Jeesun; Davis, Chris

    2014-10-01

    Auditory speech processing is facilitated when the talker's face/head movements are seen. This effect is typically explained in terms of visual speech providing form and/or timing information. We determined the effect of both types of information on a speech/non-speech task (non-speech stimuli were spectrally rotated speech). All stimuli were presented paired with the talker's static or moving face. Two types of moving face stimuli were used: full-face versions (both spoken form and timing information available) and modified face versions (only timing information provided by peri-oral motion available). The results showed that the peri-oral timing information facilitated response time for speech and non-speech stimuli compared to a static face. An additional facilitatory effect was found for full-face versions compared to the timing condition; this effect only occurred for speech stimuli. We propose the timing effect was due to cross-modal phase resetting; the form effect to cross-modal priming. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Cross-modal orienting of visual attention.

    PubMed

    Hillyard, Steven A; Störmer, Viola S; Feng, Wenfeng; Martinez, Antigona; McDonald, John J

    2016-03-01

    This article reviews a series of experiments that combined behavioral and electrophysiological recording techniques to explore the hypothesis that salient sounds attract attention automatically and facilitate the processing of visual stimuli at the sound's location. This cross-modal capture of visual attention was found to occur even when the attracting sound was irrelevant to the ongoing task and was non-predictive of subsequent events. A slow positive component in the event-related potential (ERP) that was localized to the visual cortex was found to be closely coupled with the orienting of visual attention to a sound's location. This neural sign of visual cortex activation was predictive of enhanced perceptual processing and was paralleled by a desynchronization (blocking) of the ongoing occipital alpha rhythm. Further research is needed to determine the nature of the relationship between the slow positive ERP evoked by the sound and the alpha desynchronization and to understand how these electrophysiological processes contribute to improved visual-perceptual processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Is 9 louder than 1? Audiovisual cross-modal interactions between number magnitude and judged sound loudness.

    PubMed

    Alards-Tomalin, Doug; Walker, Alexander C; Shaw, Joshua D M; Leboe-McGowan, Launa C

    2015-09-01

    The cross-modal impact of number magnitude (i.e., Arabic digits) on perceived sound loudness was examined. Participants compared a target sound's intensity level against a previously heard reference sound (which they judged as quieter or louder). Paired with each target sound was a task-irrelevant Arabic digit that varied in magnitude, being either small (1, 2, 3) or large (7, 8, 9). The degree to which the sound and the digit were synchronized was manipulated, with the digit and sound occurring simultaneously in Experiment 1, and the digit preceding the sound in Experiment 2. First, when target sounds and digits occurred simultaneously, sounds paired with large digits were categorized as loud more frequently than sounds paired with small digits. Second, when the events were separated, number magnitude ceased to bias sound intensity judgments. In Experiment 3, the events were still separated; however, the participants held the number in short-term memory. In this instance, the bias returned. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Suppression and Working Memory in Auditory Comprehension of L2 Narratives: Evidence from Cross-Modal Priming.

    PubMed

    Wu, Shiyu; Ma, Zheng

    2016-10-01

    Using a cross-modal priming task, the present study explores whether Chinese-English bilinguals process goal related information during auditory comprehension of English narratives like native speakers. Results indicate that English native speakers adopted both mechanisms of suppression and enhancement to modulate the activation of goals and keep track of the "causal path" in narrative events and that L1 speakers with higher working memory (WM) capacity are more skilled at attenuating interference. L2 speakers, however, experienced the phenomenon of "facilitation-without-inhibition." Their difficulty in suppressing irrelevant information was related to their performance in the test of working memory capacity. For the L2 group with greater working memory capacity, the effects of both enhancement and suppression were found. These findings are discussed in light of a landscape model of L2 text comprehension which highlights the need for WM to be incorporated into comprehensive models of L2 processing as well as theories of SLA.

  18. Adaptation of Physiological and Cognitive Workload via Interactive Multi-modal Displays

    DTIC Science & Technology

    2014-05-28

    [Report administrative front matter only; the abstract did not survive extraction. Recoverable details: a peer-reviewed paper received 09/07/2013 by James Merlo, Joseph E. Mercado, Jan B. F. Van Erp, and Peter A. Hancock ("Improving..."), and a paper by Joseph Mercado, Timothy White, and Peter Hancock on effects of cross-modal sensory cueing and automation failure in a target detection task, followed by personnel/effort figures.]

  19. One bout of open skill exercise improves cross-modal perception and immediate memory in healthy older adults who habitually exercise.

    PubMed

    O'Brien, Jessica; Ottoboni, Giovanni; Tessari, Alessia; Setti, Annalisa

    2017-01-01

    One single bout of exercise can be associated with positive effects on cognition, due to physiological changes associated with muscular activity, increased arousal, and training of cognitive skills during exercise. While the positive effects of life-long physical activity on cognitive ageing are well demonstrated, it is not well established whether one bout of exercise is sufficient to register such benefits in older adults. The aim of this study was to test the effect of one bout of exercise on two cognitive processes essential to daily life and known to decline with ageing: audio-visual perception and immediate memory. Fifty-eight older adults took part in a quasi-experimental design study and were divided into three groups based on their habitual activity: open skill exercise (mean age = 69.65, SD = 5.64); closed skill exercise (N = 18, 94% female); and a sedentary activity control group (N = 21, 62% female). They were then tested before and after their activity (duration between 60 and 80 minutes). Results showed improvement in sensitivity in audio-visual perception in the open skill group and improvements in one of the measures of immediate memory in both exercise groups, after controlling for baseline differences including global cognition and health. These findings indicate that immediate benefits for cross-modal perception and memory can be obtained after open skill exercise. However, improvements after closed skill exercise may be limited to memory benefits. Perceptual benefits are likely to be associated with arousal, while memory benefits may be due to the training effects provided by task requirements during exercise. The respective role of qualitative and quantitative differences between these activities in terms of immediate cognitive benefits should be further investigated. Importantly, the present results provide the first evidence for a modulation of cross-modal perception by exercise, providing a plausible avenue for rehabilitation of cross-modal perception deficits, which are emerging as a significant contributor to functional decline in ageing.

  20. Manipulating Bodily Presence Affects Cross-Modal Spatial Attention: A Virtual-Reality-Based ERP Study.

    PubMed

    Harjunen, Ville J; Ahmed, Imtiaj; Jacucci, Giulio; Ravaja, Niklas; Spapé, Michiel M

    2017-01-01

    Earlier studies have revealed cross-modal visuo-tactile interactions in endogenous spatial attention. The current research used event-related potentials (ERPs) and virtual reality (VR) to identify how the visual cues of the perceiver's body affect visuo-tactile interaction in endogenous spatial attention and at what point in time the effect takes place. A bimodal oddball task with lateralized tactile and visual stimuli was presented in two VR conditions, one with and one without visible hands, and one VR-free control with hands in view. Participants were required to silently count one type of stimulus and ignore all other stimuli presented in an irrelevant modality or location. The presence of hands was found to modulate early and late components of somatosensory and visual evoked potentials. For sensory-perceptual stages, the presence of virtual or real hands was found to amplify attention-related negativity on the somatosensory N140 and cross-modal interaction in somatosensory and visual P200. For postperceptual stages, an amplified N200 component was obtained in somatosensory and visual evoked potentials, indicating increased response inhibition in response to non-target stimuli. The somatosensory, but not the visual, N200 effect was enhanced when the virtual hands were present. The findings suggest that bodily presence affects sustained cross-modal spatial attention between vision and touch and that this effect is specifically present in ERPs related to early- and late-sensory processing, as well as response inhibition, but does not affect later attention- and memory-related P3 activity. Finally, the experiments provide commensurable scenarios for estimating the signal-to-noise ratio to quantify effects related to the use of a head-mounted display (HMD). However, despite valid a priori reasons for fearing signal interference due to an HMD, we observed no significant drop in the robustness of our ERP measurements.

  1. Manipulating Bodily Presence Affects Cross-Modal Spatial Attention: A Virtual-Reality-Based ERP Study

    PubMed Central

    Harjunen, Ville J.; Ahmed, Imtiaj; Jacucci, Giulio; Ravaja, Niklas; Spapé, Michiel M.

    2017-01-01

    Earlier studies have revealed cross-modal visuo-tactile interactions in endogenous spatial attention. The current research used event-related potentials (ERPs) and virtual reality (VR) to identify how the visual cues of the perceiver’s body affect visuo-tactile interaction in endogenous spatial attention and at what point in time the effect takes place. A bimodal oddball task with lateralized tactile and visual stimuli was presented in two VR conditions, one with and one without visible hands, and one VR-free control with hands in view. Participants were required to silently count one type of stimulus and ignore all other stimuli presented in an irrelevant modality or location. The presence of hands was found to modulate early and late components of somatosensory and visual evoked potentials. For sensory-perceptual stages, the presence of virtual or real hands was found to amplify attention-related negativity on the somatosensory N140 and cross-modal interaction in somatosensory and visual P200. For postperceptual stages, an amplified N200 component was obtained in somatosensory and visual evoked potentials, indicating increased response inhibition in response to non-target stimuli. The somatosensory, but not the visual, N200 effect was enhanced when the virtual hands were present. The findings suggest that bodily presence affects sustained cross-modal spatial attention between vision and touch and that this effect is specifically present in ERPs related to early- and late-sensory processing, as well as response inhibition, but does not affect later attention- and memory-related P3 activity. Finally, the experiments provide commensurable scenarios for estimating the signal-to-noise ratio to quantify effects related to the use of a head-mounted display (HMD). However, despite valid a priori reasons for fearing signal interference due to an HMD, we observed no significant drop in the robustness of our ERP measurements. PMID:28275346

  2. Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration.

    PubMed

    Stropahl, Maren; Debener, Stefan

    2017-01-01

    There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensori-neural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users were observed to show differences in multisensory integration, the question arises if cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mild to moderately hearing impaired individuals (n = 18) and normal hearing controls (n = 17). Cross-modal activation of the auditory cortex by means of EEG source localization in response to human faces and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that were numerically between the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss. This further supports the notion that auditory deprivation evokes a reorganization of the auditory system even at early stages of hearing loss.

  3. Aging effects in response inhibition: general slowing without decline in inhibitory functioning.

    PubMed

    Yano, Madoka

    2011-12-01

    Previous research has examined aging effects on response inhibition using cognitive interference paradigms such as the Stroop task and the Simon task. Performance in these tasks requires participants to inhibit predominant responses. Reduced response inhibition is reflected by poorer performance in incongruent trials, where prepotent responses can interfere with the correct responses, than in congruent trials without such interference (i.e., Stroop or Simon congruency effects). It is unclear whether such effects increase with normal aging. Balota et al. (2010) reported that the Stroop effect can be a useful predictor of conversion to Alzheimer's disease in a healthy control sample. Congruency effects are also subject to trial sequencing: They are smaller following an incongruent trial than following a congruent one. The present study determined whether response inhibition was affected by normal aging using the Simon task, with focus on the influence of normal aging on sequence effects. Forty-three young participants and 14 healthy elderly adults performed the Simon task individually. Results indicated that both age groups showed the same magnitude of Simon effects and sequence effects, although overall response latencies were longer in elderly participants than in young participants. Furthermore, the elderly adults tended to make fewer errors than the younger adults. These findings suggest that normal aging may produce reduced processing speed but it does not affect response inhibition itself.

  4. Associative learning changes cross-modal representations in the gustatory cortex

    PubMed Central

    Vincis, Roberto; Fontanini, Alfredo

    2016-01-01

    A growing body of literature has demonstrated that primary sensory cortices are not exclusively unimodal, but can respond to stimuli of different sensory modalities. However, several questions concerning the neural representation of cross-modal stimuli remain open. Indeed, it is poorly understood if cross-modal stimuli evoke unique or overlapping representations in a primary sensory cortex and whether learning can modulate these representations. Here we recorded single unit responses to auditory, visual, somatosensory, and olfactory stimuli in the gustatory cortex (GC) of alert rats before and after associative learning. We found that, in untrained rats, the majority of GC neurons were modulated by a single modality. Upon learning, both prevalence of cross-modal responsive neurons and their breadth of tuning increased, leading to a greater overlap of representations. Altogether, our results show that the gustatory cortex represents cross-modal stimuli according to their sensory identity, and that learning changes the overlap of cross-modal representations. DOI: http://dx.doi.org/10.7554/eLife.16420.001 PMID:27572258

  5. Overestimation of threat from neutral faces and voices in social anxiety.

    PubMed

    Peschard, Virginie; Philippot, Pierre

    2017-12-01

    Social anxiety (SA) is associated with a tendency to interpret social information in a more threatening manner. Most of the research in SA has focused on unimodal exploration (mostly based on facial expressions), thus neglecting the ubiquity of cross-modality. To fill this gap, the present study sought to explore whether SA influences the interpretation of facial and vocal expressions presented separately or jointly. Twenty-five high socially anxious (HSA) and 29 low socially anxious (LSA) participants completed a forced two-choice emotion identification task consisting of angry and neutral expressions conveyed by faces, voices or combined faces and voices. Participants had to identify the emotion (angry or neutral) of the presented cues as quickly and precisely as possible. Our results showed that, compared to LSA, HSA individuals show a higher propensity to misattribute anger to neutral expressions independent of cue modality and despite preserved decoding accuracy. We also found a cross-modal facilitation effect at the level of accuracy (i.e., higher accuracy in the bimodal condition compared to unimodal ones). However, such effect was not moderated by SA. Although the HSA group showed clinical cut-off scores at the Liebowitz Social Anxiety Scale, one limitation is that we did not administer diagnostic interviews. Upcoming studies may want to test whether these results can be generalized to a clinical population. These findings highlight the usefulness of a cross-modal perspective to probe the specificity of biases in SA. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Cross-modal reorganization in cochlear implant users: Auditory cortex contributes to visual face processing.

    PubMed

    Stropahl, Maren; Plotz, Karsten; Schönfeld, Rüdiger; Lenarz, Thomas; Sandmann, Pascale; Yovel, Galit; De Vos, Maarten; Debener, Stefan

    2015-11-01

    There is converging evidence that the auditory cortex takes over visual functions during a period of auditory deprivation. A residual pattern of cross-modal take-over may prevent the auditory cortex from adapting to restored sensory input as delivered by a cochlear implant (CI) and limit speech intelligibility with a CI. The aim of the present study was to investigate whether visual face processing in CI users activates auditory cortex and whether this has adaptive or maladaptive consequences. High-density electroencephalogram data were recorded from CI users (n=21) and age-matched normal hearing (NH) controls (n=21) performing a face versus house discrimination task. Lip reading and face recognition abilities were measured as well as speech intelligibility. Evaluation of event-related potential (ERP) topographies revealed significant group differences over occipito-temporal scalp regions. Distributed source analysis identified significantly higher activation in the right auditory cortex for CI users compared to NH controls, confirming visual take-over. Lip reading skills were significantly enhanced in the CI group and appeared to be particularly better after a longer duration of deafness, while face recognition was not significantly different between groups. However, auditory cortex activation in CI users was positively related to face recognition abilities. Our results confirm a cross-modal reorganization for ecologically valid visual stimuli in CI users. Furthermore, they suggest that residual take-over, which can persist even after adaptation to a CI, is not necessarily maladaptive. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. The Word Composite Effect Depends on Abstract Lexical Representations But Not Surface Features Like Case and Font.

    PubMed

    Ventura, Paulo; Fernandes, Tânia; Leite, Isabel; Almeida, Vítor B; Casqueiro, Inês; Wong, Alan C-N

    2017-01-01

    Prior studies have shown that words show a composite effect: When readers perform a same-different matching task on a target part of a word, performance is affected by the irrelevant part, whose influence is severely reduced when the two parts are misaligned. However, the locus of this word composite effect is largely unknown. To shed light on this issue, in two experiments, Portuguese readers performed the composite task on letter strings: in Experiment 1, on written words varying in surface features (between-participants: Courier, Notera, alternating cAsE), and in Experiment 2 on pseudowords. The word composite effect, signaled by a significant interaction between alignment of the two word parts and congruence between parts, was found in the three conditions of Experiment 1, being unaffected by NoVeLtY of the configuration or by handwritten form. This effect seems to have a lexical locus, given that in Experiment 2 only the main effect of congruence between parts was significant and was not modulated by alignment. Indeed, the cross-experiment analysis showed that words presented stronger congruence effects than pseudowords only in the aligned condition, because when misaligned the whole lexical-item configuration was disrupted. Therefore, the word composite effect strongly depends on abstract lexical representations, as it is unaffected by surface features and is specific to lexical items.

  8. Memory-guided selective attention: Single experiences with conflict have long-lasting effects on cognitive control.

    PubMed

    Brosowsky, Nicholaus P; Crump, Matthew J C

    2018-05-17

    Adjustments in cognitive control, as measured by congruency sequence effects, are thought to be influenced by both external stimuli and internal goals. However, this dichotomy has often overshadowed the potential contribution of past experience stored in memory. Here, we examine the role of long-term episodic memory in guiding selective attention. Our aim was to demonstrate new evidence that selective attention can be modulated by long-term retrieval of stimulus-specific attentional control settings. All the experiments used a modified flanker task involving multiple unique stimuli. Critically, each stimulus was only presented twice during the experiment: first as a prime, and second as a probe. Experiments 1 and 2 varied the number of intervening trials between prime and probe and manipulated the amount of conflict using a secondary task. Experiment 3 ensured that specific colors assigned to prime stimuli were not repeated when presented as probes. Across both Experiments 1 and 2, we consistently found smaller congruency effects on probe trials when its associated prime trial was incongruent compared with congruent, demonstrating long-term congruency sequence effects. However, Experiment 3 showed no evidence for long-term effects. These findings suggest long-term preservation of selective attention processing at the episodic level, and implicate a role for memory in updating cognitive control. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  9. Increased cognitive control after task conflict? Investigating the N-3 effect in task switching.

    PubMed

    Schuch, Stefanie; Grange, James A

    2018-05-25

    Task inhibition is considered to facilitate switching to a new task and is assumed to decay slowly over time. Hence, more persisting inhibition needs to be overcome when returning to a task after one intermediary trial (ABA task sequence) than when returning after two or more intermediary trials (CBA task sequence). Schuch and Grange (J Exp Psychol Learn Mem Cogn 41:760-767, 2015) put forward the hypothesis that there is higher task conflict in ABA than CBA sequences, leading to increased cognitive control in the subsequent trial. They provided evidence that performance is better in trials following ABA than following CBA task sequences. Here, this effect of the previous task sequence ("N-3 effect") is further investigated by varying the cue-stimulus interval (CSI), allowing for short (100 ms) or long (900 ms) preparation time for the upcoming task. If increased cognitive control after ABA involves a better preparation for the upcoming task, the N-3 effect should be larger with long than short CSI. The results clearly show that this is not the case. In Experiment 1, the N-3 effect was smaller with long than short CSI; in Experiment 2, the N-3 effect was not affected by CSI. Diffusion model analysis confirmed previous results in the literature (regarding the effect of CSI and of the ABA-CBA difference); however, the N-3 effect was not unequivocally associated with any of the diffusion model parameters. In exploratory analysis, we also tested the alternative hypothesis that the N-3 effect involves more effective task shielding, which would be reflected in reduced congruency effects in trials following ABA, relative to trials following CBA; congruency effects did not differ between these conditions. Taken together, we can rule out two potential explanations of the N-3 effect: Neither is this effect due to enhanced task preparation, nor to more effective task shielding.
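
    For readers unfamiliar with diffusion-model analysis of such data: one widely used shortcut, the EZ-diffusion method (Wagenmakers, van der Maas, & Grasman, 2007), recovers drift rate, boundary separation, and non-decision time from just the proportion correct, the variance of correct RTs, and the mean correct RT. The sketch below uses made-up condition summaries and is not a claim about the fitting procedure actually used in this study.

        import math

        def ez_diffusion(pc, vrt, mrt, s=0.1):
            """EZ-diffusion (Wagenmakers et al., 2007): drift rate v, boundary a, and
            non-decision time Ter from proportion correct, RT variance, mean RT (seconds)."""
            assert 0 < pc < 1 and pc != 0.5, "apply an edge correction for pc in {0, 0.5, 1}"
            L = math.log(pc / (1 - pc))
            x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
            v = math.copysign(1, pc - 0.5) * s * x**0.25       # drift rate
            a = s**2 * L / v                                   # boundary separation
            y = -v * a / s**2
            mdt = (a / (2 * v)) * (1 - math.exp(y)) / (1 + math.exp(y))
            return v, a, mrt - mdt                             # Ter = mean RT minus mean decision time

        # Made-up summaries, e.g., trials following ABA vs. CBA sequences.
        print(ez_diffusion(pc=0.94, vrt=0.030, mrt=0.82))
        print(ez_diffusion(pc=0.91, vrt=0.034, mrt=0.87))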

  10. Temporal Ventriloquism Reveals Intact Audiovisual Temporal Integration in Amblyopia.

    PubMed

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-02-01

    We have shown previously that amblyopia involves impaired detection of asynchrony between auditory and visual events. To distinguish whether this impairment represents a defect in temporal integration or nonintegrative multisensory processing (e.g., cross-modal matching), we used the temporal ventriloquism effect in which visual temporal order judgment (TOJ) is normally enhanced by a lagging auditory click. Participants with amblyopia (n = 9) and normally sighted controls (n = 9) performed a visual TOJ task. Pairs of clicks accompanied the two lights such that the first click preceded the first light, or second click lagged the second light by 100, 200, or 450 ms. Baseline audiovisual synchrony and visual-only conditions also were tested. Within both groups, just noticeable differences for the visual TOJ task were significantly reduced compared with baseline in the 100- and 200-ms click lag conditions. Within the amblyopia group, poorer stereo acuity and poorer visual acuity in the amblyopic eye were significantly associated with greater enhancement in visual TOJ performance in the 200-ms click lag condition. Audiovisual temporal integration is intact in amblyopia, as indicated by perceptual enhancement in the temporal ventriloquism effect. Furthermore, poorer stereo acuity and poorer visual acuity in the amblyopic eye are associated with a widened temporal binding window for the effect. These findings suggest that previously reported abnormalities in audiovisual multisensory processing may result from impaired cross-modal matching rather than a diminished capacity for temporal audiovisual integration.
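
    The just noticeable difference (JND) in a visual TOJ task is typically obtained by fitting a psychometric function to the proportion of one temporal-order response across stimulus onset asynchronies and reading off its spread. The SOA values, response proportions, and cumulative-Gaussian model below are illustrative assumptions, not the study's stimuli or fitting routine.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        # Stimulus onset asynchronies (ms; negative = left light first) and hypothetical
        # proportions of "right light first" responses.
        soa = np.array([-80, -40, -20, 0, 20, 40, 80], dtype=float)
        p_right_first = np.array([0.05, 0.20, 0.35, 0.55, 0.70, 0.85, 0.97])

        def psychometric(x, pse, sigma):
            # Cumulative Gaussian: pse = point of subjective simultaneity, sigma = spread.
            return norm.cdf(x, loc=pse, scale=sigma)

        (pse, sigma), _ = curve_fit(psychometric, soa, p_right_first, p0=(0.0, 30.0))

        # One common JND definition: half the 25%-75% span of the fitted curve,
        # which for a cumulative Gaussian equals 0.6745 * sigma.
        jnd = 0.6745 * sigma
        print(f"PSE = {pse:.1f} ms, JND = {jnd:.1f} ms")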

  11. Functional dissociations in top-down control dependent neural repetition priming.

    PubMed

    Klaver, Peter; Schnaidt, Malte; Fell, Jürgen; Ruhlmann, Jürgen; Elger, Christian E; Fernández, Guillén

    2007-02-15

    Little is known about the neural mechanisms underlying top-down control of repetition priming. Here, we use functional brain imaging to investigate these mechanisms. Both the study and repetition phases used a natural/man-made forced-choice task. In the study phase, subjects were required to respond to either pictures or words that were presented superimposed on each other. In the repetition phase, only words were presented that were new, previously attended or ignored, or picture names that were derived from previously attended or ignored pictures. Relative to new words, we found repetition priming for previously attended words. Previously ignored words showed a reduced priming effect, and there was no significant priming for pictures repeated as picture names. Brain imaging data showed that neural priming of words in the left prefrontal cortex (LIPFC) and left fusiform gyrus (LOTC) was affected by attention, semantic compatibility of superimposed stimuli during study, and cross-modal priming. Neural priming was reduced for words in the LIPFC and for words and pictures in the LOTC if stimuli were previously ignored. Previously ignored words that were semantically incompatible with a superimposed picture during study induced increased neural priming compared to semantically compatible ignored words (LIPFC) and decreased neural priming of previously attended pictures (LOTC). In summary, top-down control induces dissociable effects on neural priming by attention, cross-modal priming, and semantic compatibility in a way that was not evident from behavioral results.

  12. Unconscious Cross-Modal Priming of Auditory Sound Localization by Visual Words

    ERIC Educational Resources Information Center

    Ansorge, Ulrich; Khalid, Shah; Laback, Bernhard

    2016-01-01

    Little is known about the cross-modal integration of unconscious and conscious information. In the current study, we therefore tested whether the spatial meaning of an unconscious visual word, such as "up", influences the perceived location of a subsequently presented auditory target. Although cross-modal integration of unconscious…

  13. Numerical Magnitude Representation in Children With Mathematical Difficulties With or Without Reading Difficulties.

    PubMed

    Tobia, Valentina; Fasola, Anna; Lupieri, Alice; Marzocchi, Gian Marco

    2016-01-01

    This study aimed to explore the spatial numerical association of response codes (SNARC), the flanker, and the numerical distance effects in children with mathematical difficulties. From a sample of 720 third, fourth, and fifth graders, 60 children were selected and divided into the following three groups: typically developing children (TD; n = 29), children with mathematical difficulties only (MD only; n = 21), and children with mathematical and reading difficulties (MD+RD; n = 10). Children were tested with a numerical Eriksen task that was built to assess SNARC, numerical distance, and flanker (first and second order congruency) effects. Children with MD only showed stronger SNARC and second order congruency effects than did TD children, whereas the numerical distance effects were similar across the three groups. Finally, the first order congruency effect was associated with reading difficulties. These results showed that children with mathematical difficulties with or without reading difficulties were globally more impaired when spatial incompatibilities were presented. © Hammill Institute on Disabilities 2014.
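
    As background on how the SNARC effect is commonly quantified: the standard regression approach computes, for each participant, the right-hand minus left-hand RT difference per digit and regresses it on digit magnitude, with a negative slope indicating the usual small-left/large-right association. The numbers below are invented for illustration and are not data from this study.

        import numpy as np

        digits = np.array([1, 2, 3, 4, 6, 7, 8, 9], dtype=float)
        # Hypothetical per-digit mean RTs (ms) for right- and left-hand responses of one child.
        rt_right = np.array([520, 515, 512, 508, 496, 492, 488, 485], dtype=float)
        rt_left  = np.array([505, 507, 509, 512, 518, 522, 526, 530], dtype=float)

        drt = rt_right - rt_left                       # dRT = RT(right) - RT(left) per digit
        slope, intercept = np.polyfit(digits, drt, 1)  # regress dRT on numerical magnitude
        print(f"SNARC slope = {slope:.2f} ms per unit magnitude")  # negative => SNARC effect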

  14. Congruency effects in the remote distractor paradigm: evidence for top-down modulation.

    PubMed

    Born, Sabine; Kerzel, Dirk

    2009-08-10

    In three experiments, we examined effects of target-distractor similarity in the remote distractor effect (RDE). Observers made saccades to peripheral targets that were either gray or green. Foveal or peripheral distractors were presented at the same time. The distractors could either share the target's defining property (congruent) or be different from the target (incongruent). Congruent distractors slowed down saccadic reaction times more than incongruent distractors. The increase of the RDE with target-distractor congruency depended on task demands. The more participants had to rely on the target property to locate the target, the larger the congruency effect. We conclude that the RDE can be modulated in a top-down manner. Alternative explanations such as persisting memory traces for the target property or differences in stimulus arrangement were considered but discarded. Our claim is in line with models of saccade generation which assume that the structures underlying the RDE (e.g. the superior colliculus) receive bottom-up as well as top-down information.

  15. Dissociative Global and Local Task-Switching Costs Across Younger Adults, Middle-Aged Adults, Older Adults, and Very Mild Alzheimer Disease Individuals

    PubMed Central

    Huff, Mark J.; Balota, David A.; Minear, Meredith; Aschenbrenner, Andrew J.; Duchek, Janet M.

    2015-01-01

    A task-switching paradigm was used to examine differences in attentional control across younger adults, middle-aged adults, healthy older adults, and individuals classified in the earliest detectable stage of Alzheimer's disease (AD). A large sample of participants (570) completed a switching task in which participants were cued to classify either the letter (consonant/vowel) or the number (odd/even) task-set dimension of a bivalent stimulus (e.g., A 14). A Pure block consisting of single-task trials and a Switch block consisting of nonswitch and switch trials were completed. Local (switch vs. nonswitch trials) and global (nonswitch vs. pure trials) costs in mean error rates, mean response latencies, and underlying reaction time distributions, along with stimulus-response congruency effects, were computed. Local costs in errors were group invariant, but global costs in errors systematically increased as a function of age and AD. Response latencies yielded a strong dissociation: Local costs decreased across groups whereas global costs increased across groups. Vincentile distribution analyses revealed that the dissociation of local and global costs primarily occurred in the slowest response latencies. Stimulus-response congruency effects within the Switch block were particularly robust in accuracy in the very mild AD group. We argue that the results are consistent with the notion that the impaired groups show a reduced local cost because the task sets are not as well tuned, and hence produce minimal cost on switch trials. In contrast, global costs increase because of the additional burden on working memory of maintaining two task sets. PMID:26652720
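
    Vincentile (vincentized) distribution analysis averages quantile-binned RTs within each participant and then across participants, so that whole RT distributions can be compared bin by bin rather than only at the mean. A minimal sketch of the computation, with synthetic RTs and an assumed number of bins:

        import numpy as np

        rng = np.random.default_rng(2)
        n_participants, n_trials, n_bins = 30, 200, 10

        # Hypothetical RTs (ms): each row is one participant's trials in a given condition.
        rts = rng.lognormal(mean=6.5, sigma=0.3, size=(n_participants, n_trials))

        def vincentiles(x, n_bins):
            """Sort one participant's RTs and average within n_bins equal-sized quantile bins."""
            x = np.sort(x)
            return np.array([chunk.mean() for chunk in np.array_split(x, n_bins)])

        per_participant = np.array([vincentiles(row, n_bins) for row in rts])
        group_vincentiles = per_participant.mean(axis=0)   # average each bin across participants
        print(np.round(group_vincentiles, 1))              # last value = slowest bin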

  16. Asymmetries of Influence: Differential Effects of Body Postures on Perceptions of Emotional Facial Expressions

    PubMed Central

    Mondloch, Catherine J.; Nelson, Nicole L.; Horner, Matthew

    2013-01-01

    The accuracy and speed with which emotional facial expressions are identified are influenced by body postures. Two influential models predict that these congruency effects will be largest when the emotion displayed in the face is similar to that displayed in the body: the emotional seed model and the dimensional model. These models differ in whether similarity is based on physical characteristics or underlying dimensions of valence and arousal. Using a 3-alternative forced-choice task in which stimuli were presented briefly (Exp 1a) or for an unlimited time (Exp 1b), we provide evidence that congruency effects are more complex than either model predicts; the effects are asymmetrical and cannot be accounted for by similarity alone. Fearful postures are especially influential when paired with facial expressions, but not when presented in a flanker task (Exp 2). We suggest refinements to each model that may account for our results and recommend that additional studies be conducted prior to drawing strong theoretical conclusions. PMID:24039996

  17. Visual and analytical strategies in spatial visualisation: perspectives from bilateral symmetry and reflection

    NASA Astrophysics Data System (ADS)

    Ramful, Ajay; Ho, Siew Yin; Lowrie, Tom

    2015-12-01

    This inquiry presents two fine-grained case studies of students demonstrating different levels of cognitive functioning in relation to bilateral symmetry and reflection. The two students were asked to solve four sets of tasks and articulate their reasoning in task-based interviews. The first participant, Brittany, focused essentially on three criteria, namely (1) equidistance, (2) congruence of sides, and (3) 'exactly opposite' as the intuitive counterpart of perpendicularity for performing reflection. On the other hand, the second participant, Sara, focused on perpendicularity and equidistance, as is the normative procedure. Brittany's inadequate knowledge of reflection shaped her actions and served as a validation for her solutions. Intuitively, her visual strategies took over as a fallback measure to maintain congruence of sides in the absence of a formal notion of perpendicularity. In this paper, we address some of the well-known constraints that students encounter in dealing with bilateral symmetry and reflection, particularly situations involving an inclined line of symmetry. Importantly, we make an attempt to show how visual and analytical strategies interact in the production of a reflected image. Our findings highlight the necessity to give more explicit attention to the notion of perpendicularity in bilateral symmetry and reflection tasks.

  18. Reorganization of neural systems mediating peripheral visual selective attention in the deaf: An optical imaging study.

    PubMed

    Seymour, Jenessa L; Low, Kathy A; Maclin, Edward L; Chiarelli, Antonio M; Mathewson, Kyle E; Fabiani, Monica; Gratton, Gabriele; Dye, Matthew W G

    2017-01-01

    Theories of brain plasticity propose that, in the absence of input from the preferred sensory modality, some specialized brain areas may be recruited when processing information from other modalities, which may result in improved performance. The Useful Field of View task has previously been used to demonstrate that early deafness positively impacts peripheral visual attention. The current study sought to determine the neural changes associated with those deafness-related enhancements in visual performance. Based on previous findings, we hypothesized that recruitment of posterior portions of Brodmann area 22, a brain region most commonly associated with auditory processing, would be correlated with peripheral selective attention as measured using the Useful Field of View task. We report data from severe to profoundly deaf adults and normal-hearing controls who performed the Useful Field of View task while cortical activity was recorded using the event-related optical signal. Behavioral performance, obtained in a separate session, showed that deaf subjects had lower thresholds (i.e., better performance) on the Useful Field of View task. The event-related optical data indicated greater activity for the deaf adults than for the normal-hearing controls during the task in the posterior portion of Brodmann area 22 in the right hemisphere. Furthermore, the behavioral thresholds correlated significantly with this neural activity. This work provides further support for the hypothesis that cross-modal plasticity in deaf individuals appears in higher-order auditory cortices, whereas no similar evidence was obtained for primary auditory areas. It is also the only neuroimaging study to date that has linked deaf-related changes in the right temporal lobe to visual task performance outside of the imaging environment. The event-related optical signal is a valuable technique for studying cross-modal plasticity in deaf humans. The non-invasive and relatively quiet characteristics of this technique have great potential utility in research with clinical populations such as deaf children and adults who have received cochlear or auditory brainstem implants. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Crossmodal Connections of Primary Sensory Cortices Largely Vanish During Normal Aging

    PubMed Central

    Henschke, Julia U.; Ohl, Frank W.; Budinger, Eike

    2018-01-01

    During aging, human response times (RTs) to unisensory and crossmodal stimuli increase. However, the elderly benefit more from crossmodal stimulus representations than younger people. The underlying short-latency multisensory integration process is mediated by direct crossmodal connections at the level of primary sensory cortices. We investigate the age-related changes of these connections using a rodent model (Mongolian gerbil), retrograde tracer injections into the primary auditory (A1), somatosensory (S1), and visual cortex (V1), and immunohistochemistry for markers of apoptosis (Caspase-3), axonal plasticity (growth-associated protein 43, GAP-43), and a calcium-binding protein (Parvalbumin, PV). In adult animals, primary sensory cortices receive a substantial number of direct thalamic inputs from nuclei of their matched, but also from nuclei of non-matched sensory modalities. There are also direct intracortical connections among primary sensory cortices and connections with secondary sensory cortices of other modalities. In very old animals, the crossmodal connections strongly decrease in number or vanish entirely. This is likely due to a retraction of the projection neurons' axonal branches rather than ongoing programmed cell death. The loss of crossmodal connections is also accompanied by changes in anatomical correlates of inhibition and excitation in the sensory thalamus and cortex. Together, the loss and restructuring of crossmodal connections during aging suggest a shift of multisensory processing from primary cortices towards other sensory brain areas in elderly individuals. PMID:29551970

  1. Do early sensory cortices integrate cross-modal information?

    PubMed

    Kayser, Christoph; Logothetis, Nikos K

    2007-09-01

    Our different senses provide complementary evidence about the environment, and their interaction often aids behavioral performance or alters the quality of the sensory percept. A traditional view defers the merging of sensory information to higher association cortices and posits that a large part of the brain can be reduced to a collection of unisensory systems that can be studied in isolation. Recent studies, however, challenge this view and suggest that cross-modal interactions can already occur in areas hitherto regarded as unisensory. We review results from functional imaging and electrophysiology exemplifying cross-modal interactions that occur early during the evoked response, and at the earliest stages of sensory cortical processing. Although anatomical studies revealed several potential origins of these cross-modal influences, there is as yet no clear relation between particular functional observations and specific anatomical connections. In addition, our view of sensory integration at the neuronal level is shaped by many studies of subcortical model systems of sensory integration; yet the patterns of cross-modal interaction in cortex deviate from these model systems in several ways. Consequently, future studies of cortical sensory integration need to move beyond the descriptive level and incorporate cross-modal influences into models of the organization of sensory processing. Only then will we be able to determine whether early cross-modal interactions truly merit the label of sensory integration, and how they increase a sensory system's ability to scrutinize its environment and ultimately aid behavior.

  2. Evaluating the Performance of a Visually Guided Hearing Aid Using a Dynamic Auditory-Visual Word Congruence Task.

    PubMed

    Roverud, Elin; Best, Virginia; Mason, Christine R; Streeter, Timothy; Kidd, Gerald

    2017-12-15

    The "visually guided hearing aid" (VGHA), consisting of a beamforming microphone array steered by eye gaze, is an experimental device being tested for effectiveness in laboratory settings. Previous studies have found that beamforming without visual steering can provide significant benefits (relative to natural binaural listening) for speech identification in spatialized speech or noise maskers when sound sources are fixed in location. The aim of the present study was to evaluate the performance of the VGHA in listening conditions in which target speech could switch locations unpredictably, requiring visual steering of the beamforming. To address this aim, the present study tested an experimental simulation of the VGHA in a newly designed dynamic auditory-visual word congruence task. Ten young normal-hearing (NH) and 11 young hearing-impaired (HI) adults participated. On each trial, three simultaneous spoken words were presented from three source positions (-30, 0, and 30 azimuth). An auditory-visual word congruence task was used in which participants indicated whether there was a match between the word printed on a screen at a location corresponding to the target source and the spoken target word presented acoustically from that location. Performance was compared for a natural binaural condition (stimuli presented using impulse responses measured on KEMAR), a simulated VGHA condition (BEAM), and a hybrid condition that combined lowpass-filtered KEMAR and highpass-filtered BEAM information (BEAMAR). In some blocks, the target remained fixed at one location across trials, and in other blocks, the target could transition in location between one trial and the next with a fixed but low probability. Large individual variability in performance was observed. There were significant benefits for the hybrid BEAMAR condition relative to the KEMAR condition on average for both NH and HI groups when the targets were fixed. Although not apparent in the averaged data, some individuals showed BEAM benefits relative to KEMAR. Under dynamic conditions, BEAM and BEAMAR performance dropped significantly immediately following a target location transition. However, performance recovered by the second word in the sequence and was sustained until the next transition. When performance was assessed using an auditory-visual word congruence task, the benefits of beamforming reported previously were generally preserved under dynamic conditions in which the target source could move unpredictably from one location to another (i.e., performance recovered rapidly following source transitions) while the observer steered the beamforming via eye gaze, for both young NH and young HI groups.

  3. Influence of auditory and audiovisual stimuli on the right-left prevalence effect.

    PubMed

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    When auditory stimuli are used in two-dimensional spatial compatibility tasks, where the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right-left prevalence effect occurs in which horizontal compatibility dominates over vertical compatibility. The right-left prevalence effects obtained with auditory stimuli are typically larger than those obtained with visual stimuli, even though less attention should be demanded from the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance occurs when the two-dimensional stimuli are audiovisual, as well as whether there would be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension and vision tends to dominate audition in this two-dimensional spatial stimulus-response task.

  4. Sounds Activate Visual Cortex and Improve Visual Discrimination

    PubMed Central

    Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.

    2014-01-01

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419

  5. The functional alterations associated with motor imagery training: a comparison between motor execution and motor imagery of sequential finger tapping

    NASA Astrophysics Data System (ADS)

    Zhang, Hang; Yao, Li; Long, Zhiying

    2011-03-01

    Motor imagery training, as an effective strategy, has been increasingly applied to the rehabilitation of mental disorders and to motor skill learning. Studies on the neural mechanism underlying motor imagery have suggested that such effectiveness may be related to the functional congruence between motor execution and motor imagery. However, compared to studies on motor imagery itself, studies on motor imagery training are much fewer. The functional alterations associated with motor imagery training and the effectiveness of motor imagery training in improving motor performance still need further investigation. Using fMRI, we employed a sequential finger tapping paradigm to explore the functional alterations associated with motor imagery training in both motor execution and motor imagery tasks. We hypothesized that, through 14 consecutive days of motor imagery training, motor performance would improve and the functional congruence between motor execution and motor imagery would be sustained from the pre-training phase to the post-training phase. Our results confirmed the effectiveness of motor imagery training in improving motor performance and demonstrated that, in both pre- and post-training phases, motor imagery and motor execution consistently sustained their congruence in functional neuroanatomy, including the SMA (supplementary motor area), PMA (premotor area), M1 (primary motor cortex) and cerebellum. Moreover, for both execution and imagery tasks, a similar functional alteration was observed in the fusiform gyrus through motor imagery training. These findings provide insight into the effectiveness of motor imagery training and suggest its potential therapeutic value in motor rehabilitation.

  6. The development of automated access to symbolic and non-symbolic number knowledge in children: an ERP study.

    PubMed

    Gebuis, Titia; Herfs, Inkeri K; Kenemans, J Leon; de Haan, Edward H F; van der Smagt, Maarten J

    2009-11-01

    Infants can visually detect changes in numerosity, which suggests that a (non-symbolic) numerosity system is already present early in life. This non-symbolic system is hypothesized to serve as the basis for the later acquired symbolic system. Little is known about the processes underlying the transition from the non-symbolic to the symbolic code. In the current study we investigated the development of automatization of symbolic number processing in children from second grade (6.0 years) and fourth grade (8.0 years) and in adults, using a symbolic and non-symbolic size congruency task and event-related potentials (ERPs) as a measure. The comparison between symbolic and non-symbolic size congruency effects (SCEs) allowed us to disentangle processes necessary to perform the task from processes specific to numerosity notation. In contrast to previous studies, second graders already revealed a behavioral symbolic SCE similar to that of adults. In addition, the behavioral SCE increased for symbolic and decreased for non-symbolic notation with increasing age. For all age groups, the ERP data showed that the two magnitudes interfered at a level before selective activation of the response system, for both notations. However, only for the second graders were distinct processes recruited to perform the symbolic size comparison task. This shift in recruited processes, observed for the symbolic task only, might reflect the functional specialization of the parietal cortex.

  7. "Objectifying the subjective: Building blocks of metacognitive experiences in conflict tasks": Correction to Questienne et al. (2018).

    PubMed

    2018-05-01

    Reports an error in "Objectifying the subjective: Building blocks of metacognitive experiences in conflict tasks" by Laurence Questienne, Anne Atas, Boris Burle and Wim Gevers (Journal of Experimental Psychology: General, 2018[Jan], Vol 147[1], 125-131). In this article, the second sentence of the second paragraph of the Data Processing section is incorrect due to a production error. The second sentence should read as follows: RTs slower/shorter than Median ± 3 Median Absolute Deviations computed by participant were removed. (The following abstract of the original article appeared in record 2017-52065-001.) Metacognitive appraisals are essential for optimizing our information processing. In conflict tasks, metacognitive appraisals can result from different interrelated features (e.g., motor activity, visual awareness, response speed). Thanks to an original approach combining behavioral and electromyographic measures, the current study objectified the contribution of three features (reaction time [RT], motor hesitation with and without response competition, and visual congruency) to the subjective experience of urge-to-err in a priming conflict task. Both RT and motor hesitation with response competition were major determinants of metacognitive appraisals. Importantly, motor hesitation in the absence of response competition and visual congruency had only a limited effect. Because science aims to rely on objectivity, subjective experiences are often discarded from scientific inquiry. The current study shows that subjectivity can be objectified. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
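    Illustrative note: the corrected sentence describes a per-participant outlier rule based on the median and the median absolute deviation (MAD). The sketch below is a minimal, generic implementation of that kind of trimming; the column names and the cutoff of 3 MADs are assumptions for illustration, not code from the study.

```python
import pandas as pd

def trim_rts_by_mad(trials, rt_col="rt", subject_col="participant", k=3.0):
    """Drop trials whose RT is more than k median absolute deviations
    away from that participant's median RT (both tails)."""
    def keep_within_bounds(group):
        median_rt = group[rt_col].median()
        mad = (group[rt_col] - median_rt).abs().median()
        if mad == 0:
            return group  # degenerate case: no spread, keep all trials
        return group[(group[rt_col] - median_rt).abs() <= k * mad]
    return trials.groupby(subject_col, group_keys=False).apply(keep_within_bounds)

# Minimal usage example with made-up data
df = pd.DataFrame({"participant": [1, 1, 1, 1, 2, 2, 2, 2],
                   "rt": [410, 430, 420, 1900, 510, 505, 498, 120]})
clean = trim_rts_by_mad(df)  # the 1900 ms and 120 ms trials are removed
```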

  8. Dual-task interference effects on cross-modal numerical order and sound intensity judgments: the more the louder?

    PubMed

    Alards-Tomalin, Doug; Walker, Alexander C; Nepon, Hillary; Leboe-McGowan, Launa C

    2017-09-01

    In the current study, cross-task interactions between number order and sound intensity judgments were assessed using a dual-task paradigm. Participants first categorized numerical sequences composed of Arabic digits as either ordered (ascending, descending) or non-ordered. Following each number sequence, participants then had to judge the intensity level of a target sound. Experiment 1 emphasized processing the two tasks independently (serial processing), while Experiments 2 and 3 emphasized processing the two tasks simultaneously (parallel processing). Cross-task interference occurred only when the task required parallel processing and was specific to ascending numerical sequences, which led to a higher proportion of louder sound intensity judgments. In Experiment 4 we examined whether this unidirectional interaction was the result of participants misattributing enhanced processing fluency experienced on ascending sequences as indicating a louder target sound. The unidirectional finding could not be entirely attributed to misattributed processing fluency, and may also be connected to experientially derived conceptual associations between ascending number sequences and greater magnitude, consistent with conceptual mapping theory.

  9. Cross-modal metaphorical mapping of spoken emotion words onto vertical space.

    PubMed

    Montoro, Pedro R; Contreras, María José; Elosúa, María Rosa; Marmolejo-Ramos, Fernando

    2015-01-01

    From the field of embodied cognition, previous studies have reported evidence of metaphorical mapping of emotion concepts onto a vertical spatial axis. Most of the work on this topic has used visual words as the typical experimental stimuli. However, to our knowledge, no previous study has examined the association between affect and vertical space using a cross-modal procedure. The current research is a first step toward the study of the metaphorical mapping of emotions onto vertical space by means of an auditory to visual cross-modal paradigm. In the present study, we examined whether auditory words with an emotional valence can interact with the vertical visual space according to a 'positive-up/negative-down' embodied metaphor. The general method consisted in the presentation of a spoken word denoting a positive/negative emotion prior to the spatial localization of a visual target in an upper or lower position. In Experiment 1, the spoken words were passively heard by the participants and no reliable interaction between emotion concepts and bodily simulated space was found. In contrast, Experiment 2 required more active listening of the auditory stimuli. A metaphorical mapping of affect and space was evident but limited to the participants engaged in an emotion-focused task. Our results suggest that the association of affective valence and vertical space is not activated automatically during speech processing since an explicit semantic and/or emotional evaluation of the emotionally valenced stimuli was necessary to obtain an embodied effect. The results are discussed within the framework of the embodiment hypothesis.

  11. Perceiving similarity and comprehending metaphor.

    PubMed

    Marks, L E; Hammeal, R J; Bornstein, M H

    1987-01-01

    We conducted a series of 3 experiments to assess the comprehension of 4 types of cross-modal (synesthetic) similarities in nearly 500 3.5-13.5-year-old children and more than 100 adults. We tested both perceptual and verbal (metaphoric) modes. Children of all ages and adults matched pitch to brightness and loudness to brightness, thereby showing that even very young children recognize perceptual similarities between hearing and vision. Children did not consistently recognize similarity between pitch and size until about age 11. This difference in developmental timetables is compatible with the view that pitch-brightness and loudness-brightness similarities are intrinsic characteristics of perception (characteristics based, perhaps, on common sensory codes), whereas pitch-size similarity may be learned (perhaps through association of size with resonance properties). In a parallel verbal task, even 4-year-old children showed at least some capacity to translate meanings metaphorically from one modality to another (e.g., rating "low pitched" as dim and "high pitched" as bright). But not all literal meanings produced metaphoric equivalents in the youngest children (e.g., rating "sunlight" brighter but not louder than "moonlight"). Improvements with age in making metaphoric translations of synesthetic expressions paralleled increasing differentiation of meanings along literal dimensions and increasing capacity to integrate meanings of components in compound expressions. We postulate that perceptual knowledge about objects and events is represented in terms of locations in a multidimensional space; cross-modal similarities imply that the space is also multimodal. Verbal processes later gain access to this graded perceptual knowledge, thus permitting the interpretation of synesthetic metaphors according to the rules of cross-modal perception.

  12. Conflict Adaptation Depends on Task Structure

    ERIC Educational Resources Information Center

    Akcay, Caglar; Hazeltine, Eliot

    2008-01-01

    The dependence of the Simon effect on the correspondence of the previous trial can be explained by the conflict-monitoring theory, which holds that a control system adjusts automatic activation from irrelevant stimulus information (conflict adaptation) on the basis of the congruency of the previous trial. The authors report on 4 experiments…

  13. Auditory Sensory Substitution is Intuitive and Automatic with Texture Stimuli

    PubMed Central

    Stiles, Noelle R. B.; Shimojo, Shinsuke

    2015-01-01

    Millions of people are blind worldwide. Sensory substitution (SS) devices (e.g., vOICe) can assist the blind by encoding a video stream into a sound pattern, recruiting visual brain areas for auditory analysis via crossmodal interactions and plasticity. SS devices often require extensive training to attain limited functionality. In contrast to conventional attention-intensive SS training that starts with visual primitives (e.g., geometrical shapes), we argue that sensory substitution can be engaged efficiently by using stimuli (such as textures) associated with intrinsic crossmodal mappings. Crossmodal mappings link images with sounds and tactile patterns. We show that intuitive SS sounds can be matched to the correct images by naive sighted participants just as well as by intensively-trained participants. This result indicates that existing crossmodal interactions and amodal sensory cortical processing may be as important in the interpretation of patterns by SS as crossmodal plasticity (e.g., the strengthening of existing connections or the formation of new ones), especially at the earlier stages of SS usage. An SS training procedure based on crossmodal mappings could both considerably improve participant performance and shorten training times, thereby enabling SS devices to significantly expand blind capabilities. PMID:26490260

  14. Opposite brain laterality in analogous auditory and visual tests.

    PubMed

    Oltedal, Leif; Hugdahl, Kenneth

    2017-11-01

    Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities for the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed the findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.

  15. The effect of Wi-Fi electromagnetic waves in unimodal and multimodal object recognition tasks in male rats.

    PubMed

    Hassanshahi, Amin; Shafeie, Seyed Ali; Fatemi, Iman; Hassanshahi, Elham; Allahtavakoli, Mohammad; Shabani, Mohammad; Roohbakhsh, Ali; Shamsizadeh, Ali

    2017-06-01

    Wireless internet (Wi-Fi) electromagnetic waves (2.45 GHz) are in widespread use almost everywhere, especially in our homes. Considering the recent reports about some hazardous effects of Wi-Fi signals on the nervous system, this study aimed to investigate the effect of 2.4 GHz Wi-Fi radiation on multisensory integration in rats. This experimental study was done on 80 male Wistar rats that were allocated into exposure and sham groups. Wi-Fi exposure to 2.4 GHz microwaves [in Service Set Identifier mode (23.6 dBm and 3% for power and duty cycle, respectively)] was done for 30 days (12 h/day). The cross-modal visual-tactile object recognition (CMOR) task was performed using four variations of the spontaneous object recognition (SOR) test: standard SOR, tactile SOR, visual SOR, and CMOR tests. A discrimination ratio was calculated to assess the animal's preference for the novel object. The expression levels of M1 and GAT1 mRNA in the hippocampus were assessed by quantitative real-time RT-PCR. Results demonstrated that rats in the Wi-Fi exposure groups could not discriminate significantly between the novel and familiar objects in any of the standard SOR, tactile SOR, visual SOR, and CMOR tests. The expression of M1 receptors increased following Wi-Fi exposure. In conclusion, the results of this study showed that chronic exposure to Wi-Fi electromagnetic waves might impair both unimodal and cross-modal encoding of information.
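    Illustrative note: the discrimination ratio mentioned above is a standard index in object recognition tasks. One common definition, shown in the sketch below, contrasts exploration time for the novel versus the familiar object; whether this exact formula was used in this particular study is an assumption.

```python
def discrimination_ratio(novel_time, familiar_time):
    """Common discrimination ratio for an object recognition trial.

    (novel - familiar) / (novel + familiar): positive values indicate a
    preference for the novel object, values near zero indicate no
    discrimination between the two objects.
    """
    total = novel_time + familiar_time
    if total == 0:
        raise ValueError("no exploration time recorded for either object")
    return (novel_time - familiar_time) / total

# Example: 20 s exploring the novel object, 10 s exploring the familiar one
dr = discrimination_ratio(20.0, 10.0)   # ≈ 0.33, a preference for the novel object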

  16. A Cross-Modal Perspective on the Relationships between Imagery and Working Memory

    PubMed Central

    Likova, Lora T.

    2013-01-01

    Mapping the distinctions and interrelationships between imagery and working memory (WM) remains challenging. Although each of these major cognitive constructs is defined and treated in various ways across studies, most accept that both imagery and WM involve a form of internal representation available to our awareness. In WM, there is a further emphasis on goal-oriented, active maintenance, and use of this conscious representation to guide voluntary action. Multicomponent WM models incorporate representational buffers, such as the visuo-spatial sketchpad, plus central executive functions. If there is a visuo-spatial “sketchpad” for WM, does imagery involve the same representational buffer? Alternatively, does WM employ an imagery-specific representational mechanism to occupy our awareness? Or do both constructs utilize a more generic “projection screen” of an amodal nature? To address these issues, in a cross-modal fMRI study, I introduce a novel Drawing-Based Memory Paradigm, and conceptualize drawing as a complex behavior that is readily adaptable from the visual to non-visual modalities (such as the tactile modality), which opens intriguing possibilities for investigating cross-modal learning and plasticity. Blindfolded participants were trained through our Cognitive-Kinesthetic Method (Likova, 2010a, 2012) to draw complex objects guided purely by the memory of felt tactile images. If this WM task had been mediated by transfer of the felt spatial configuration to the visual imagery mechanism, the response-profile in visual cortex would be predicted to have the “top-down” signature of propagation of the imagery signal downward through the visual hierarchy. Remarkably, the pattern of cross-modal occipital activation generated by the non-visual memory drawing was essentially the inverse of this typical imagery signature. The sole visual hierarchy activation was isolated to the primary visual area (V1), and accompanied by deactivation of the entire extrastriate cortex, thus 'cutting off' any signal propagation from/to V1 through the visual hierarchy. The implications of these findings for the debate on the interrelationships between the core cognitive constructs of WM and imagery and the nature of internal representations are evaluated. PMID:23346061

  17. Application of the instructional congruence framework: Developing supplemental materials for English language learners

    NASA Astrophysics Data System (ADS)

    Drews, Tina Skjerping

    2009-12-01

    This dissertation is a study of the instructional congruence framework as it was used to develop and pilot a supplemental science unit on energy and the environment for sixth grade students in Arizona. With the growing linguistic and cultural diversity of children in American schools, congruent materials are more important now than ever before. The supplemental materials were designed by the researcher and underwent review by a six-person panel of three educators and three engineers. The revised materials were then piloted in two sixth grade classrooms in the Southwest with high numbers of English language learners. Classroom observation, teacher interviews, and the classroom observation protocol were utilized to understand fidelity to the instructional congruence framework. The fidelity of implementation of the materials was subject to the realities of varied educational contexts. Piloting materials in urban contexts with diverse students involved additional challenges. The results of the study explore the challenges in creating instructionally congruent materials for diverse students in urban contexts. Recommendations are provided for curriculum developers who undertake the task of creating instructionally congruent materials; these recommendations emphasize the need to devise innovative methods of creation while understanding that there is no perfect solution. The education community as a whole could benefit from incorporating and synthesizing the instructional congruence framework in order to provide maximum opportunities in science for all students.

  18. Gender Congruency From a Neutral Point of View: The Roles of Gender Classes and Conceptual Connotations.

    PubMed

    Bender, Andrea; Beller, Sieghard; Klauer, Karl Christoph

    2018-02-01

    The question of whether language affects thought is long-standing, with grammatical gender being one of the most contended instances. Empirical evidence focuses on the gender congruency effect, according to which referents of masculine nouns are conceptualized more strongly as male and those of feminine nouns more strongly as female. While some recent studies suggest that this effect is driven by conceptual connotations rather than grammatical properties, research remains theoretically inconclusive because of the confounding of grammatical gender and conceptual connotations in gendered (masculine or feminine) nouns. Taking advantage of the fact that German also includes a neuter gender, the current study attempted to disentangle the relative contributions of grammatical properties and connotations to the emergence of the gender congruency effect. In three pairs of experiments, neuter and gendered nouns were compared in an Extrinsic Affective Simon Task based on gender associations, controlled for a possible role of gender-indicating articles. A congruency effect emerged equally strongly for neuter and gendered nouns, but disappeared when including connotations as covariate, thereby effectively excluding grammatical gender as the (only) driving force for this effect. Based on a critical discussion of these findings, we propose a possible mechanism for the emergence of the effect that also has the potential to accommodate conflicting patterns of findings from previous research. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  19. Is Accessing of Words Affected by Affective Valence Only? A Discrete Emotion View on the Emotional Congruency Effect

    PubMed Central

    Chen, Xuqian; Liu, Bo; Lin, Shouwen

    2016-01-01

    This paper advances the discussion on which emotion information affects word accessing. Emotion information, which is formed as a result of repeated experiences, is primary and necessary in learning and representing word meanings. Previous findings suggested that valence (i.e., positive or negative) denoted by words can be automatically activated and plays a role in many significant cognitive processes. However, there has been a lack of discussion about whether discrete emotion information (i.e., happiness, anger, sadness, and fear) is also involved in these processes. According to the hierarchy model, emotions are considered organized within an abstract-to-concrete hierarchy, in which emotion prototypes are organized following affective valence. By controlling different congruencies of emotion relations (i.e., matches or mismatches between valences and prototypes of emotion), the present study showed both an evaluative congruency effect (Experiment 1) and a discrete emotional congruency effect (Experiment 2). These findings indicate that not only affective valences but also discrete emotions can be activated under the present priming lexical decision task. However, the present findings also suggest that discrete emotions might be activated at the later priming stage as compared to valences. The present work provides evidence that information about discrete emotion could be involved in word processing. This might be a result of subjects’ embodied experiences. PMID:27379000

  1. The semantic origin of unconscious priming: Behavioral and event-related potential evidence during category congruency priming from strongly and weakly related masked words.

    PubMed

    Ortells, Juan J; Kiefer, Markus; Castillo, Alejandro; Megías, Montserrat; Morillas, Alejandro

    2016-01-01

    It remains a matter of debate whether masked congruency priming is driven by semantic mechanisms, such as semantic activation, or by non-semantic mechanisms, such as response activation. In order to decide between these alternatives, reaction times (RTs) and event-related potentials (ERPs) were recorded in the present study while participants performed a semantic categorization task on visible word targets. The targets were preceded by briefly presented (33 ms), novel (unpracticed) masked prime words appearing either 167 ms (Experiment 1) or 34 ms (Experiment 2) before the target. The primes and targets belonged to different categories (unrelated), or they were either strongly or weakly semantically related category co-exemplars. Behavioral (RT) and electrophysiological masked congruency priming effects were significantly greater for strongly related pairs than for weakly related pairs, indicating a semantic origin of the effects. Priming in the latter condition was not statistically reliable. Furthermore, priming effects modulated the N400 event-related potential (ERP) component, an electrophysiological index of semantic processing, but not ERPs in the time range of the N200 component, associated with response conflict and visuo-motor response priming. The present results demonstrate that masked congruency priming from novel prime words also depends on semantic processing of the primes and is not exclusively driven by non-semantic mechanisms such as response activation. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Conflict adaptation and congruency sequence effects to social-emotional stimuli in individuals with autism spectrum disorders.

    PubMed

    Worsham, Whitney; Gray, Whitney E; Larson, Michael J; South, Mikle

    2015-11-01

    The modification of performance following conflict can be measured using conflict adaptation tasks thought to measure the change in the allocation of cognitive resources in order to reduce conflict interference and improve performance. While previous studies have suggested atypical processing during nonsocial cognitive control tasks, conflict adaptation (i.e. congruency sequence effects) for social-emotional stimuli have not been previously studied in autism spectrum disorder. A total of 32 participants diagnosed with autism spectrum disorder and 27 typically developing matched controls completed an emotional Stroop conflict task that required the classification of facial affect while simultaneously ignoring an overlaid affective word. Both groups showed behavioral evidence for emotional conflict adaptation based on response times and accuracy rates. However, the autism spectrum disorder group demonstrated a speed-accuracy trade-off manifested through significantly faster response times and decreased accuracy rates on trials containing conflict between the emotional face and the overlaid emotional word. Reduced selective attention toward socially relevant information may bias individuals with autism spectrum disorder toward more rapid processing and decision making even when conflict is present. Nonetheless, the loss of important information from the social stimuli reduces decision-making accuracy, negatively affecting the ability to adapt both cognitively and emotionally when conflict arises. © The Author(s) 2014.
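    Illustrative note: conflict adaptation is typically quantified as a congruency sequence effect, i.e., a smaller congruency effect after incongruent than after congruent trials. The sketch below shows one generic way to compute this from trial-level response times; the data frame layout and column names are assumptions for illustration, not the analysis pipeline of the study.

```python
import pandas as pd

def congruency_sequence_effect(trials, rt_col="rt", cong_col="congruent"):
    """Congruency sequence effect (conflict adaptation) from trial-level RTs.

    `trials` has one row per trial in presentation order, with a boolean
    congruency column and an RT column (assumed layout). The effect is the
    congruency effect after congruent trials minus the congruency effect
    after incongruent trials; positive values indicate conflict adaptation.
    """
    prev = trials[cong_col].shift(1)   # previous trial's congruency
    cur = trials[cong_col]
    rt = trials[rt_col]

    def mean_rt(prev_congruent, cur_congruent):
        mask = (prev == prev_congruent) & (cur == cur_congruent)
        return rt[mask].mean()

    effect_after_congruent = mean_rt(True, False) - mean_rt(True, True)      # cI - cC
    effect_after_incongruent = mean_rt(False, False) - mean_rt(False, True)  # iI - iC
    return effect_after_congruent - effect_after_incongruent
```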

  3. Cortical GABAergic Interneurons in Cross-Modal Plasticity following Early Blindness

    PubMed Central

    Desgent, Sébastien; Ptito, Maurice

    2012-01-01

    Early loss of a given sensory input in mammals causes anatomical and functional modifications in the brain via a process called cross-modal plasticity. In the past four decades, several animal models have illuminated our understanding of the biological substrates involved in cross-modal plasticity. Studies are now progressively starting to emphasise cell-specific mechanisms that may be responsible for this intermodal sensory plasticity. Inhibitory interneurons expressing γ-aminobutyric acid (GABA) play an important role in maintaining the appropriate dynamic range of cortical excitation, in critical periods of developmental plasticity, in receptive field refinement, and in the processing of sensory information reaching the cerebral cortex. The diverse interneuron population is very sensitive to sensory experience during development. GABAergic neurons are therefore well suited to act as a gate for mediating cross-modal plasticity. This paper attempts to highlight the links between early sensory deprivation, cortical GABAergic interneuron alterations, and cross-modal plasticity, to discuss their implications, and to provide insights for future research in the field. PMID:22720175

  4. Cross-modal versus within-modal recall: differences in behavioral and brain responses.

    PubMed

    Butler, Andrew J; James, Karin H

    2011-10-31

    Although human experience is multisensory in nature, previous research has focused predominantly on memory for unisensory as opposed to multisensory information. In this work, we sought to investigate behavioral and neural differences between the cued recall of cross-modal audiovisual associations versus within-modal visual or auditory associations. Participants were presented with cue-target associations comprised of pairs of nonsense objects, pairs of nonsense sounds, objects paired with sounds, and sounds paired with objects. Subsequently, they were required to recall the modality of the target given the cue while behavioral accuracy, reaction time, and blood oxygenation level dependent (BOLD) activation were measured. Successful within-modal recall was associated with modality-specific reactivation in primary perceptual regions, and was more accurate than cross-modal retrieval. When auditory targets were correctly or incorrectly recalled using a cross-modal visual cue, there was re-activation in auditory association cortex, and recall of information from cross-modal associations activated the hippocampus to a greater degree than within-modal associations. Findings support theories that propose an overlap between regions active during perception and memory, and show that behavioral and neural differences exist between within- and cross-modal associations. Overall the current study highlights the importance of the role of multisensory information in memory. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Cross-cultural differences in crossmodal correspondences between basic tastes and visual features

    PubMed Central

    Wan, Xiaoang; Woods, Andy T.; van den Bosch, Jasper J. F.; McKenzie, Kirsten J.; Velasco, Carlos; Spence, Charles

    2014-01-01

    We report a cross-cultural study designed to investigate crossmodal correspondences between a variety of visual features (11 colors, 15 shapes, and 2 textures) and the five basic taste terms (bitter, salty, sour, sweet, and umami). A total of 452 participants from China, India, Malaysia, and the USA viewed color patches, shapes, and textures online and had to choose the taste term that best matched the image and then rate their confidence in their choice. Across the four groups of participants, the results revealed a number of crossmodal correspondences between certain colors/shapes and bitter, sour, and sweet tastes. Crossmodal correspondences were also documented between the color white and smooth/rough textures on the one hand and the salt taste on the other. Cross-cultural differences were observed in the correspondences between certain colors, shapes, and one of the textures and the taste terms. The taste-patterns shown by the participants from the four countries tested in the present study are quite different from one another, and these differences cannot easily be attributed merely to whether a country is Eastern or Western. These findings therefore highlight the impact of cultural background on crossmodal correspondences. As such, they raise a number of interesting questions regarding the neural mechanisms underlying crossmodal correspondences. PMID:25538643

  6. The taste-visual cross-modal Stroop effect: An event-related brain potential study.

    PubMed

    Xiao, X; Dupuis-Roy, N; Yang, X L; Qiu, J F; Zhang, Q L

    2014-03-28

    Event-related potentials (ERPs) were recorded to explore, for the first time, the electrophysiological correlates of the taste-visual cross-modal Stroop effect. Eighteen healthy participants were presented with a taste stimulus and a food image, and asked to categorize the image as "sweet" or "sour" by pressing the relevant button as quickly as possible. Accurate categorization of the image was faster when it was presented with a congruent taste stimulus (e.g., sour taste/image of lemon) than with an incongruent one (e.g., sour taste/image of ice cream). ERP analyses revealed a negative difference component (ND430-620) between 430 and 620 ms in the taste-visual cross-modal Stroop interference. Dipole source analysis of the difference wave (incongruent minus congruent) indicated that two generators, localized in the prefrontal cortex and the parahippocampal gyrus, contributed to this taste-visual cross-modal Stroop effect. This result suggests that the prefrontal cortex is associated with conflict control in the taste-visual cross-modal Stroop effect. Also, we speculate that the parahippocampal gyrus is associated with the processing of discordant information in the taste-visual cross-modal Stroop effect. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
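    Illustrative note: the ND430-620 component above is obtained from an incongruent-minus-congruent difference wave. The sketch below shows, in generic form, how such a difference wave and its mean amplitude in a latency window could be computed from epoched data; the array shapes and window are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np

def difference_wave(incongruent_epochs, congruent_epochs, times, window=(0.430, 0.620)):
    """Incongruent-minus-congruent ERP difference wave.

    Both epoch arrays are assumed to have shape (n_trials, n_channels, n_times)
    and to share the same time vector `times` (in seconds). Returns the
    difference wave (n_channels, n_times) and its per-channel mean amplitude
    within the requested latency window.
    """
    erp_incongruent = incongruent_epochs.mean(axis=0)   # average over trials
    erp_congruent = congruent_epochs.mean(axis=0)
    diff = erp_incongruent - erp_congruent

    in_window = (times >= window[0]) & (times <= window[1])
    mean_amplitude = diff[:, in_window].mean(axis=1)
    return diff, mean_amplitude

# Minimal usage example with random data (60 trials, 64 channels, 700 samples)
rng = np.random.default_rng(0)
times = np.linspace(-0.2, 1.2, 700)
incong = rng.normal(size=(60, 64, 700))
cong = rng.normal(size=(60, 64, 700))
diff, nd_amplitude = difference_wave(incong, cong, times)
```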

  8. Experimental and clinical usefulness of crossmodal paradigms in psychiatry: an illustration from emotional processing in alcohol-dependence

    PubMed Central

    Maurage, Pierre; Campanella, Salvatore

    2013-01-01

    Crossmodal processing (i.e., the construction of a unified representation stemming from distinct sensorial modalities inputs) constitutes a crucial ability in humans' everyday life. It has been extensively explored at cognitive and cerebral levels during the last decade among healthy controls. Paradoxically however, and while difficulties to perform this integrative process have been suggested in a large range of psychopathological states (e.g., schizophrenia and autism), these crossmodal paradigms have been very rarely used in the exploration of psychiatric populations. The main aim of the present paper is thus to underline the experimental and clinical usefulness of exploring crossmodal processes in psychiatry. We will illustrate this proposal by means of the recent data obtained in the crossmodal exploration of emotional alterations in alcohol-dependence. Indeed, emotional decoding impairments might have a role in the development and maintenance of alcohol-dependence, and have been extensively investigated by means of experiments using separated visual or auditory stimulations. Besides these unimodal explorations, we have recently conducted several studies using audio-visual crossmodal paradigms, which has allowed us to improve the ecological validity of the unimodal experimental designs and to offer new insights on the emotional alterations among alcohol-dependent individuals. We will show how these preliminary results can be extended to develop a coherent and ambitious research program using crossmodal designs in various psychiatric populations and sensory modalities. We will finally end the paper by underlining the various potential clinical applications and the fundamental implications that can be raised by this emerging project. PMID:23898250

  9. Higher Language Ability is Related to Angular Gyrus Activation Increase During Semantic Processing, Independent of Sentence Incongruency.

    PubMed

    Van Ettinger-Veenstra, Helene; McAllister, Anita; Lundberg, Peter; Karlsson, Thomas; Engström, Maria

    2016-01-01

    This study investigates the relation between individual language ability and neural semantic processing. Our aim was to explore whether high-level language ability would correlate with decreased activation in language-specific regions or rather with increased activation in supporting language regions during sentence processing. Moreover, we were interested in whether the observed neural activation patterns are modulated by semantic incongruency, similarly to previously observed changes under syntactic congruency modulation. We investigated 27 healthy adults with a sentence reading task that tapped language comprehension and inference and modulated sentence congruency, employing functional magnetic resonance imaging (fMRI). We assessed the relation between neural activation, congruency modulation, and test performance on a high-level language ability assessment with multiple regression analysis. Our results showed that high language ability was related to increased activation in the left angular gyrus, extending into the temporal lobe. This effect was independent of semantic congruency, and no significant relation between language ability and incongruency modulation was observed. Furthermore, there was a significant bilateral increase of activation in the inferior frontal gyrus (IFG) when the sentences were incongruent, indicating that processing incongruent sentences was more demanding than processing congruent sentences and required increased activation in language regions. The correlation of high-level language ability with increased rather than decreased activation in the left angular gyrus, a region specific for language processing, is opposed to what the neural efficiency hypothesis would predict. We conclude that there is no evidence for an interaction between semantic-congruency-related brain activation and high-level language performance, even though the semantically incongruent condition is more demanding and evokes more neural activation.

  10. Breast cancer treatment decision making among Latinas and non-Latina Whites: a communication model predicting decisional outcomes and quality of life.

    PubMed

    Yanez, Betina; Stanton, Annette L; Maly, Rose C

    2012-09-01

    Deciding among medical treatment options is a pivotal event following cancer diagnosis, a task that can be particularly daunting for individuals uncomfortable with communication in a medical context. Few studies have explored the surgical decision-making process and associated outcomes among Latinas. We propose a model to elucidate pathways through which acculturation (indicated by language use) and reports of communication effectiveness specific to medical decision making contribute to decisional outcomes (i.e., congruency between preferred and actual involvement in decision making, treatment satisfaction) and quality of life among Latinas and non-Latina White women with breast cancer. Latinas (N = 326) and non-Latina Whites (N = 168) completed measures six months after breast cancer diagnosis, and quality of life was assessed 18 months after diagnosis. Structural equation modeling was used to examine relationships between language use, communication effectiveness, and outcomes. Among Latinas, 63% reported congruency in decision making, whereas 76% of non-Latina Whites reported congruency. In Latinas, greater use of English was related to better reported communication effectiveness. Effectiveness in communication was not related to congruency in decision making, but several indicators of effectiveness in communication were related to greater treatment satisfaction, as was greater congruency in decision making. Greater treatment satisfaction predicted more favorable quality of life. The final model fit the data well only for Latinas. Differences in quality of life and effectiveness in communication were observed between racial/ethnic groups. Findings underscore the importance of developing targeted interventions for physicians and Latinas with breast cancer to enhance communication in decision making. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  11. The effect of synesthetic associations between the visual and auditory modalities on the Colavita effect.

    PubMed

    Stekelenburg, Jeroen J; Keetels, Mirjam

    2016-05-01

    The Colavita effect refers to the phenomenon that when confronted with an audiovisual stimulus, observers report more often to have perceived the visual than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal signals. Here, we examined whether the Colavita effect is modulated by synesthetic congruency between visual size and auditory pitch. If the Colavita effect depends on synesthetic congruency, we expect a larger Colavita effect for synesthetically congruent size/pitch (large visual stimulus/low-pitched tone; small visual stimulus/high-pitched tone) than synesthetically incongruent (large visual stimulus/high-pitched tone; small visual stimulus/low-pitched tone) combinations. Participants had to identify stimulus type (visual, auditory or audiovisual). The study replicated the Colavita effect: participants reported the visual component of the audiovisual stimuli more often than the auditory component. Synesthetic congruency had, however, no effect on the magnitude of the Colavita effect. EEG recordings to congruent and incongruent audiovisual pairings showed a late frontal congruency effect at 400-550 ms and an occipitoparietal effect at 690-800 ms with neural sources in the anterior cingulate and premotor cortex for the 400- to 550-ms window and premotor cortex, inferior parietal lobule and the posterior middle temporal gyrus for the 690- to 800-ms window. The electrophysiological data show that synesthetic congruency was probably detected in a processing stage subsequent to the Colavita effect. We conclude that, in a modality detection task, the Colavita effect can be modulated by low-level structural factors but not by higher-order associations between auditory and visual inputs.

  12. Enemies and Friends in the Neighborhood: Orthographic Similarity Effects in Semantic Categorization

    ERIC Educational Resources Information Center

    Pecher, Diane; Zeelenberg, Rene; Wagenmakers, Eric-Jan

    2005-01-01

    Studies investigating orthographic similarity effects in semantic tasks have produced inconsistent results. The authors investigated orthographic similarity effects in animacy decision and in contrast with previous studies, they took semantic congruency into account. In Experiments 1 and 2, performance to a target (cat) was better if a previously…

  13. Semantic Facilitation in Category and Action Naming: Testing the Message-Congruency Account

    ERIC Educational Resources Information Center

    Kuipers, Jan-Rouke; La Heij, Wido

    2008-01-01

    Basic-level picture naming is hampered by the presence of a semantically related context word (compared to an unrelated word), whereas picture categorization is facilitated by a semantically related context word. This reversal of the semantic context effect has been explained by assuming that in categorization tasks, basic-level distractor words…

  14. Effects of Peripheral Eccentricity and Head Orientation on Gaze Discrimination.

    PubMed

    Palanica, Adam; Itier, Roxane J

    2014-01-01

    Visual search tasks support a special role for direct gaze in human cognition, while classic gaze judgment tasks suggest the congruency between head orientation and gaze direction plays a central role in gaze perception. Moreover, whether gaze direction can be accurately discriminated in the periphery using covert attention is unknown. In the present study, individual faces in frontal and in deviated head orientations with a direct or an averted gaze were flashed for 150 ms across the visual field; participants focused on a centred fixation while judging the gaze direction. Gaze discrimination speed and accuracy varied with head orientation and eccentricity. The limit of accurate gaze discrimination was less than ±6° eccentricity. Response times suggested a processing facilitation for direct gaze in the fovea, irrespective of head orientation; however, by ±3° eccentricity, head orientation started biasing gaze judgments, and this bias increased with eccentricity. Results also suggested a special processing of frontal heads with direct gaze in central vision, rather than a general congruency effect between eye and head cues. Thus, while both head and eye cues contribute to gaze discrimination, their role differs with eccentricity.

  15. The utility of visual analogs of central auditory tests in the differential diagnosis of (central) auditory processing disorder and attention deficit hyperactivity disorder.

    PubMed

    Bellis, Teri James; Billiet, Cassie; Ross, Jody

    2011-09-01

    Cacace and McFarland (2005) have suggested that the addition of cross-modal analogs will improve the diagnostic specificity of (C)APD (central auditory processing disorder) by ensuring that deficits observed are due to the auditory nature of the stimulus and not to supra-modal or other confounds. Others (e.g., Musiek et al., 2005) have expressed concern about the use of such analogs in diagnosing (C)APD given the uncertainty as to the degree to which cross-modal measures truly are analogous and emphasize the nonmodularity of the CANS (central auditory nervous system) and its function, which precludes modality specificity of (C)APD. To date, no studies have examined the clinical utility of cross-modal (e.g., visual) analogs of central auditory tests in the differential diagnosis of (C)APD. This study investigated performance of children diagnosed with (C)APD, children diagnosed with ADHD (attention deficit hyperactivity disorder), and typically developing children on three diagnostic tests of central auditory function and their corresponding visual analogs. The study sought to determine whether deficits observed in the (C)APD group were restricted to the auditory modality and the degree to which the addition of visual analogs aids in the ability to differentiate among groups. An experimental repeated measures design was employed. Participants consisted of three groups of right-handed children (normal control, n=10; ADHD, n=10; (C)APD, n=7) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of disorders unrelated to their primary diagnosis. Participants in Groups 2 and 3 met current diagnostic criteria for ADHD and (C)APD. Visual analogs of three tests in common clinical use for the diagnosis of (C)APD were used (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; and Duration Patterns [Pinheiro and Musiek, 1985]). Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. ANCOVAs (analyses of covariance) were used to examine effects of group, modality, and laterality (Dichotic/Dichoptic Digits) or response condition (auditory and visual patterning). In addition, planned univariate ANCOVAs were used to examine effects of group on intratest comparison measures (REA [right-ear advantage], HLD [Humming-Labeling Differential]). Children in both the ADHD and (C)APD groups performed more poorly overall than typically developing children on all tasks, with the (C)APD group exhibiting the poorest performance on the auditory and visual patterns tests but the ADHD and (C)APD groups performing similarly on the Dichotic/Dichoptic Digits task. However, each of the auditory and visual intratest comparison measures, when taken individually, was able to distinguish the (C)APD group from both the normal control and ADHD groups, whose performance did not differ from one another. Results underscore the importance of intratest comparison measures in the interpretation of central auditory tests (American Speech-Language-Hearing Association [ASHA], 2005; American Academy of Audiology [AAA], 2010). Results also support the "non-modular" view of (C)APD in which cross-modal deficits would be predicted based on shared neuroanatomical substrates. Finally, this study demonstrates that auditory tests alone are sufficient to distinguish (C)APD from supra-modal disorders, with cross-modal analogs adding little if anything to the differential diagnostic process. American Academy of Audiology.

  16. Cross-modal illusory conjunctions between vision and touch.

    PubMed

    Cinel, Caterina; Humphreys, Glyn W; Poli, Riccardo

    2002-10-01

    Cross-modal illusory conjunctions (ICs) happen when, under conditions of divided attention, felt textures are reported as being seen or vice versa. Experiments provided evidence for these errors, demonstrated that ICs are more frequent if tactile and visual stimuli are in the same hemispace, and showed that ICs still occur under forced-choice conditions but do not occur when attention to the felt texture is increased. Cross-modal ICs were also found in a patient with parietal damage even with relatively long presentations of visual stimuli. The data are consistent with there being cross-modal integration of sensory information, with the modality of origin sometimes being misattributed when attention is constrained. The empirical conclusions from the experiments are supported by formal models.

  17. A dual contribution to the involuntary semantic processing of unexpected spoken words.

    PubMed

    Parmentier, Fabrice B R; Turner, Jacqueline; Perez, Laura

    2014-02-01

    Sounds are a major cause of distraction. Unexpected to-be-ignored auditory stimuli presented in the context of an otherwise repetitive acoustic background ineluctably break through selective attention and distract people from an unrelated visual task (deviance distraction). This involuntary capture of attention by deviant sounds has been hypothesized to trigger their semantic appraisal and, in some circumstances, interfere with ongoing performance, but it remains unclear how such processing compares with the automatic processing of distractors in classic interference tasks (e.g., Stroop, flanker, Simon tasks). Using a cross-modal oddball task, we assessed the involuntary semantic processing of deviant sounds in the presence and absence of deviance distraction. The results revealed that some involuntary semantic analysis of spoken distractors occurs in the absence of deviance distraction but that this processing is significantly greater in its presence. We conclude that the automatic processing of spoken distractors reflects 2 contributions, one that is contingent upon deviance distraction and one that is independent from it.

  18. Salient sounds activate human visual cortex automatically.

    PubMed

    McDonald, John J; Störmer, Viola S; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A

    2013-05-22

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, this study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2-4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of colocalized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.

  19. Salient sounds activate human visual cortex automatically

    PubMed Central

    McDonald, John J.; Störmer, Viola S.; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A.

    2013-01-01

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, the present study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2, 3, and 4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of co-localized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task. PMID:23699530

  20. Adaptation to Emotional Conflict: Evidence from a Novel Face Emotion Paradigm

    PubMed Central

    Clayson, Peter E.; Larson, Michael J.

    2013-01-01

    The preponderance of research on trial-by-trial recruitment of affective control (e.g., conflict adaptation) relies on stimuli wherein lexical word information conflicts with facial affective stimulus properties (e.g., the face-Stroop paradigm where an emotional word is overlaid on a facial expression). Several studies, however, indicate different neural time course and properties for processing of affective lexical stimuli versus affective facial stimuli. The current investigation used a novel task to examine control processes implemented following conflicting emotional stimuli with conflict-inducing affective face stimuli in the absence of affective words. Forty-one individuals completed a task wherein the affective valence of the eyes and mouth was either congruent (happy eyes, happy mouth) or incongruent (happy eyes, angry mouth) while high-density event-related potentials (ERPs) were recorded. There was a significant congruency effect and significant conflict adaptation effects for error rates. Although response times (RTs) showed a significant congruency effect, the effect of previous-trial congruency on current-trial RTs was only present for current congruent trials. Temporospatial principal components analysis showed a P3-like ERP source localized using FieldTrip software to the medial cingulate gyrus that was smaller on incongruent than congruent trials and was significantly influenced by the recruitment of control processes following previous-trial emotional conflict (i.e., there was significant conflict adaptation in the ERPs). Results show that a face-only paradigm may be sufficient to elicit emotional conflict and suggest a system for rapidly detecting conflicting emotional stimuli and subsequently adjusting control resources, similar to cognitive conflict detection processes, when using conflicting facial expressions without words. PMID:24073278

  1. Age-Related Effects of Stimulus Type and Congruency on Inattentional Blindness.

    PubMed

    Liu, Han-Hui

    2018-01-01

    Background: Most of the previous inattentional blindness (IB) studies focused on the factors that contributed to the detection of unattended stimuli. Age-related changes in IB have rarely been investigated across all age groups. In the current study, by using the dual-task IB paradigm, we aimed to explore the age-related effects of attended stimulus type and congruency between attended and unattended stimuli on IB. Methods: The current study recruited 111 participants (30 adolescents, 48 young adults, and 33 middle-aged adults) in the baseline recognition experiments and 341 participants (135 adolescents, 135 young adults, and 71 middle-aged adults) in the IB experiment. We applied the superimposed picture and word streams experimental paradigm to explore the age-related effects of attended stimulus type and congruency between attended and unattended stimuli on IB. An ANOVA was performed to analyze the results. Results: Participants across all age groups presented significantly lower recognition scores for both pictures and words in comparison with baseline recognition. Recognition of unattended pictures or words decreased from adolescents to young adults and middle-aged adults. When the pictures and words were congruent, all participants showed significantly higher recognition scores for unattended stimuli than in the incongruent condition. Adolescents and young adults did not show recognition differences when primary tasks were attending pictures or words. Conclusion: The current findings showed that all participants presented better recognition scores for attended stimuli in comparison with unattended stimuli, and recognition scores decreased from adolescents to young and middle-aged adults. The findings partly supported the attention capacity models of IB.

  2. Adaptation to emotional conflict: evidence from a novel face emotion paradigm.

    PubMed

    Clayson, Peter E; Larson, Michael J

    2013-01-01

    The preponderance of research on trial-by-trial recruitment of affective control (e.g., conflict adaptation) relies on stimuli wherein lexical word information conflicts with facial affective stimulus properties (e.g., the face-Stroop paradigm where an emotional word is overlaid on a facial expression). Several studies, however, indicate different neural time course and properties for processing of affective lexical stimuli versus affective facial stimuli. The current investigation used a novel task to examine control processes implemented following conflicting emotional stimuli with conflict-inducing affective face stimuli in the absence of affective words. Forty-one individuals completed a task wherein the affective valence of the eyes and mouth was either congruent (happy eyes, happy mouth) or incongruent (happy eyes, angry mouth) while high-density event-related potentials (ERPs) were recorded. There was a significant congruency effect and significant conflict adaptation effects for error rates. Although response times (RTs) showed a significant congruency effect, the effect of previous-trial congruency on current-trial RTs was only present for current congruent trials. Temporospatial principal components analysis showed a P3-like ERP source localized using FieldTrip software to the medial cingulate gyrus that was smaller on incongruent than congruent trials and was significantly influenced by the recruitment of control processes following previous-trial emotional conflict (i.e., there was significant conflict adaptation in the ERPs). Results show that a face-only paradigm may be sufficient to elicit emotional conflict and suggest a system for rapidly detecting conflicting emotional stimuli and subsequently adjusting control resources, similar to cognitive conflict detection processes, when using conflicting facial expressions without words.

  3. Alertness Modulates Conflict Adaptation and Feature Integration in an Opposite Way

    PubMed Central

    Chen, Jia; Huang, Xiting; Chen, Antao

    2013-01-01

    Previous studies show that the congruency sequence effect can result from both the conflict adaptation effect (CAE) and the feature integration effect, which can be observed as the repetition priming effect (RPE) and feature overlap effect (FOE) depending on different experimental conditions. Evidence from neuroimaging studies suggests that a close correlation exists between the neural mechanisms of alertness-related modulations and the congruency sequence effect. However, little is known about whether and how alertness mediates the congruency sequence effect. In Experiment 1, the Attentional Networks Test (ANT) and a modified flanker task were used to evaluate whether the alertness of the attentional functions had a correlation with the CAE and RPE. In Experiment 2, the ANT and another modified flanker task were used to investigate whether alertness of the attentional functions correlates with the CAE and FOE. In Experiment 1, through the correlative analysis, we found a significant positive correlation between alertness and the CAE, and a negative correlation between alertness and the RPE. Moreover, a significant negative correlation existed between CAE and RPE. In Experiment 2, we found a marginally significant negative correlation between the CAE and the RPE, but the correlations between alertness and FOE and between CAE and FOE were not significant. These results suggest that alertness can modulate conflict adaptation and feature integration in an opposite way. Participants in the high alerting group may tend to use a top-down cognitive processing strategy, whereas participants in the low alerting group tend to use a bottom-up processing strategy. PMID:24250824

  4. The Neural Basis of Taste-visual Modal Conflict Control in Appetitive and Aversive Gustatory Context.

    PubMed

    Xiao, Xiao; Dupuis-Roy, Nicolas; Jiang, Jun; Du, Xue; Zhang, Mingmin; Zhang, Qinglin

    2018-02-21

    The functional magnetic resonance imaging (fMRI) technique was used to investigate brain activations related to conflict control in a taste-visual cross-modal pairing task. On each trial, participants had to decide whether the taste of a gustatory stimulus matched or did not match the expected taste of the food item depicted in an image. There were four conditions: Negative match (NM; sour gustatory stimulus and image of sour food), negative mismatch (NMM; sour gustatory stimulus and image of sweet food), positive match (PM; sweet gustatory stimulus and image of sweet food), positive mismatch (PMM; sweet gustatory stimulus and image of sour food). Blood oxygenation level-dependent (BOLD) contrasts between the NMM and the NM conditions revealed an increased activity in the middle frontal gyrus (MFG) (BA 6), the lingual gyrus (LG) (BA 18), and the postcentral gyrus. Furthermore, the NMM minus NM BOLD differences observed in the MFG were correlated with the NMM minus NM differences in response time. These activations were specifically associated with conflict control during the aversive gustatory stimulation. BOLD contrasts between the PMM and the PM condition revealed no significant positive activation, which supported the hypothesis that the human brain is especially sensitive to aversive stimuli. Altogether, these results suggest that the MFG is associated with the taste-visual cross-modal conflict control. A possible role of the LG as an information conflict detector at an early perceptual stage is further discussed, along with a possible involvement of the postcentral gyrus in the processing of the taste-visual cross-modal sensory contrast. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. What colour does that feel? Tactile-visual mapping and the development of cross-modality.

    PubMed

    Ludwig, Vera U; Simner, Julia

    2013-04-01

    Humans share implicit preferences for cross-modal mappings (e.g., low pitch sounds are preferentially paired with darker colours). Individuals with synaesthesia experience cross-modal mappings to a conscious degree (e.g., they may see colours when they hear sounds). The neonatal synaesthesia hypothesis claims that all humans may be born with this explicit cross-modal perception, which dies out in most people through childhood, leaving only implicit associations in the average adult. Although there is evidence for decreasing cross-modality throughout early infancy, it is unclear whether this decline continues to take place throughout childhood and adolescence. This large-scale study had two goals. First, we aimed to establish whether human non-synaesthetes systematically map tactile and visual dimensions, a combination that has rarely been studied. Second, we asked whether tactile-visual associations may be more pronounced in younger compared to older participants. In total, 210 participants between the ages of 5 and 74 years assigned colours to tactile stimuli. Smoothness, softness and roundness of stimuli positively correlated with luminance of the chosen colour, and smoothness and softness also positively correlated with chroma. Moreover, tactile sensations were associated with specific colours (e.g., softness with pink). There were no age differences for luminance effects. Chroma effects, however, were found exclusively in children and adolescents. Our findings are consistent with the neonatal synaesthesia hypothesis, which suggests that all humans are born with strong cross-modal perception which is pruned away or inhibited throughout development. Moreover, the findings suggest that a decline of some forms of cross-modality may take place over a much longer time span than previously assumed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Neuronal Correlates of Cross-Modal Transfer in the Cerebellum and Pontine Nuclei

    PubMed Central

    Campolattaro, Matthew M.; Kashef, Alireza; Lee, Inah; Freeman, John H.

    2011-01-01

    Cross-modal transfer occurs when learning established with a stimulus from one sensory modality facilitates subsequent learning with a new stimulus from a different sensory modality. The current study examined neuronal correlates of cross-modal transfer of Pavlovian eyeblink conditioning in rats. Neuronal activity was recorded from tetrodes within the anterior interpositus nucleus (IPN) of the cerebellum and basilar pontine nucleus (PN) during different phases of training. After stimulus pre-exposure and unpaired training sessions with a tone conditioned stimulus (CS), light CS, and periorbital stimulation unconditioned stimulus (US), rats received associative training with one of the CSs and the US (CS1-US). Training then continued on the same day with the other CS to assess cross-modal transfer (CS2-US). The final training session included associative training with both CSs on separate trials to establish stronger cross-modal transfer (CS1/CS2). Neurons in the IPN and PN showed primarily unimodal responses during pre-training sessions. Learning-related facilitation of activity correlated with the conditioned response (CR) developed in the IPN and PN during CS1-US training. Subsequent CS2-US training resulted in acquisition of CRs and learning-related neuronal activity in the IPN but substantially less learning-related activity in the PN. Additional CS1/CS2 training increased CRs and learning-related activity in the IPN and PN during CS2-US trials. The findings suggest that cross-modal neuronal plasticity in the PN is driven by excitatory feedback from the IPN to the PN. Interacting plasticity mechanisms in the IPN and PN may underlie behavioral cross-modal transfer in eyeblink conditioning. PMID:21411647

  7. Learning Discriminative Binary Codes for Large-scale Cross-modal Retrieval.

    PubMed

    Xu, Xing; Shen, Fumin; Yang, Yang; Shen, Heng Tao; Li, Xuelong

    2017-05-01

    Hashing based methods have attracted considerable attention for efficient cross-modal retrieval on large-scale multimedia data. The core problem of cross-modal hashing is how to learn compact binary codes that construct the underlying correlations between heterogeneous features from different modalities. A majority of recent approaches aim at learning hash functions to preserve the pairwise similarities defined by given class labels. However, these methods fail to explicitly explore the discriminative property of class labels during hash function learning. In addition, they usually discard the discrete constraints imposed on the to-be-learned binary codes, and compromise to solve a relaxed problem with quantization to obtain the approximate binary solution. Therefore, the binary codes generated by these methods are suboptimal and less discriminative to different classes. To overcome these drawbacks, we propose a novel cross-modal hashing method, termed discrete cross-modal hashing (DCH), which directly learns discriminative binary codes while retaining the discrete constraints. Specifically, DCH learns modality-specific hash functions for generating unified binary codes, and these binary codes are viewed as representative features for discriminative classification with class labels. An effective discrete optimization algorithm is developed for DCH to jointly learn the modality-specific hash function and the unified binary codes. Extensive experiments on three benchmark data sets highlight the superiority of DCH under various cross-modal scenarios and show its state-of-the-art performance.
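
    The general setting can be sketched compactly: learn a modality-specific projection for each modality that maps features onto a shared set of binary codes, then retrieve across modalities by Hamming distance. The Python sketch below (numpy only) fits plain least-squares projections onto label-derived target codes and binarizes by sign; it illustrates the cross-modal hashing idea only, not the DCH discrete optimization described above, and all data, dimensions, and names are hypothetical.

      import numpy as np

      rng = np.random.default_rng(1)
      n, d_img, d_txt, n_bits = 200, 64, 32, 16

      # Hypothetical paired image/text features sharing class labels
      labels = rng.integers(0, 4, n)
      img = rng.normal(size=(n, d_img)) + labels[:, None]
      txt = rng.normal(size=(n, d_txt)) + labels[:, None]

      # Shared target codes: random hyperplanes over one-hot labels (a crude
      # stand-in for the discriminative binary codes that DCH learns jointly)
      onehot = np.eye(4)[labels]
      target = np.sign(onehot @ rng.normal(size=(4, n_bits)))

      # Modality-specific linear hash functions fit by least squares, then binarized
      W_img, *_ = np.linalg.lstsq(img, target, rcond=None)
      W_txt, *_ = np.linalg.lstsq(txt, target, rcond=None)
      code_img = np.sign(img @ W_img)
      code_txt = np.sign(txt @ W_txt)

      # Cross-modal retrieval: rank text codes by Hamming distance to an image query
      query = code_img[0]
      hamming = (code_txt != query).sum(axis=1)
      print("best text matches for image 0:", np.argsort(hamming)[:5])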

  8. Context Modulates Congruency Effects in Selective Attention to Social Cues.

    PubMed

    Ravagli, Andrea; Marini, Francesco; Marino, Barbara F M; Ricciardelli, Paola

    2018-01-01

    Head and gaze directions are used during social interactions as essential cues to infer where someone attends. When head and gaze are oriented toward opposite directions, we need to extract socially meaningful information despite stimulus conflict. Recently, a cognitive and neural mechanism for filtering-out conflicting stimuli has been identified while performing non-social attention tasks. This mechanism is engaged proactively when conflict is anticipated in a high proportion of trials and reactively when conflict occurs infrequently. Here, we investigated whether a similar mechanism is at play for limiting distraction from conflicting social cues during gaze or head direction discrimination tasks in contexts with different probabilities of conflict. Results showed that, for the gaze direction task only (Experiment 1), inverse efficiency (IE) scores for distractor-absent trials (i.e., faces with averted gaze and centrally oriented head) were larger (indicating worse performance) when these trials were intermixed with congruent/incongruent distractor-present trials (i.e., faces with averted gaze and tilted head in the same/opposite direction) relative to when the same distractor-absent trials were shown in isolation. Moreover, on distractor-present trials, IE scores for congruent (vs. incongruent) head-gaze pairs in blocks with rare conflict were larger than in blocks with frequent conflict, suggesting that adaptation to conflict was more efficient than adaptation to infrequent events. However, when the task required discrimination of head orientation while ignoring gaze direction, performance was not impacted by both block-level and current trial congruency (Experiment 2), unless the cognitive load of the task was increased by adding a concurrent task (Experiment 3). Overall, our study demonstrates that during attention to social cues proactive cognitive control mechanisms are modulated by the expectation of conflicting stimulus information at both the block- and trial-sequence level, and by the type of task and cognitive load. This helps to clarify the inherent differences in the distracting potential of head and gaze cues during speeded social attention tasks.

  9. Enemies and friends in the neighborhood: orthographic similarity effects in semantic categorization.

    PubMed

    Pecher, Diane; Zeelenberg, René; Wagenmakers, Eric-Jan

    2005-01-01

    Studies investigating orthographic similarity effects in semantic tasks have produced inconsistent results. The authors investigated orthographic similarity effects in animacy decision and, in contrast with previous studies, took semantic congruency into account. In Experiments 1 and 2, performance to a target (cat) was better if a previously studied neighbor (rat) was congruent (i.e., belonged to the same animate-inanimate category) than if it was incongruent (e.g., mat). In Experiments 3 and 4, performance was better for targets with more preexisting congruent neighbors than for targets with more preexisting incongruent neighbors. These results demonstrate that orthographic similarity effects in semantic categorization are conditional on semantic congruency. This strongly suggests that semantic information becomes available before orthographic processing has been completed. 2005 APA

  10. Cross-modal individual recognition in wild African lions.

    PubMed

    Gilfillan, Geoffrey; Vitale, Jessica; McNutt, John Weldon; McComb, Karen

    2016-08-01

    Individual recognition is considered to have been fundamental in the evolution of complex social systems and is thought to be a widespread ability throughout the animal kingdom. Although robust evidence for individual recognition remains limited, recent experimental paradigms that examine cross-modal processing have demonstrated individual recognition in a range of captive non-human animals. It is now highly relevant to test whether cross-modal individual recognition exists within wild populations and thus examine how it is employed during natural social interactions. We address this question by testing audio-visual cross-modal individual recognition in wild African lions (Panthera leo) using an expectancy-violation paradigm. When presented with a scenario where the playback of a loud-call (roaring) broadcast from behind a visual block is incongruent with the conspecific previously seen there, subjects responded more strongly than during the congruent scenario where the call and individual matched. These findings suggest that lions are capable of audio-visual cross-modal individual recognition and provide a useful method for studying this ability in wild populations. © 2016 The Author(s).

  11. Influence of auditory spatial attention on cross-modal semantic priming effect: evidence from N400 effect.

    PubMed

    Wang, Hongyan; Zhang, Gaoyan; Liu, Baolin

    2017-01-01

    Semantic priming is an important research topic in the field of cognitive neuroscience. Previous studies have shown that the uni-modal semantic priming effect can be modulated by attention. However, the influence of attention on cross-modal semantic priming is unclear. To investigate this issue, the present study combined a cross-modal semantic priming paradigm with an auditory spatial attention paradigm, presenting the visual pictures as the prime stimuli and the semantically related or unrelated sounds as the target stimuli. Event-related potentials results showed that when the target sound was attended to, the N400 effect was evoked. The N400 effect was also observed when the target sound was not attended to, demonstrating that the cross-modal semantic priming effect persists even though the target stimulus is not focused on. Further analyses revealed that the N400 effect evoked by the unattended sound was significantly lower than the effect evoked by the attended sound. This contrast provides new evidence that the cross-modal semantic priming effect can be modulated by attention.

  12. Visual adaptation enhances action sound discrimination.

    PubMed

    Barraclough, Nick E; Page, Steve A; Keefe, Bruce D

    2017-01-01

    Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between 2 subsequently presented hand action sounds. Discrimination was most enhanced when the visual action "matched" the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.

  13. Neonatal Restriction of Tactile Inputs Leads to Long-Lasting Impairments of Cross-Modal Processing

    PubMed Central

    Röder, Brigitte; Hanganu-Opatz, Ileana L.

    2015-01-01

    Optimal behavior relies on the combination of inputs from multiple senses through complex interactions within neocortical networks. The ontogeny of this multisensory interplay is still unknown. Here, we identify critical factors that control the development of visual-tactile processing by combining in vivo electrophysiology with anatomical/functional assessment of cortico-cortical communication and behavioral investigation of pigmented rats. We demonstrate that the transient reduction of unimodal (tactile) inputs during a short period of neonatal development prior to the first cross-modal experience affects feed-forward subcortico-cortical interactions by attenuating the cross-modal enhancement of evoked responses in the adult primary somatosensory cortex. Moreover, the neonatal manipulation alters cortico-cortical interactions by decreasing the cross-modal synchrony and directionality in line with the sparsification of direct projections between primary somatosensory and visual cortices. At the behavioral level, these functional and structural deficits resulted in lower cross-modal matching abilities. Thus, neonatal unimodal experience during defined developmental stages is necessary for setting up the neuronal networks of multisensory processing. PMID:26600123

  14. The Relationship between Stroop Interference and Facilitation Effects: Statistical Artifacts, Baselines, and a Reassessment

    ERIC Educational Resources Information Center

    Brown, Tracy L.

    2011-01-01

    The relationship between interference and facilitation effects in the Stroop task is poorly understood yet central to its implications. At question is the modal view that they arise from a single mechanism--the congruency of color and word. Two developments have challenged that view: (a) the belief that facilitation effects are fractionally small…

  15. Revealing List-Level Control in the Stroop Task by Uncovering Its Benefits and a Cost

    ERIC Educational Resources Information Center

    Bugg, Julie M.; McDaniel, Mark A.; Scullin, Michael K.; Braver, Todd S.

    2011-01-01

    Interference is reduced in mostly incongruent relative to mostly congruent lists. Classic accounts of this list-wide proportion congruence effect assume that list-level control processes strategically modulate word reading. Contemporary accounts posit that reliance on the word is modulated poststimulus onset by item-specific information (e.g.,…

  16. Converging Evidence for Control of Color-Word Stroop Interference at the Item Level

    ERIC Educational Resources Information Center

    Bugg, Julie M.; Hutchison, Keith A.

    2013-01-01

    Prior studies have shown that cognitive control is implemented at the list and context levels in the color-word Stroop task. At first blush, the finding that Stroop interference is reduced for mostly incongruent items as compared with mostly congruent items (i.e., the item-specific proportion congruence [ISPC] effect) appears to provide evidence…

  17. Sounds activate visual cortex and improve visual discrimination.

    PubMed

    Feng, Wenfeng; Störmer, Viola S; Martinez, Antigona; McDonald, John J; Hillyard, Steven A

    2014-07-16

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. Copyright © 2014 the authors 0270-6474/14/349817-08$15.00/0.

  18. Olfactory discrimination: when vision matters?

    PubMed

    Demattè, M Luisa; Sanabria, Daniel; Spence, Charles

    2009-02-01

    Many previous studies have attempted to investigate the effect of visual cues on olfactory perception in humans. The majority of this research has only looked at the modulatory effect of color, which has typically been explained in terms of multisensory perceptual interactions. However, such crossmodal effects may equally well relate to interactions taking place at a higher level of information processing as well. In fact, it is well-known that semantic knowledge can have a substantial effect on people's olfactory perception. In the present study, we therefore investigated the influence of visual cues, consisting of color patches and/or shapes, on people's olfactory discrimination performance. Participants had to make speeded odor discrimination responses (lemon vs. strawberry) while viewing a red or yellow color patch, an outline drawing of a strawberry or lemon, or a combination of these color and shape cues. Even though participants were instructed to ignore the visual stimuli, our results demonstrate that the accuracy of their odor discrimination responses was influenced by visual distractors. This result shows that both color and shape information are taken into account during speeded olfactory discrimination, even when such information is completely task irrelevant, hinting at the automaticity of such higher level visual-olfactory crossmodal interactions.

  19. Automated cross-modal mapping in robotic eye/hand systems using plastic radial basis function networks

    NASA Astrophysics Data System (ADS)

    Meng, Qinggang; Lee, M. H.

    2007-03-01

    Advanced autonomous artificial systems will need incremental learning and adaptive abilities similar to those seen in humans. Knowledge from biology, psychology and neuroscience is now inspiring new approaches for systems that have sensory-motor capabilities and operate in complex environments. Eye/hand coordination is an important cross-modal cognitive function, and is also typical of many of the other coordinations that must be involved in the control and operation of embodied intelligent systems. This paper examines a biologically inspired approach for incrementally constructing compact mapping networks for eye/hand coordination. We present a simplified node-decoupled extended Kalman filter for radial basis function networks, and compare this with other learning algorithms. An experimental system consisting of a robot arm and a pan-and-tilt head with a colour camera is used to produce results and test the algorithms in this paper. We also present three approaches for adapting to structural changes during eye/hand coordination tasks, and the robustness of the algorithms under noise is investigated. The learning and adaptation approaches in this paper have similarities with current ideas about neural growth in the brains of humans and animals during tool use, and in infants during early cognitive development.
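
    As a minimal illustration of such a mapping network, the Python sketch below (numpy only) fits a Gaussian radial basis function network from simulated 2-D image coordinates to 2-D arm coordinates, solving for the output weights in closed form by least squares. The paper's own approach additionally grows the network incrementally and trains it with a node-decoupled extended Kalman filter; everything here, including the simulated eye-to-hand mapping and all names, is an assumed stand-in.

      import numpy as np

      rng = np.random.default_rng(2)

      def rbf_features(x, centers, width):
          """Gaussian RBF activations for 2-D inputs (image coordinates)."""
          d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * width ** 2))

      # Simulated training data: image (pixel) coordinates -> arm coordinates
      image_xy = rng.uniform(0, 1, size=(300, 2))
      arm_xy = np.column_stack([np.sin(3 * image_xy[:, 0]),   # hypothetical smooth
                                image_xy[:, 1] ** 2])         # eye-to-hand mapping

      centers = rng.uniform(0, 1, size=(25, 2))                # fixed RBF centres
      Phi = rbf_features(image_xy, centers, width=0.2)
      W, *_ = np.linalg.lstsq(Phi, arm_xy, rcond=None)         # output weights

      # Predict arm coordinates for a new gaze target
      test = np.array([[0.4, 0.7]])
      print("predicted arm position:", rbf_features(test, centers, 0.2) @ W)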

  20. Distinguishing response conflict and task conflict in the Stroop task: evidence from ex-Gaussian distribution analysis.

    PubMed

    Steinhauser, Marco; Hübner, Ronald

    2009-10-01

    It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were conducted in which manual versions of a standard Stroop task (Experiment 1) and a separated Stroop task (Experiment 2) were performed under task-switching conditions. Effects of response congruency and stimulus bivalency were used to measure response conflict and task conflict, respectively. Ex-Gaussian analysis revealed that response conflict was mainly observed in the Gaussian component, whereas task conflict was stronger in the exponential component. Moreover, task conflict in the exponential component was selectively enhanced under task-switching conditions. The results suggest that ex-Gaussian analysis can be used as a tool to isolate different conflict types in the Stroop task. PsycINFO Database Record (c) 2009 APA, all rights reserved.
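
    The decomposition itself is straightforward to reproduce: an ex-Gaussian response time is the sum of a Gaussian component (mu, sigma) and an exponential component (tau), and scipy's exponnorm distribution (parameterized by K = tau/sigma) can recover both components from raw data by maximum likelihood. The Python sketch below fits simulated response times; the data and parameter values are hypothetical and the snippet is not the authors' analysis pipeline.

      import numpy as np
      from scipy.stats import exponnorm

      rng = np.random.default_rng(3)

      # Simulated response times: Gaussian component plus exponential tail (seconds)
      mu, sigma, tau = 0.45, 0.05, 0.15
      rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

      # Maximum-likelihood fit; scipy uses K = tau / sigma, loc = mu, scale = sigma
      K, loc, scale = exponnorm.fit(rts)
      print(f"mu={loc:.3f}  sigma={scale:.3f}  tau={K * scale:.3f}")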

  1. Reward-Based Learning as a Function of Severity of Substance Abuse Risk in Drug-Naïve Youth with ADHD.

    PubMed

    Parvaz, Muhammad A; Kim, Kristen; Froudist-Walsh, Sean; Newcorn, Jeffrey H; Ivanov, Iliyan

    2018-06-20

    Attention-deficit/hyperactivity disorder (ADHD) is associated with elevated risk for later development of substance use disorders (SUD), specifically because youth with ADHD, similar to individuals with SUD, exhibit deficits in learning abilities and reward processing. Another known risk factor for SUD is familial history of substance dependence. Youth with familial SUD history show reward processing deficits, higher prevalence of externalizing disorders, and higher impulsivity scores. Thus, the main objective of this proof-of-concept study is to investigate whether risk loading (ADHD and parental substance use) for developing SUD in drug-naïve youth impacts reward-related learning. Forty-one drug-naïve youth, stratified into three groups: Healthy Controls (HC, n = 13; neither ADHD nor parental SUD), Low Risk (LR, n = 13; ADHD only), and High Risk (HR, n = 15; ADHD and parental SUD), performed a novel Anticipation, Conflict, and Reward (ACR) task. In addition to conventional reaction time (RT) and accuracy analyses, we analyzed computational variables including learning rates and assessed the influence of learned predictions of reward probability and stimulus congruency on RT. The multivariate ANOVA on learning rate, congruence, and prediction revealed a significant main Group effect across these variables [F(3, 37) = 3.79, p = 0.018]. There were significant linear effects for learning rate (Contrast Estimate = 0.181, p = 0.038) and the influence of stimulus congruency on RTs (Contrast Estimate = 1.16, p = 0.017). Post hoc comparisons revealed that HR youth showed the most significant deficits in accuracy and learning rates, while stimulus congruency had a lower impact on RTs in this group. LR youth showed scores between those of the HC and HR youth. These preliminary results suggest that deficits in learning and in adjusting to task difficulty are a function of increasing risk loading for SUD in drug-naïve youth. These results also highlight the importance of developing and applying computational models to study intricate details in behavior that typical analytic methodology may not be sensitive to.
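
    A learning rate in this sense is usually a free parameter of a delta-rule model of trial-by-trial reward prediction. The Python sketch below (numpy only) simulates reward outcomes, generates Rescorla-Wagner-style predictions, and recovers a best-fitting learning rate by grid search over one-step-ahead squared error. It illustrates the generic computational quantity only; it is not the ACR-task model used in the study, and all names and values are assumptions.

      import numpy as np

      rng = np.random.default_rng(4)

      # Simulated reward outcomes for one stimulus with 70% reward probability
      outcomes = rng.binomial(1, 0.7, 120)

      def delta_rule(outcomes, alpha, v0=0.5):
          """Trial-by-trial reward predictions under a Rescorla-Wagner update."""
          v, preds = v0, []
          for r in outcomes:
              preds.append(v)
              v += alpha * (r - v)      # prediction error scaled by learning rate
          return np.array(preds)

      # Estimate the learning rate by grid search on one-step-ahead squared error
      alphas = np.linspace(0.01, 1.0, 100)
      errors = [((outcomes - delta_rule(outcomes, a)) ** 2).mean() for a in alphas]
      print("best-fitting learning rate:", alphas[int(np.argmin(errors))])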

  2. Interplay Between the Object and Its Symbol: The Size-Congruency Effect

    PubMed Central

    Shen, Manqiong; Xie, Jiushu; Liu, Wenjuan; Lin, Wenjie; Chen, Zhuoming; Marmolejo-Ramos, Fernando; Wang, Ruiming

    2016-01-01

    Grounded cognition suggests that conceptual processing shares cognitive resources with perceptual processing. Hence, conceptual processing should be affected by perceptual processing, and vice versa. The current study explored the relationship between conceptual and perceptual processing of size. Within a pair of words, we manipulated the font size of each word, which was either congruent or incongruent with the actual size of the referred object. In Experiment 1a, participants compared object sizes that were referred to by word pairs. Higher accuracy was observed in the congruent condition (e.g., word pairs referring to larger objects in larger font sizes) than in the incongruent condition. This is known as the size-congruency effect. In Experiments 1b and 2, participants compared the font sizes of these word pairs. The size-congruency effect was not observed. In Experiments 3a and 3b, participants compared object and font sizes of word pairs depending on a task cue. Results showed that perceptual processing affected conceptual processing, and vice versa. This suggested that the association between conceptual and perceptual processes may be bidirectional but further modulated by semantic processing. Specifically, conceptual processing might only affect perceptual processing when semantic information is activated. The current study PMID:27512529

  3. Can contingency learning alone account for item-specific control? Evidence from within- and between-language ISPC effects.

    PubMed

    Atalay, Nart Bedin; Misirlisoy, Mine

    2012-11-01

    The item-specific proportion congruence (ISPC) manipulation (Jacoby, Lindsay, & Hessels, 2003) produces larger Stroop interference for mostly congruent items than mostly incongruent items. This effect has been attributed to dynamic control over word-reading processes. However, proportion congruence of an item in the ISPC manipulation is completely confounded with response contingency, suggesting the alternative hypothesis that the ISPC effect is a result of learning response contingencies (Schmidt & Besner, 2008). The current study asks whether the ISPC effect can be explained by a pure stimulus-response contingency-learning account, or whether other control processes play a role as well, by comparing within- and between-language conditions in a bilingual task. Experiment 1 showed that contingency learning for noncolor words was larger for the within-language than the between-language condition. Experiment 2 revealed significant ISPC effects for both within- and between-language conditions; importantly, the effect was larger in the former. The results of the contingency analyses for Experiment 2 were parallel to those of Experiment 1 and did not show an interaction between contingency and congruency. Taken together, these results support the view that contingency-learning processes dominate color-word ISPC effects.

  4. Enhancing emotional experiences to dance through music: the role of valence and arousal in the cross-modal bias.

    PubMed

    Christensen, Julia F; Gaigg, Sebastian B; Gomila, Antoni; Oke, Peter; Calvo-Merino, Beatriz

    2014-01-01

    It is well established that emotional responses to stimuli presented to one perceptual modality (e.g., visual) are modulated by the concurrent presentation of affective information to another modality (e.g., auditory), an effect known as the cross-modal bias. However, the affective mechanisms mediating this effect are still not fully understood. It remains unclear what role different dimensions of stimulus valence and arousal play in mediating the effect, and to what extent cross-modal influences impact not only our perception and conscious affective experiences, but also our psychophysiological emotional response. We addressed these issues by measuring participants' subjective emotion ratings and their Galvanic Skin Responses (GSR) in a cross-modal affect perception paradigm employing videos of ballet dance movements and instrumental classical music as the stimuli. We chose these stimuli to explore the cross-modal bias in a context of stimuli (ballet dance movements) that most participants would have relatively little prior experience with. Results showed (i) that the cross-modal bias was more pronounced for sad than for happy movements, whereas it was equivalent when contrasting high vs. low arousal movements; and (ii) that movement valence did not modulate participants' GSR, while movement arousal did, such that GSR was potentiated in the case of low arousal movements with sad music and when high arousal movements were paired with happy music. Results are discussed in the context of the affective dimension of neuroentrainment and with regard to implications for the art community.

  5. Developmental changes in the inferior frontal cortex for selecting semantic representations

    PubMed Central

    Lee, Shu-Hui; Booth, James R.; Chen, Shiou-Yuan; Chou, Tai-Li

    2012-01-01

    Functional magnetic resonance imaging (fMRI) was used to examine the neural correlates of semantic judgments to Chinese words in a group of 10- to 15-year-old Chinese children. Two semantic tasks were used: visual–visual versus visual–auditory presentation. The first word was visually presented (i.e., a character) and the second word was either visually or auditorily presented, and the participant had to determine if these two words were related in meaning. Unlike English, Chinese has many homophones in which each spoken word corresponds to many characters. The visual–auditory task, therefore, required greater engagement of cognitive control for the participants to select a semantically appropriate answer for the second homophonic word. Weaker association pairs produced greater activation in the mid-ventral region of left inferior frontal gyrus (BA 45) for both tasks. However, this effect was stronger for the visual–auditory task than for the visual–visual task and this difference was stronger for older compared to younger children. The findings suggest greater involvement of semantic selection mechanisms in the cross-modal task requiring access to the appropriate meaning of homophonic spoken words, especially for older children. PMID:22337757

  6. Integration of internal and external facial features in 8- to 10-year-old children and adults.

    PubMed

    Meinhardt-Injac, Bozana; Persike, Malte; Meinhardt, Günter

    2014-06-01

    Investigation of whole-part and composite effects in 4- to 6-year-old children gave rise to claims that face perception is fully mature within the first decade of life (Crookes & McKone, 2009). However, only internal features were tested, and the role of external features was not addressed, although external features are highly relevant for holistic face perception (Sinha & Poggio, 1996; Axelrod & Yovel, 2010, 2011). In this study, 8- to 10-year-old children and adults performed a same-different matching task with faces and watches. In this task participants attended to either internal or external features. Holistic face perception was tested using a congruency paradigm, in which face and non-face stimuli either agreed or disagreed in both features (congruent contexts) or just in the attended ones (incongruent contexts). In both age groups, pronounced context congruency and inversion effects were found for faces, but not for watches. These findings indicate holistic feature integration for faces. While inversion effects were highly similar in both age groups, context congruency effects were stronger for children. Moreover, children's face matching performance was generally better when attending to external compared to internal features. Adults tended to perform better when attending to internal features. Our results indicate that both adults and 8- to 10-year-old children integrate external and internal facial features into holistic face representations. However, in children's face representations external features are much more relevant. These findings suggest that face perception is holistic but still not adult-like at the end of the first decade of life. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Flexible and inflexible task sets: asymmetric interference when switching between emotional expression, sex, and age classification of perceived faces.

    PubMed

    Schuch, Stefanie; Werheid, Katja; Koch, Iring

    2012-01-01

    The present study investigated whether the processing characteristics of categorizing emotional facial expressions are different from those of categorizing facial age and sex information. Given that emotions change rapidly, it was hypothesized that processing facial expressions involves a more flexible task set that causes less between-task interference than the task sets involved in processing age or sex of a face. Participants switched between three tasks: categorizing a face as looking happy or angry (emotion task), young or old (age task), and male or female (sex task). Interference between tasks was measured by global interference and response interference. Both measures revealed patterns of asymmetric interference. Global between-task interference was reduced when a task was mixed with the emotion task. Response interference, as measured by congruency effects, was larger for the emotion task than for the nonemotional tasks. The results support the idea that processing emotional facial expression constitutes a more flexible task set that causes less interference (i.e., task-set "inertia") than processing the age or sex of a face.

  8. Unconscious Congruency Priming from Unpracticed Words Is Modulated by Prime-Target Semantic Relatedness

    ERIC Educational Resources Information Center

    Ortells, Juan J.; Mari-Beffa, Paloma; Plaza-Ayllon, Vanesa

    2013-01-01

    Participants performed a 2-choice categorization task on visible word targets that were preceded by novel (unpracticed) prime words. The prime words were presented for 33 ms and followed either immediately (Experiments 1-3) or after a variable delay (Experiments 1 and 4) by a pattern mask. Both subjective and objective measures of prime visibility…

  9. Effects of Peripheral Eccentricity and Head Orientation on Gaze Discrimination

    PubMed Central

    Palanica, Adam; Itier, Roxane J.

    2017-01-01

    Visual search tasks support a special role for direct gaze in human cognition, while classic gaze judgment tasks suggest that the congruency between head orientation and gaze direction plays a central role in gaze perception. Moreover, whether gaze direction can be accurately discriminated in the periphery using covert attention is unknown. In the present study, individual faces in frontal and in deviated head orientations with a direct or an averted gaze were flashed for 150 ms across the visual field; participants focused on a centred fixation while judging the gaze direction. Gaze discrimination speed and accuracy varied with head orientation and eccentricity. The limit of accurate gaze discrimination was less than ±6° eccentricity. Response times suggested a processing facilitation for direct gaze in the fovea, irrespective of head orientation; however, by ±3° eccentricity, head orientation started biasing gaze judgments, and this bias increased with eccentricity. Results also suggested a special processing of frontal heads with direct gaze in central vision, rather than a general congruency effect between eye and head cues. Thus, while both head and eye cues contribute to gaze discrimination, their roles differ with eccentricity. PMID:28344501

  10. Auditory conflict and congruence in frontotemporal dementia.

    PubMed

    Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D

    2017-09-01

    Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias; however, the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which the semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas, and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  11. Attentional distractor interference may be diminished by concurrent working memory load in normal participants and traumatic brain injury patients.

    PubMed

    Gil-Gómez de Liaño, Beatriz; Umiltà, Carlo; Stablum, Franca; Tebaldi, Francesca; Cantagallo, Anna

    2010-12-01

    A reduction in congruency effects under working memory (WM) load has been previously described using different attentional paradigms (e.g., Kim, Kim, & Chun, 2005; Smilek, Enns, Eastwood, & Merikle, 2006). One hypothesis is that different types of WM load have different effects on attentional selection, depending on whether a specific memory load demands resources in common with target or distractor processing. In particular, if information in WM is related to the distractors in the selective attention task, there is a reduction in distraction (Kim et al., 2005). However, although previous results seem to point to a decrease in interference under high WM load conditions (Kim et al., 2005), the lack of a neutral baseline for the congruency effects makes it difficult to differentiate between a decrease in interference or in facilitation. In the present work we included neutral trials in the task introduced by Kim et al. (2005) and tested normal participants and traumatic brain injury patients. Results support a reduction in the processing of distractors under WM load, at least for incongruent trials in both groups. Theoretical as well as applied implications are discussed. Copyright © 2010 Elsevier Inc. All rights reserved.

  12. Characterizing switching and congruency effects in the Implicit Association Test as reactive and proactive cognitive control.

    PubMed

    Hilgard, Joseph; Bartholow, Bruce D; Dickter, Cheryl L; Blanton, Hart

    2015-03-01

    Recent research has identified an important role for task switching, a cognitive control process often associated with executive functioning, in the Implicit Association Test (IAT). However, switching does not fully account for IAT effects, particularly when performance is scored using more recent d-score formulations. The current study sought to characterize multiple control processes involved in IAT performance through the use of event-related brain potentials (ERPs). Participants performed a race-evaluative IAT while ERPs were recorded. Behaviorally, participants experienced superadditive reaction time costs of incongruency and task switching, consistent with previous studies. The ERP showed a marked medial frontal negativity (MFN) 250-450 ms post-stimulus at midline fronto-central locations that was more negative for incongruent than congruent trials but more positive for switch than for no-switch trials, suggesting that separable control processes are engaged by these two factors. Greater behavioral IAT bias was associated with both greater switch-related and congruency-related ERP activity. Findings are discussed in terms of the Dual Mechanisms of Control model of reactive and proactive cognitive control. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  13. Differential processing of part-to-whole and part-to-part face priming: an ERP study.

    PubMed

    Jemel, B; George, N; Chaby, L; Fiori, N; Renault, B

    1999-04-06

    We provide electrophysiological evidence supporting the hypothesis that part and whole face processing involve distinct functional mechanisms. We used a congruency judgment task and studied part-to-whole and part-to-part priming effects. Neither part-to-whole nor part-to-part conditions elicited early congruency effects on face-specific ERP components, suggesting that activation of the internal representations should occur later on. However, these components showed differential responsiveness to whole faces and isolated eyes. In addition, although late ERP components were affected when the eye targets were not associated with the prime in both conditions, their temporal and topographical features depended on the latter. These differential effects suggest the existence of distributed neural networks in the inferior temporal cortex where part and whole facial representations may be stored.

  14. Synaesthesia: when coloured sounds taste sweet.

    PubMed

    Beeli, Gian; Esslen, Michaela; Jäncke, Lutz

    2005-03-03

    Synaesthesia is the involuntary physical experience of a cross-modal linkage--for example, hearing a tone (the inducing stimulus) evokes an additional sensation of seeing a colour (concurrent perception). Of the different types of synaesthesia, most have colour as the concurrent perception, with concurrent perceptions of smell or taste being rare. Here we describe the case of a musician who experiences different tastes in response to hearing different musical tone intervals, and who makes use of her synaesthetic sensations in the complex task of tone-interval identification. To our knowledge, this combination of inducing stimulus and concurrent perception has not been described before.

  15. Cross-modal plasticity in developmental and age-related hearing loss: Clinical implications.

    PubMed

    Glick, Hannah; Sharma, Anu

    2017-01-01

    This review explores cross-modal cortical plasticity as a result of auditory deprivation in populations with hearing loss across the age spectrum, from development to adulthood. Cross-modal plasticity refers to the phenomenon whereby deprivation in one sensory modality (e.g. the auditory modality as in deafness or hearing loss) results in the recruitment of cortical resources of the deprived modality by intact sensory modalities (e.g. visual or somatosensory systems). We discuss recruitment of auditory cortical resources for visual and somatosensory processing in deafness and in lesser degrees of hearing loss. We describe developmental cross-modal re-organization in the context of congenital or pre-lingual deafness in childhood and in the context of adult-onset, age-related hearing loss, with a focus on how cross-modal plasticity relates to clinical outcomes. We provide both single-subject and group-level evidence of cross-modal re-organization by the visual and somatosensory systems in bilateral, congenital deafness, single-sided deafness, adults with early-stage, mild-moderate hearing loss, and individual adult and pediatric patients exhibiting excellent and average speech perception with hearing aids and cochlear implants. We discuss a framework in which changes in cortical resource allocation secondary to hearing loss result in decreased intra-modal plasticity in auditory cortex, accompanied by increased cross-modal recruitment of auditory cortices by the other sensory systems, and simultaneous compensatory activation of frontal cortices. The frontal cortices, as we will discuss, play an important role in mediating cognitive compensation in hearing loss. Given the wide range of variability in behavioral performance following audiological intervention, changes in cortical plasticity may play a valuable role in the prediction of clinical outcomes following intervention. Further, the development of new technologies and rehabilitation strategies that incorporate brain-based biomarkers may help better serve hearing-impaired populations across the lifespan. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Cross-modal project prioritization : a TPCB peer exchange.

    DOT National Transportation Integrated Search

    2015-05-01

    This report highlights key recommendations and best practices identified at the peer exchange on Cross-Modal Project Prioritization, held on December 16 and 17, 2014, in Raleigh, North Carolina. This event was sponsored by the Transportation Planning...

  17. Oxytocin mediates early experience-dependent cross-modal plasticity in the sensory cortices.

    PubMed

    Zheng, Jing-Jing; Li, Shu-Jing; Zhang, Xiao-Di; Miao, Wan-Ying; Zhang, Dinghong; Yao, Haishan; Yu, Xiang

    2014-03-01

    Sensory experience is critical to development and plasticity of neural circuits. Here we report a new form of plasticity in neonatal mice, where early sensory experience cross-modally regulates development of all sensory cortices via oxytocin signaling. Unimodal sensory deprivation from birth through whisker deprivation or dark rearing reduced excitatory synaptic transmission in the correspondent sensory cortex and cross-modally in other sensory cortices. Sensory experience regulated synthesis and secretion of the neuropeptide oxytocin as well as its level in the cortex. Both in vivo oxytocin injection and increased sensory experience elevated excitatory synaptic transmission in multiple sensory cortices and significantly rescued the effects of sensory deprivation. Together, these results identify a new function for oxytocin in promoting cross-modal, experience-dependent cortical development. This link between sensory experience and oxytocin is particularly relevant to autism, where hypersensitivity or hyposensitivity to sensory inputs is prevalent and oxytocin is a hotly debated potential therapy.

  18. Multiple reference frames in haptic spatial processing

    NASA Astrophysics Data System (ADS)

    Volčič, R.

    2008-08-01

    The present thesis focused on haptic spatial processing. In particular, our interest was directed to the perception of spatial relations with the main focus on the perception of orientation. To this end, we studied haptic perception in different tasks, either in isolation or in combination with vision. The parallelity task, where participants have to match the orientations of two spatially separated bars, was used in its two-dimensional and three-dimensional versions in Chapter 2 and Chapter 3, respectively. The influence of non-informative vision and visual interference on performance in the parallelity task was studied in Chapter 4. A different task, the mental rotation task, was introduced in a purely haptic study in Chapter 5 and in a visuo-haptic cross-modal study in Chapter 6. The interaction of multiple reference frames and their influence on haptic spatial processing were the common denominators of these studies. In this thesis we approached the problems of which reference frames play the major role in haptic spatial processing and how the relative roles of distinct reference frames change depending on the available information and the constraints imposed by different tasks. We found that the influence of a reference frame centered on the hand was the major cause of the deviations from veridicality observed in both the two-dimensional and three-dimensional studies. The results were described by a weighted average model, in which the hand-centered egocentric reference frame is supposed to have a biasing influence on the allocentric reference frame. Performance in haptic spatial processing has been shown to depend also on sources of information or processing that are not strictly connected to the task at hand. When non-informative vision was provided, a beneficial effect was observed in the haptic performance. This improvement was interpreted as a shift from the egocentric to the allocentric reference frame. Moreover, interfering visual information presented in the vicinity of the haptic stimuli parametrically modulated the magnitude of the deviations. The influence of the hand-centered reference frame was shown also in the haptic mental rotation task where participants were quicker in judging the parity of objects when these were aligned with respect to the hands than when they were physically aligned. Similarly, in the visuo-haptic cross-modal mental rotation task the parity judgments were influenced by the orientation of the exploring hand with respect to the viewing direction. This effect was shown to be modulated also by an intervening temporal delay that supposedly counteracts the influence of the hand-centered reference frame. We suggest that the hand-centered reference frame is embedded in a hierarchical structure of reference frames where some of these emerge depending on the demands and the circumstances of the surrounding environment and the needs of an active perceiver.
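
    The weighted average model mentioned above can be made concrete with a small numerical sketch. The code below is only an illustration under assumed values (the weight w, the two orientation estimates, and the function name are hypothetical and not taken from the thesis): perceived orientation is treated as a compromise between an allocentric estimate and a hand-centred egocentric estimate, so any non-zero egocentric weight produces the systematic deviations from veridicality described above.

```python
# Minimal sketch of a weighted average account of haptic orientation perception.
# All values are illustrative assumptions, not the thesis' fitted parameters.

def perceived_orientation(allocentric_deg, egocentric_deg, w=0.3):
    """Weighted average of two orientation estimates (degrees); small angular
    differences are assumed so a linear average is adequate."""
    return (1 - w) * allocentric_deg + w * egocentric_deg

# A reference bar at 40 deg in space, felt with a hand posture that pulls the
# estimate toward 70 deg, yields a systematic deviation from veridical.
print(perceived_orientation(40.0, 70.0))   # -> 49.0
```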

  19. Motivational orientations and task autonomy fit: effects on organizational attraction.

    PubMed

    Wu, Yu-Chi

    2012-02-01

    The main purpose of this study was to investigate whether there is congruence between applicant needs (i.e., motivational orientations) and what is available (i.e., task autonomy) from an organizational perspective based on the fit between needs and supply. The fit between work motivation and task autonomy was examined to see whether it was associated with organizational attraction. This experimental study included two phases. Phase 1 participants consisted of 446 undergraduate students, of whom 228 were recruited to participate in Phase 2. The fit relations between task autonomy and intrinsic motivation and between task control and extrinsic motivation were characterized. Findings indicated that the fit between work motivation and task autonomy was positively associated with organizational attraction. Based on these results, it may be inferred that employers should emphasize job characteristics such as autonomy or control orientations to attract individuals, and focus on the most suitable work motivations for their organizations.

  20. Congenital Anophthalmia and Binocular Neonatal Enucleation Differently Affect the Proteome of Primary and Secondary Visual Cortices in Mice.

    PubMed

    Laramée, Marie-Eve; Smolders, Katrien; Hu, Tjing-Tjing; Bronchti, Gilles; Boire, Denis; Arckens, Lutgarde

    2016-01-01

    In blind individuals, visually deprived occipital areas are activated by non-visual stimuli. The extent of this cross-modal activation depends on the age at onset of blindness. Cross-modal inputs have access to several anatomical pathways to reactivate deprived visual areas. Ectopic cross-modal subcortical connections have been shown in anophthalmic animals but not in animals deprived of sight at a later age. Direct and indirect cross-modal cortical connections toward visual areas could also be involved, yet the number of neurons implicated is similar between blind mice and sighted controls. Changes at the axon terminal, dendritic spine or synaptic level are therefore expected upon loss of visual inputs. Here, the proteome of V1, V2M and V2L from P0-enucleated, anophthalmic and sighted mice, sharing a common genetic background (C57BL/6J x ZRDCT/An), was investigated by 2-D DIGE and Western analyses to identify molecular adaptations to enucleation and/or anophthalmia. Few proteins were differentially expressed in enucleated or anophthalmic mice in comparison to sighted mice. The loss of sight affected three pathways: metabolism, synaptic transmission and morphogenesis. Most changes were detected in V1, followed by V2M. Overall, cross-modal adaptations could be promoted in both models of early blindness but not through the exact same molecular strategy. A lower metabolic activity observed in visual areas of blind mice suggests that even if cross-modal inputs reactivate visual areas, they could remain suboptimally processed.

  1. Congenital Anophthalmia and Binocular Neonatal Enucleation Differently Affect the Proteome of Primary and Secondary Visual Cortices in Mice

    PubMed Central

    Smolders, Katrien; Hu, Tjing-Tjing; Bronchti, Gilles; Boire, Denis; Arckens, Lutgarde

    2016-01-01

    In blind individuals, visually deprived occipital areas are activated by non-visual stimuli. The extent of this cross-modal activation depends on the age at onset of blindness. Cross-modal inputs have access to several anatomical pathways to reactivate deprived visual areas. Ectopic cross-modal subcortical connections have been shown in anophthalmic animals but not in animals deprived of sight at a later age. Direct and indirect cross-modal cortical connections toward visual areas could also be involved, yet the number of neurons implicated is similar between blind mice and sighted controls. Changes at the axon terminal, dendritic spine or synaptic level are therefore expected upon loss of visual inputs. Here, the proteome of V1, V2M and V2L from P0-enucleated, anophthalmic and sighted mice, sharing a common genetic background (C57BL/6J x ZRDCT/An), was investigated by 2-D DIGE and Western analyses to identify molecular adaptations to enucleation and/or anophthalmia. Few proteins were differentially expressed in enucleated or anophthalmic mice in comparison to sighted mice. The loss of sight affected three pathways: metabolism, synaptic transmission and morphogenesis. Most changes were detected in V1, followed by V2M. Overall, cross-modal adaptations could be promoted in both models of early blindness but not through the exact same molecular strategy. A lower metabolic activity observed in visual areas of blind mice suggests that even if cross-modal inputs reactivate visual areas, they could remain suboptimally processed. PMID:27410964

  2. Linear Subspace Ranking Hashing for Cross-Modal Retrieval.

    PubMed

    Li, Kai; Qi, Guo-Jun; Ye, Jun; Hua, Kien A

    2017-09-01

    Hashing has attracted a great deal of research in recent years due to its effectiveness for the retrieval and indexing of large-scale high-dimensional multimedia data. In this paper, we propose a novel ranking-based hashing framework that maps data from different modalities into a common Hamming space where the cross-modal similarity can be measured using Hamming distance. Unlike existing cross-modal hashing algorithms where the learned hash functions are binary space partitioning functions, such as the sign and threshold function, the proposed hashing scheme takes advantage of a new class of hash functions closely related to rank correlation measures which are known to be scale-invariant, numerically stable, and highly nonlinear. Specifically, we jointly learn two groups of linear subspaces, one for each modality, so that features' ranking orders in different linear subspaces maximally preserve the cross-modal similarities. We show that the ranking-based hash function has a natural probabilistic approximation which transforms the original highly discontinuous optimization problem into one that can be efficiently solved using simple gradient descent algorithms. The proposed hashing framework is also flexible in the sense that the optimization procedures are not tied up to any specific form of loss function, which is typical for existing cross-modal hashing methods, but rather we can flexibly accommodate different loss functions with minimal changes to the learning steps. We demonstrate through extensive experiments on four widely-used real-world multimodal datasets that the proposed cross-modal hashing method can achieve competitive performance against several state-of-the-art methods with only moderate training and testing time.
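
    As a rough illustration of the ranking-based idea described above, the sketch below computes winner-take-all style hash codes from projections onto linear subspaces and retrieves cross-modal neighbours by Hamming distance over code symbols. This is not the authors' implementation: the subspace bases, function names, and data here are illustrative stand-ins, whereas the paper jointly learns one group of subspaces per modality so that ranking orders preserve the cross-modal similarities.

```python
import numpy as np

rng = np.random.default_rng(0)

def ranking_hash(features, subspaces):
    """Map features (n, d) to K-symbol codes: for each subspace, the index of
    the largest projection serves as one code symbol (a ranking-based hash)."""
    codes = []
    for W in subspaces:                 # W has shape (d, r)
        proj = features @ W             # (n, r) projections onto r axes
        codes.append(proj.argmax(axis=1))
    return np.stack(codes, axis=1)      # (n, K) integer code

def hamming(query_code, codes):
    """Number of differing code symbols between one query code and each row."""
    return (query_code != codes).sum(axis=1)

# Toy data: image features (d_img) and text features (d_txt)
n, d_img, d_txt, K, r = 100, 64, 32, 16, 4
img = rng.standard_normal((n, d_img))
txt = rng.standard_normal((n, d_txt))

# One group of subspaces per modality (random stand-ins for the learned ones)
img_subspaces = [rng.standard_normal((d_img, r)) for _ in range(K)]
txt_subspaces = [rng.standard_normal((d_txt, r)) for _ in range(K)]

img_codes = ranking_hash(img, img_subspaces)
txt_codes = ranking_hash(txt, txt_subspaces)

# Retrieve the text items closest (in Hamming distance) to the first image
nearest = np.argsort(hamming(img_codes[0], txt_codes))[:5]
print("Top-5 cross-modal matches for image 0:", nearest)
```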

  3. Odor Valence Linearly Modulates Attractiveness, but Not Age Assessment, of Invariant Facial Features in a Memory-Based Rating Task

    PubMed Central

    Seubert, Janina; Gregory, Kristen M.; Chamberland, Jessica; Dessirier, Jean-Marc; Lundström, Johan N.

    2014-01-01

    Scented cosmetic products are used across cultures as a way to favorably influence one's appearance. While crossmodal effects of odor valence on perceived attractiveness of facial features have been demonstrated experimentally, it is unknown whether they represent a phenomenon specific to affective processing. In this experiment, we presented odors in the context of a face battery with systematic feature manipulations during a speeded response task. Modulatory effects of linear increases of odor valence were investigated by juxtaposing subsequent memory-based ratings tasks – one predominantly affective (attractiveness) and a second, cognitive (age). The linear modulation pattern observed for attractiveness was consistent with additive effects of face and odor appraisal. Effects of odor valence on age perception were not linearly modulated and may be the result of cognitive interference. Affective and cognitive processing of faces thus appear to differ in their susceptibility to modulation by odors, likely as a result of privileged access of olfactory stimuli to affective brain networks. These results are critically discussed with respect to potential biases introduced by the preceding speeded response task. PMID:24874703

  4. The impact of phonetic dissimilarity on the perception of foreign accented speech

    NASA Astrophysics Data System (ADS)

    Weil, Shawn A.

    2003-10-01

    Non-normative speech (i.e., synthetic speech, pathological speech, foreign accented speech) is more difficult for native listeners to process than normative speech. Does perceptual dissimilarity affect only intelligibility, or are there other processing costs? The current series of experiments investigates both the intelligibility and the time course of foreign accented speech (FAS) perception. Native English listeners heard single English words spoken by both native English speakers and non-native speakers (Mandarin or Russian). Words were chosen based on the similarity between the phonetic inventories of the respective languages. Three experimental designs were used: a cross-modal matching task, a word repetition (shadowing) task, and two subjective rating tasks that measured impressions of accentedness and effortfulness. The results replicate previous investigations showing that FAS significantly lowers word intelligibility. Furthermore, FAS also increases perceptual effort: in the word repetition task, correct responses to accented words are slower than those to nonaccented words. An analysis indicates that both intelligibility and reaction time are, in part, functions of the similarity between the talker's utterance and the listener's representation of the word.

  5. Visual attention modulates brain activation to angry voices.

    PubMed

    Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas

    2011-06-29

    In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.

  6. A tweaking principle for executive control: neuronal circuit mechanism for rule-based task switching and conflict resolution.

    PubMed

    Ardid, Salva; Wang, Xiao-Jing

    2013-12-11

    A hallmark of executive control is the brain's agility to shift between different tasks depending on the behavioral rule currently in play. In this work, we propose a "tweaking hypothesis" for task switching: a weak rule signal provides a small bias that is dramatically amplified by reverberating attractor dynamics in neural circuits for stimulus categorization and action selection, leading to an all-or-none reconfiguration of sensory-motor mapping. Based on this principle, we developed a biologically realistic model with multiple modules for task switching. We found that the model quantitatively accounts for complex task-switching behavior (switch cost, congruency effect, and task-response interaction), as well as monkeys' single-neuron activity associated with task switching. The model yields several testable predictions, in particular that category-selective neurons play a key role in resolving sensory-motor conflict. This work represents a neural circuit model for task switching and sheds light on the brain mechanisms of a fundamental cognitive capability.

  7. An investigation of response and stimulus modality transfer effects after dual-task training in younger and older.

    PubMed

    Lussier, Maxime; Gagnon, Christine; Bherer, Louis

    2012-01-01

    It has been shown that dual-task training leads to significant improvement in dual-task performance in younger and older adults. However, the extent to which training benefits transfer to untrained tasks requires further investigation. The present study assessed (a) whether dual-task training leads to cross-modality transfer to untrained tasks using new stimulus and/or motor response modalities, (b) whether transfer effects are related to an improved ability to prepare and maintain multiple task-sets and/or to enhanced response coordination, and (c) whether there are age-related differences in transfer effects. Twenty-three younger and 23 older adults were randomly assigned to dual-task training or control conditions. All participants were assessed before and after training on three dual-task transfer conditions: (1) stimulus modality transfer, (2) response modality transfer, and (3) stimulus and response modality transfer. The training group showed larger improvement than the control group in all three dual-task transfer conditions, which suggests that training leads to more than specific learning of stimulus-response associations. Attentional cost analyses showed that training led to improved dual-task cost only in conditions that involved new stimulus or response modalities, but not both. Moreover, training did not lead to a reduced task-set cost in the transfer conditions, which suggests some limits on the transfer effects that can be expected. Overall, the present study supports the notion that cognitive plasticity for attentional control is preserved in late adulthood.

  8. Ad-hoc and context-dependent adjustments of selective attention in conflict control: an ERP study with visual probes.

    PubMed

    Nigbur, R; Schneider, J; Sommer, W; Dimigen, O; Stürmer, B

    2015-02-15

    Cognitive conflict control in flanker tasks has often been described using the zoom-lens metaphor of selective attention. However, whether and how selective attention - in terms of suppression and enhancement - operates in this context has remained unclear. To examine the dynamic interplay of selective attention and cognitive control we used electrophysiological measures and presented task-irrelevant visual probe stimuli at foveal, parafoveal, and peripheral display positions. Target-flanker congruency varied either randomly from trial to trial (mixed-block) or block-wise (fixed-block) in order to induce reactive versus proactive control modes, respectively. Three EEG measures were used to capture ad-hoc adjustments within trials as well as effects of context-based predictions: the N1 component of the visual evoked potential (VEP) to probes, the VEP to targets, and the conflict-related midfrontal N2 component. Results from probe-VEPs indicate that enhanced processing of the foveal target rather than suppression of the peripheral flankers supports interference control. In incongruent mixed-block trials VEPs were larger to probes near the targets. In the fixed-blocks probe-VEPs were not modulated, but contrary to the mixed-block the preceding target-related VEP was affected by congruency. Results of the control-related N2 reveal largest amplitudes in the unpredictable context, which did not differentiate for stimulus and response incongruency. In contrast, in the predictable context, N2 amplitudes were reduced overall and differentiated between stimulus and response incongruency. Taken together these results imply that predictability alters interference control by a reconfiguration of stimulus processing. During unpredictable sequences participants adjust their attentional focus dynamically on a trial-by-trial basis as reflected in congruency-dependent probe-VEP-modulation. This reactive control mode also elicits larger N2 amplitudes. In contrast, when task demands are predictable, participants focus selective attention earlier as reflected in the target-related VEPs. This proactive control mode leads to smaller N2 amplitudes and absent probe effects. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Integration of auditory and kinesthetic information in motion: alterations in Parkinson's disease.

    PubMed

    Sabaté, Magdalena; Llanos, Catalina; Rodríguez, Manuel

    2008-07-01

    The main aim of this work was to study the interaction between auditory and kinesthetic stimuli and its influence on motion control. The study was performed on healthy subjects and patients with Parkinson's disease (PD). Thirty-five right-handed volunteers (young participants, healthy participants age-matched to the PD group, and PD patients) were studied with three different motor tasks (slow cyclic movements, fast cyclic movements, and slow continuous movements) and under the action of kinesthetic stimuli and sounds at different beat rates. The action of kinesthesia was evaluated by comparing real movements with virtual movements (movements imagined but not executed). The fast cyclic task was accelerated by kinesthetic but not by auditory stimuli. The slow cyclic task changed with the beat rate of sounds but not with kinesthetic stimuli. The slow continuous task showed an integrated response to both sensory modalities. These data show that the influence of multisensory integration on motion changes with the motor task and that some motor patterns are modulated by the simultaneous action of auditory and kinesthetic information, a cross-modal integration that was different in PD patients. PsycINFO Database Record (c) 2008 APA, all rights reserved.

  10. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    PubMed

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Early Sign Language Experience Goes Along with an Increased Cross-modal Gain for Affective Prosodic Recognition in Congenitally Deaf CI Users.

    PubMed

    Fengler, Ineke; Delfau, Pia-Céline; Röder, Brigitte

    2018-04-01

    It is yet unclear whether congenitally deaf cochlear implant (CD CI) users' visual and multisensory emotion perception is influenced by their history in sign language acquisition. We hypothesized that early-signing CD CI users, relative to late-signing CD CI users and hearing, non-signing controls, show better facial expression recognition and rely more on the facial cues of audio-visual emotional stimuli. Two groups of young adult CD CI users, early signers (ES CI users; n = 11) and late signers (LS CI users; n = 10), and a group of hearing, non-signing, age-matched controls (n = 12) performed an emotion recognition task with auditory, visual, and cross-modal emotionally congruent and incongruent speech stimuli. On different trials, participants categorized either the facial or the vocal expressions. The ES CI users more accurately recognized affective prosody than the LS CI users in the presence of congruent facial information. Furthermore, the ES CI users, but not the LS CI users, gained more than the controls from congruent visual stimuli when recognizing affective prosody. Both CI groups performed overall worse than the controls in recognizing affective prosody. These results suggest that early sign language experience affects multisensory emotion perception in CD CI users.

  12. In defense of abstract conceptual representations.

    PubMed

    Binder, Jeffrey R

    2016-08-01

    An extensive program of research in the past 2 decades has focused on the role of modal sensory, motor, and affective brain systems in storing and retrieving concept knowledge. This focus has led in some circles to an underestimation of the need for more abstract, supramodal conceptual representations in semantic cognition. Evidence for supramodal processing comes from neuroimaging work documenting a large, well-defined cortical network that responds to meaningful stimuli regardless of modal content. The nodes in this network correspond to high-level "convergence zones" that receive broadly crossmodal input and presumably process crossmodal conjunctions. It is proposed that highly conjunctive representations are needed for several critical functions, including capturing conceptual similarity structure, enabling thematic associative relationships independent of conceptual similarity, and providing efficient "chunking" of concept representations for a range of higher order tasks that require concepts to be configured as situations. These hypothesized functions account for a wide range of neuroimaging results showing modulation of the supramodal convergence zone network by associative strength, lexicality, familiarity, imageability, frequency, and semantic compositionality. The evidence supports a hierarchical model of knowledge representation in which modal systems provide a mechanism for concept acquisition and serve to ground individual concepts in external reality, whereas broadly conjunctive, supramodal representations play an equally important role in concept association and situation knowledge.

  13. Impairments in multisensory processing are not universal to the autism spectrum: no evidence for crossmodal priming deficits in Asperger syndrome.

    PubMed

    David, Nicole; R Schneider, Till; Vogeley, Kai; Engel, Andreas K

    2011-10-01

    Individuals suffering from autism spectrum disorders (ASD) often show a tendency for detail- or feature-based perception (also referred to as "local processing bias") instead of the more holistic stimulus processing typical for unaffected people. This local processing bias has been demonstrated for the visual and auditory domains, and there is evidence that multisensory processing may also be affected in ASD. Most multisensory processing paradigms used social-communicative stimuli, such as human speech or faces, probing the processing of simultaneously occurring sensory signals. Multisensory processing, however, is not limited to simultaneous stimulation. In this study, we investigated whether multisensory processing deficits in ASD persist when semantically complex but nonsocial stimuli are presented in succession. Fifteen adult individuals with Asperger syndrome and 15 control persons participated in a visual-auditory priming task, which required the classification of sounds that were primed by either semantically congruent or incongruent preceding pictures of objects. As expected, performance on congruent trials was faster and more accurate compared with incongruent trials (crossmodal priming effect). The Asperger group, however, did not differ significantly from the control group. Our results do not support a general multisensory processing deficit that is universal to the entire autism spectrum. Copyright © 2011, International Society for Autism Research, Wiley-Liss, Inc.

  14. The role of semantic and phonological factors in word recognition: an ERP cross-modal priming study of derivational morphology.

    PubMed

    Kielar, Aneta; Joanisse, Marc F

    2011-01-01

    Theories of morphological processing differ on the issue of how lexical and grammatical information are stored and accessed. A key point of contention is whether complex forms are decomposed during recognition (e.g., establish+ment), compared to forms that cannot be analyzed into constituent morphemes (e.g., apartment). In the present study, we examined these issues with respect to English derivational morphology by measuring ERP responses during a cross-modal priming lexical decision task. ERP priming effects for semantically and phonologically transparent derived words (government-govern) were compared to those of semantically opaque derived words (apartment-apart) as well as "quasi-regular" items that represent intermediate cases of morphological transparency (dresser-dress). Additional conditions independently manipulated semantic and phonological relatedness in non-derived words (semantics: couch-sofa; phonology: panel-pan). The degree of N400 ERP priming to morphological forms varied depending on the amount of semantic and phonological overlap between word types, rather than respecting a bivariate distinction between derived and opaque forms. Moreover, these effects could not be accounted for by semantic or phonological relatedness alone. The findings support the theory that morphological relatedness is graded rather than absolute, and depend on the joint contribution of form and meaning overlap. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. Development of compositional and contextual communicable congruence in robots by using dynamic neural network models.

    PubMed

    Park, Gibeom; Tani, Jun

    2015-12-01

    The current study presents neurorobotics experiments on the acquisition of skills for "communicable congruence" with humans via learning. A dynamic neural network model characterized by its multiple-timescale dynamics, a multiple timescale recurrent neural network (MTRNN), was utilized as a neuromorphic model for controlling a humanoid robot. In the experimental task, the humanoid robot was trained to generate specific sequential movement patterns in response to various sequences of imperative gesture patterns demonstrated by human subjects, following predefined compositional semantic rules. The experimental results showed that (1) the adopted MTRNN can achieve generalization by learning at the lower, feature-perception level using a limited set of tutoring patterns, (2) the MTRNN can learn to extract compositional semantic rules with generalization at its higher level, characterized by slow timescale dynamics, and (3) the MTRNN can develop a further cognitive capability for controlling internal contextual processes as situated in ongoing task sequences, without being provided with cues explicitly indicating task segmentation points. Analysis of the dynamic properties developed in the MTRNN via learning indicated that these cognitive mechanisms were achieved by self-organization of an adequate functional hierarchy, exploiting the constraint of the multiple-timescale property and the topological connectivity imposed on the network configuration. These results could contribute to the development of socially intelligent robots endowed with cognitive communicative competency similar to that of humans. Copyright © 2015 Elsevier Ltd. All rights reserved.
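
    For readers unfamiliar with the multiple-timescale property, the sketch below shows the kind of leaky-integrator dynamics an MTRNN builds on: fast units with a small time constant coupled to slow units with a large one. The weights, sizes, and time constants are arbitrary illustrative values, not the trained network from the study, in which the slow level comes to encode the compositional semantic rules.

```python
import numpy as np

# Minimal sketch of multiple-timescale leaky-integrator dynamics (assumed,
# untrained parameters): fast units track quick sensory-motor detail while
# slow units carry abstract, rule-like context.
rng = np.random.default_rng(1)
n_fast, n_slow, n_in = 20, 10, 5
tau_fast, tau_slow = 2.0, 50.0          # timescale separation

W_ff = rng.standard_normal((n_fast, n_fast)) * 0.1
W_fs = rng.standard_normal((n_fast, n_slow)) * 0.1
W_fi = rng.standard_normal((n_fast, n_in)) * 0.1
W_ss = rng.standard_normal((n_slow, n_slow)) * 0.1
W_sf = rng.standard_normal((n_slow, n_fast)) * 0.1

u_fast = np.zeros(n_fast)               # internal (membrane) states
u_slow = np.zeros(n_slow)

def step(x):
    """One leaky-integrator update; each level decays toward its input drive
    at a rate set by its own time constant."""
    global u_fast, u_slow
    y_fast, y_slow = np.tanh(u_fast), np.tanh(u_slow)
    u_fast += (-u_fast + W_ff @ y_fast + W_fs @ y_slow + W_fi @ x) / tau_fast
    u_slow += (-u_slow + W_ss @ y_slow + W_sf @ y_fast) / tau_slow
    return np.tanh(u_fast), np.tanh(u_slow)

for t in range(100):
    fast_out, slow_out = step(rng.standard_normal(n_in))

# The slow layer changes far less from step to step than the fast layer does.
print("fast sd:", fast_out.std(), "slow sd:", slow_out.std())
```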

  16. The working memory stroop effect: when internal representations clash with external stimuli.

    PubMed

    Kiyonaga, Anastasia; Egner, Tobias

    2014-08-01

    Working memory (WM) has recently been described as internally directed attention, which implies that WM content should affect behavior exactly like an externally perceived and attended stimulus. We tested whether holding a color word in WM, rather than attending to it in the external environment, can produce interference in a color-discrimination task, which would mimic the classic Stroop effect. Over three experiments, the WM Stroop effect recapitulated core properties of the classic attentional Stroop effect, displaying equivalent congruency effects, additive contributions from stimulus- and response-level congruency, and susceptibility to modulation by the percentage of congruent and incongruent trials. Moreover, WM maintenance was inversely related to attentional demands during the WM delay between stimulus presentation and recall, with poorer memory performance following incongruent than congruent trials. Together, these results suggest that WM and attention rely on the same resources and operate over the same representations. © The Author(s) 2014.

  17. Opposite ERP effects for conscious and unconscious semantic processing under continuous flash suppression.

    PubMed

    Yang, Yung-Hao; Zhou, Jifan; Li, Kuei-An; Hung, Tifan; Pegna, Alan J; Yeh, Su-Ling

    2017-09-01

    We examined whether semantic processing occurs without awareness using continuous flash suppression (CFS). In two priming tasks, participants were required to judge whether a target was a word or a non-word, and to report whether the masked prime was visible. Experiment 1 manipulated the lexical congruency between the prime-target pairs and Experiment 2 manipulated their semantic relatedness. Despite the absence of behavioral priming effects (Experiment 1), the ERP results revealed that an N4 component was sensitive to the prime-target lexical congruency (Experiment 1) and semantic relatedness (Experiment 2) when the prime was rendered invisible under CFS. However, these results were reversed with respect to those that emerged when the stimuli were perceived consciously. Our findings suggest that some form of lexical and semantic processing can occur during CFS-induced unawareness, but are associated with different electrophysiological outcomes. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. The Role of RT Carry-Over for Congruence Sequence Effects in Masked Priming

    ERIC Educational Resources Information Center

    Huber-Huber, Christoph; Ansorge, Ulrich

    2017-01-01

    The present study disentangles 2 sources of the congruence sequence effect with masked primes: congruence and response time of the previous trial (reaction time [RT] carry-over). Using arrows as primes and targets and a metacontrast masking procedure we found congruence as well as congruence sequence effects. In addition, congruence sequence…

  19. Contribution of fronto-striatal regions to emotional valence and repetition under cognitive conflict.

    PubMed

    Chun, Ji-Won; Park, Hae-Jeong; Kim, Dai Jin; Kim, Eosu; Kim, Jae-Jin

    2017-07-01

    Conflict processing mediated by fronto-striatal regions may be influenced by emotional properties of stimuli. This study aimed to examine the effects of emotion repetition on cognitive control in a conflict-provoking situation. Twenty-one healthy subjects were scanned using functional magnetic resonance imaging while performing a sequential cognitive conflict task composed of emotional stimuli. The regional effects were analyzed according to the repetition or non-repetition of cognitive congruency and emotional valence between the preceding and current trials. Post-incongruence interference in error rate and reaction time was significantly smaller than post-congruence interference, particularly under repeated positive and non-repeated positive, respectively, and post-incongruence interference, compared to post-congruence interference, increased activity in the ACC, DLPFC, and striatum. ACC and DLPFC activities were significantly correlated with error rate or reaction time in some conditions, and fronto-striatal connections were related to the conflict processing heightened by negative emotion. These findings suggest that the repetition of emotional stimuli adaptively regulates cognitive control and the fronto-striatal circuit may engage in the conflict adaptation process induced by emotion repetition. Both repetition enhancement and repetition suppression of prefrontal activity may underlie the relationship between emotion and conflict adaptation. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Cross-modal enhancement of speech detection in young and older adults: does signal content matter?

    PubMed

    Tye-Murray, Nancy; Spehar, Brent; Myerson, Joel; Sommers, Mitchell S; Hale, Sandra

    2011-01-01

    The purpose of the present study was to examine the effects of age and visual content on cross-modal enhancement of auditory speech detection. Visual content consisted of three clearly distinct types of visual information: an unaltered video clip of a talker's face, a low-contrast version of the same clip, and a mouth-like Lissajous figure. It was hypothesized that both young and older adults would exhibit reduced enhancement as visual content diverged from the original clip of the talker's face, but that the decrease would be greater for older participants. Nineteen young adults and 19 older adults were asked to detect a single spoken syllable (/ba/) in speech-shaped noise, and the level of the signal was adaptively varied to establish the signal-to-noise ratio (SNR) at threshold. There was an auditory-only baseline condition and three audiovisual conditions in which the syllable was accompanied by one of the three visual signals (the unaltered clip of the talker's face, the low-contrast version of that clip, or the Lissajous figure). For each audiovisual condition, the SNR at threshold was compared with the SNR at threshold for the auditory-only condition to measure the amount of cross-modal enhancement. Young adults exhibited significant cross-modal enhancement with all three types of visual stimuli, with the greatest amount of enhancement observed for the unaltered clip of the talker's face. Older adults, in contrast, exhibited significant cross-modal enhancement only with the unaltered face. Results of this study suggest that visual signal content affects cross-modal enhancement of speech detection in both young and older adults. They also support a hypothesized age-related deficit in processing low-contrast visual speech stimuli, even in older adults with normal contrast sensitivity.

  1. Do different perceptual task sets modulate electrophysiological correlates of masked visuomotor priming? Attention to shape and color put to the test.

    PubMed

    Zovko, Monika; Kiefer, Markus

    2013-02-01

    According to classical theories, automatic processes operate independently of attention. Recent evidence, however, shows that masked visuomotor priming, an example of an automatic process, depends on attention to visual form versus semantics. In a continuation of this approach, we probed feature-specific attention within the perceptual domain and tested in two event-related potential (ERP) studies whether masked visuomotor priming in a shape decision task specifically depends on attentional sensitization of visual pathways for shape in contrast to color. Prior to the masked priming procedure, a shape or a color decision task served to induce corresponding task sets. ERP analyses revealed visuomotor priming effects over the occipitoparietal scalp only after the shape, but not after the color induction task. Thus, top-down control coordinates automatic processing streams in congruency with higher-level goals even at a fine-grained level. Copyright © 2012 Society for Psychophysiological Research.

  2. Congruency effects in dot comparison tasks: convex hull is more important than dot area.

    PubMed

    Gilmore, Camilla; Cragg, Lucy; Hogan, Grace; Inglis, Matthew

    2016-11-16

    The dot comparison task, in which participants select the more numerous of two dot arrays, has become the predominant method of assessing Approximate Number System (ANS) acuity. Creation of the dot arrays requires the manipulation of visual characteristics, such as dot size and convex hull. For the task to provide a valid measure of ANS acuity, participants must ignore these characteristics and respond on the basis of number. Here, we report two experiments that explore the influence of dot area and convex hull on participants' accuracy on dot comparison tasks. We found that individuals' ability to ignore dot area information increases with age and display time. However, the influence of convex hull information remains stable across development and with additional time. This suggests that convex hull information is more difficult to inhibit when making judgements about numerosity and therefore it is crucial to control this when creating dot comparison tasks.
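
    Because the abstract stresses that convex hull and dot area must be controlled when creating dot comparison stimuli, the following sketch (hypothetical stimulus parameters and function name, not the authors' generation code) shows how both quantities can be computed for a candidate dot array so that competing arrays can be matched on them.

```python
import numpy as np
from scipy.spatial import ConvexHull

def dot_array_stats(centres, radii):
    """Return the two visual characteristics to be controlled for a dot array
    given as centre coordinates (n, 2) and radii (n,): total dot area and the
    area of the convex hull spanned by the dot centres."""
    total_dot_area = float(np.sum(np.pi * radii ** 2))
    hull_area = float(ConvexHull(centres).volume)   # .volume is area in 2D
    return total_dot_area, hull_area

rng = np.random.default_rng(3)
centres = rng.uniform(0, 200, size=(12, 2))         # 12 dots in a 200x200 field
radii = rng.uniform(3, 6, size=12)

area, hull = dot_array_stats(centres, radii)
print(f"total dot area = {area:.1f} px^2, convex hull area = {hull:.1f} px^2")
```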

  3. Cross-Modal Binding in Developmental Dyslexia

    ERIC Educational Resources Information Center

    Jones, Manon W.; Branigan, Holly P.; Parra, Mario A.; Logie, Robert H.

    2013-01-01

    The ability to learn visual-phonological associations is a unique predictor of word reading, and individuals with developmental dyslexia show impaired ability in learning these associations. In this study, we compared developmentally dyslexic and nondyslexic adults on their ability to form cross-modal associations (or "bindings") based…

  4. A Cross-Modal Assessment of Reading Achievement in Children.

    ERIC Educational Resources Information Center

    Webb, Kathryn; And Others

    1982-01-01

    This study examined the ability of the Listen and Look (LL) test of cross-modal perception and the Metropolitan Readiness Test (MRT) to predict reading achievement. Data from 79 first-grade pupils were analyzed. Both the LL and MRT demonstrated predictive validity. (Author/BW)

  5. Cross-mode bioelectrical impedance analysis in a standing position for estimating fat-free mass validated against dual-energy x-ray absorptiometry.

    PubMed

    Huang, Ai-Chun; Chen, Yu-Yawn; Chuang, Chih-Lin; Chiang, Li-Ming; Lu, Hsueh-Kuan; Lin, Hung-Chi; Chen, Kuen-Tsann; Hsiao, An-Chi; Hsieh, Kuen-Chang

    2015-11-01

    Bioelectrical impedance analysis (BIA) is commonly used to assess body composition. Cross-mode (left hand to right foot, Z(CR)) BIA presumably uses the longest current path in the human body, which may generate better results when estimating fat-free mass (FFM). We compared the cross-mode with the hand-to-foot mode (right hand to right foot, Z(HF)) using dual-energy x-ray absorptiometry (DXA) as the reference. We hypothesized that when comparing anthropometric parameters using stepwise regression analysis, the impedance value from the cross-mode analysis would have better prediction accuracy than that from the hand-to-foot mode analysis. We studied 264 men and 232 women (mean ages, 32.19 ± 14.95 and 34.51 ± 14.96 years, respectively; mean body mass indexes, 24.54 ± 3.74 and 23.44 ± 4.61 kg/m2, respectively). The DXA-measured FFMs in men and women were 58.85 ± 8.15 and 40.48 ± 5.64 kg, respectively. Multiple stepwise linear regression analyses were performed to construct sex-specific FFM equations. The correlations of FFM measured by DXA vs. FFM from hand-to-foot mode and estimated FFM by cross-mode were 0.85 and 0.86 in women, with standard errors of estimate of 2.96 and 2.92 kg, respectively. In men, they were 0.91 and 0.91, with standard errors of the estimates of 3.34 and 3.48 kg, respectively. Bland-Altman plots showed limits of agreement of -6.78 to 6.78 kg for FFM from hand-to-foot mode and -7.06 to 7.06 kg for estimated FFM by cross-mode for men, and -5.91 to 5.91 and -5.84 to 5.84 kg, respectively, for women. Paired t tests showed no significant differences between the 2 modes (P > .05). Hence, cross-mode BIA appears to represent a reasonable and practical application for assessing FFM in Chinese populations. Copyright © 2015 Elsevier Inc. All rights reserved.
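
    The Bland-Altman limits of agreement reported above are simply the mean difference between the two methods plus or minus 1.96 times the standard deviation of the differences. The sketch below computes them on simulated data (the individual FFM values and the BIA error spread are hypothetical; only the group mean and SD for men are borrowed from the abstract) to make the calculation explicit.

```python
import numpy as np

# Bland-Altman limits of agreement on simulated FFM data (illustrative only).
rng = np.random.default_rng(2)
ffm_dxa = rng.normal(58.85, 8.15, 264)          # DXA reference (kg), men
ffm_bia = ffm_dxa + rng.normal(0.0, 3.5, 264)   # hypothetical BIA estimates

diff = ffm_bia - ffm_dxa
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)            # half-width of the 95% limits

print(f"bias = {bias:.2f} kg, limits of agreement = "
      f"{bias - half_width:.2f} to {bias + half_width:.2f} kg")
```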

  6. Task demands affect spatial reference frame weighting during tactile localization in sighted and congenitally blind adults

    PubMed Central

    Schubert, Jonathan T. W.; Badde, Stephanie; Röder, Brigitte

    2017-01-01

    Task demands modulate tactile localization in sighted humans, presumably through weight adjustments in the spatial integration of anatomical, skin-based, and external, posture-based information. In contrast, previous studies have suggested that congenitally blind humans, by default, refrain from automatic spatial integration and localize touch using only skin-based information. Here, sighted and congenitally blind participants localized tactile targets on the palm or back of one hand, while ignoring simultaneous tactile distractors at congruent or incongruent locations on the other hand. We probed the interplay of anatomical and external location codes for spatial congruency effects by varying hand posture: the palms either both faced down, or one faced down and one up. In the latter posture, externally congruent target and distractor locations were anatomically incongruent and vice versa. Target locations had to be reported either anatomically (“palm” or “back” of the hand), or externally (“up” or “down” in space). Under anatomical instructions, performance was more accurate for anatomically congruent than incongruent target-distractor pairs. In contrast, under external instructions, performance was more accurate for externally congruent than incongruent pairs. These modulations were evident in sighted and blind individuals. Notably, distractor effects were overall far smaller in blind than in sighted participants, despite comparable target-distractor identification performance. Thus, the absence of developmental vision seems to be associated with an increased ability to focus tactile attention towards a non-spatially defined target. Nevertheless, that blind individuals exhibited effects of hand posture and task instructions in their congruency effects suggests that, like the sighted, they automatically integrate anatomical and external information during tactile localization. Moreover, spatial integration in tactile processing is, thus, flexibly adapted by top-down information—here, task instruction—even in the absence of developmental vision. PMID:29228023

  7. Neural Correlates of Task-Irrelevant First and Second Language Emotion Words – Evidence from the Emotional Face–Word Stroop Task

    PubMed Central

    Fan, Lin; Xu, Qiang; Wang, Xiaoxi; Zhang, Feng; Yang, Yaping; Liu, Xiaoping

    2016-01-01

    Emotionally valenced words have thus far not been empirically examined in a bilingual population with the emotional face–word Stroop paradigm. Chinese-English bilinguals were asked to identify the facial expressions of emotion with their first (L1) or second (L2) language task-irrelevant emotion words superimposed on the face pictures. We attempted to examine how the emotional content of words modulated behavioral performance and cerebral functioning in the bilinguals’ two languages. The results indicated that there were significant congruency effects for both L1 and L2 emotion words, and that identifiable differences in the magnitude of the Stroop effect between the two languages were also observed, suggesting L1 is more capable of activating the emotional response to word stimuli. For event-related potentials data, an N350–550 effect was observed only in the L1 task with greater negativity for incongruent than congruent trials. The size of the N350–550 effect differed across languages, whereas no identifiable language distinction was observed in the effect of conflict slow potential (conflict SP). Finally, more pronounced negative amplitude at 230–330 ms was observed in L1 than in L2, but only for incongruent trials. This negativity, likened to an orthographic decoding N250, may reflect the extent of attention to emotion word processing at word-form level, while the N350–550 reflects a complicated set of processes in the conflict processing. Overall, the face–word congruency effect has reflected identifiable language distinction at 230–330 and 350-550 ms, which provides supporting evidence for the theoretical proposals assuming attenuated emotionality of L2 processing. PMID:27847485

  8. Prevailing theories of consciousness are challenged by novel cross-modal associations acquired between subliminal stimuli.

    PubMed

    Scott, Ryan B; Samaha, Jason; Chrisley, Ron; Dienes, Zoltan

    2018-06-01

    While theories of consciousness differ substantially, the 'conscious access hypothesis', which aligns consciousness with the global accessibility of information across cortical regions, is present in many of the prevailing frameworks. This account holds that consciousness is necessary to integrate information arising from independent functions such as the specialist processing required by different senses. We directly tested this account by evaluating the potential for associative learning between novel pairs of subliminal stimuli presented in different sensory modalities. First, pairs of subliminal stimuli were presented and then their association assessed by examining the ability of the first stimulus to prime classification of the second. In Experiments 1-4 the stimuli were word-pairs consisting of a male name preceding either a creative or uncreative profession. Participants were subliminally exposed to two name-profession pairs where one name was paired with a creative profession and the other an uncreative profession. A supraliminal task followed requiring the timed classification of one of those two professions. The target profession was preceded by either the name with which it had been subliminally paired (concordant) or the alternate name (discordant). Experiment 1 presented stimuli auditorily, Experiment 2 visually, and Experiment 3 presented names auditorily and professions visually. All three experiments revealed the same inverse priming effect with concordant test pairs associated with significantly slower classification judgements. Experiment 4 sought to establish if learning would be more efficient with supraliminal stimuli and found evidence that a different strategy is adopted when stimuli are consciously perceived. Finally, Experiment 5 replicated the unconscious cross-modal association achieved in Experiment 3 utilising non-linguistic stimuli. The results demonstrate the acquisition of novel cross-modal associations between stimuli which are not consciously perceived and thus challenge the global access hypothesis and those theories embracing it. Copyright © 2018. Published by Elsevier B.V.

  9. Cross-Modal Decoding of Neural Patterns Associated with Working Memory: Evidence for Attention-Based Accounts of Working Memory

    PubMed Central

    Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica

    2016-01-01

    Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high–low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. PMID:25146374
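
    The cross-task decoding logic described above (train a classifier on high- versus low-load patterns from one task, test it on the other) can be sketched as follows. This is an illustration with synthetic activation patterns and a generic linear classifier, not the authors' analysis pipeline:

      # Between-task decoding of load (high vs low) on synthetic "activation patterns"; illustrative only.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(2)
      n_trials, n_features = 80, 200
      shared_effect = rng.normal(0, 1, n_features)             # load effect assumed common to both tasks

      def simulate_task():
          y = np.repeat([0, 1], n_trials // 2)                 # 0 = low load, 1 = high load
          X = rng.normal(0, 1, (n_trials, n_features)) + 0.5 * np.outer(y, shared_effect)
          return X, y

      X_visual, y_visual = simulate_task()                     # "visual WM" patterns
      X_verbal, y_verbal = simulate_task()                     # "verbal WM" patterns

      clf = LogisticRegression(max_iter=1000).fit(X_visual, y_visual)   # train on visual WM load
      print("between-task accuracy:", accuracy_score(y_verbal, clf.predict(X_verbal)))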

  10. Large-scale Cross-modality Search via Collective Matrix Factorization Hashing.

    PubMed

    Ding, Guiguang; Guo, Yuchen; Zhou, Jile; Gao, Yue

    2016-09-08

    By transforming data into binary representations, i.e., Hashing, we can perform high-speed search with low storage cost, and Hashing has therefore attracted increasing research interest in recent years. How to generate Hashcodes for multimodal data (e.g., images with textual tags, documents with photos, etc.) for large-scale cross-modality search (e.g., retrieving semantically related images from a database for a document query) has become an important research issue because of the fast growth of multimodal data on the Web. To address this issue, a novel framework for multimodal Hashing is proposed, termed Collective Matrix Factorization Hashing (CMFH). The key idea of CMFH is to learn unified Hashcodes for the different modalities of one multimodal instance in a shared latent semantic space in which the modalities can be effectively connected; accurate cross-modality search is thereby supported. The general framework is extended to an unsupervised scenario, in which it preserves the Euclidean structure of the data, and to a supervised scenario, in which it fully exploits the label information. The corresponding theoretical analysis and optimization algorithms are given. We conducted comprehensive experiments on three benchmark datasets for cross-modality search. The results demonstrate that CMFH significantly outperforms several state-of-the-art cross-modality Hashing methods, which validates its effectiveness.
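
    The collective-factorization idea at the heart of CMFH (a latent code shared across modalities, subsequently binarized into hash codes) can be sketched with a few lines of alternating least squares. The toy example below only illustrates that core idea; it is not the published algorithm, which additionally learns projections for out-of-sample queries and supervised variants:

      # Toy collective matrix factorization hashing: two modalities share one latent code V; illustrative only.
      import numpy as np

      rng = np.random.default_rng(3)
      n, d1, d2, k = 500, 64, 32, 16                       # samples, feature dims, code length
      V_true = rng.normal(size=(k, n))
      X1 = rng.normal(size=(d1, k)) @ V_true + 0.1 * rng.normal(size=(d1, n))  # e.g. image features
      X2 = rng.normal(size=(d2, k)) @ V_true + 0.1 * rng.normal(size=(d2, n))  # e.g. text features

      lam = 1e-2
      V = rng.normal(size=(k, n))
      for _ in range(30):                                  # alternating least squares
          U1 = X1 @ V.T @ np.linalg.inv(V @ V.T + lam * np.eye(k))
          U2 = X2 @ V.T @ np.linalg.inv(V @ V.T + lam * np.eye(k))
          V = np.linalg.solve(U1.T @ U1 + U2.T @ U2 + lam * np.eye(k),
                              U1.T @ X1 + U2.T @ X2)

      B = (V > np.median(V, axis=1, keepdims=True)).astype(np.uint8)  # unified binary hash codes
      print("hash code matrix:", B.shape)                  # (k, n): one k-bit code per instance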

  11. Compensating for age limits through emotional crossmodal integration

    PubMed Central

    Chaby, Laurence; Boullay, Viviane Luherne-du; Chetouani, Mohamed; Plaza, Monique

    2015-01-01

    Social interactions in daily life necessitate the integration of social signals from different sensory modalities. In the aging literature, it is well established that the recognition of emotion in facial expressions declines with advancing age, and this also occurs with vocal expressions. By contrast, crossmodal integration processing in healthy aging individuals is less documented. Here, we investigated the age-related effects on emotion recognition when faces and voices were presented alone or simultaneously, allowing for crossmodal integration. In this study, 31 young adults (M = 25.8 years) and 31 older adults (M = 67.2 years) were instructed to identify several basic emotions (happiness, sadness, anger, fear, disgust) and a neutral expression, which were displayed as visual (facial expressions), auditory (non-verbal affective vocalizations) or crossmodal (simultaneous, congruent facial and vocal affective expressions) stimuli. The results showed that older adults performed slower and worse than younger adults at recognizing negative emotions from isolated faces and voices. In the crossmodal condition, although slower, older adults were as accurate as younger except for anger. Importantly, additional analyses using the “race model” demonstrate that older adults benefited to the same extent as younger adults from the combination of facial and vocal emotional stimuli. These results help explain some conflicting results in the literature and may clarify emotional abilities related to daily life that are partially spared among older adults. PMID:26074845
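
    The "race model" analysis mentioned above is commonly Miller's race model inequality, which bounds the crossmodal response-time distribution by the sum of the unimodal ones. A minimal sketch with synthetic reaction times (not the study data), assuming that standard formulation, is given below:

      # Miller's race model inequality on synthetic reaction times; illustrative only.
      import numpy as np

      rng = np.random.default_rng(4)
      rt_face = rng.normal(650, 90, 200)      # unimodal visual (face) RTs, ms
      rt_voice = rng.normal(680, 90, 200)     # unimodal auditory (voice) RTs, ms
      rt_both = rng.normal(560, 80, 200)      # crossmodal (face + voice) RTs, ms

      t = np.linspace(300, 900, 61)           # probe time points
      cdf = lambda rts: np.searchsorted(np.sort(rts), t, side="right") / rts.size

      violation = cdf(rt_both) - np.minimum(cdf(rt_face) + cdf(rt_voice), 1.0)
      print("max violation of the race model bound:", violation.max())
      # Positive values mean crossmodal responses are faster than any race model allows,
      # i.e. evidence for genuine multisensory integration rather than statistical facilitation.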

  12. Value-driven attentional capture in the auditory domain.

    PubMed

    Anderson, Brian A

    2016-01-01

    It is now well established that the visual attention system is shaped by reward learning. When visual features are associated with a reward outcome, they acquire high priority and can automatically capture visual attention. To date, evidence for value-driven attentional capture has been limited entirely to the visual system. In the present study, I demonstrate that previously reward-associated sounds also capture attention, interfering more strongly with the performance of a visual task. This finding suggests that value-driven attention reflects a broad principle of information processing that can be extended to other sensory modalities and that value-driven attention can bias cross-modal stimulus competition.

  13. An ERP-study of brand and no-name products.

    PubMed

    Thomas, Anika; Hammer, Anke; Beibst, Gabriele; Münte, Thomas F

    2013-11-23

    Brands create product personalities that are thought to affect consumer decisions. Here we assessed, using the Go/No-go Association Task (GNAT) from social psychology, whether brands as opposed to no-name products are associated with implicit positive attitudes. Healthy young German participants viewed series of photos of cosmetics and food items (half of them brands) intermixed with positive and negative words. In any given run, one category of goods (e.g., cosmetics) and one kind of words (e.g., positive) had to be responded to, whereas responses had to be withheld for the other categories. Event-related brain potentials were recorded during the task. Unexpectedly, there were no response-time differences between congruent (brand and positive words) and incongruent (brand and negative words) pairings but ERPs showed differences as a function of congruency in the 600-750 ms time-window hinting at the existence of implicit attitudes towards brand and no-name stimuli. This finding deserves further investigation in future studies. Moreover, the amplitude of the late positive component (LPC) was found to be enhanced for brand as opposed to no-name stimuli. Congruency effects suggest that ERPs are sensitive to implicit attitudes. Moreover, the results for the LPC imply that pictures of brand products are more arousing than those of no-name products, which may ultimately contribute to consumer decisions.

  14. Reactive recruitment of attentional control in math anxiety: an ERP study of numeric conflict monitoring and adaptation.

    PubMed

    Suárez-Pellicioni, Macarena; Núñez-Peña, María Isabel; Colomé, Àngels

    2014-01-01

    This study uses event-related brain potentials (ERPs) to investigate the electrophysiological correlates of numeric conflict monitoring in math-anxious individuals, by analyzing whether math anxiety is related to abnormal processing in early conflict detection (as shown by the N450 component) and/or in a later, response-related stage of processing (as shown by the conflict sustained potential; Conflict-SP). Conflict adaptation effects were also studied by analyzing the effect of the previous trial's congruence on current interference. To this end, 17 low math-anxious (LMA) and 17 high math-anxious (HMA) individuals were presented with a numerical Stroop task. Groups were extreme in math anxiety but did not differ in trait or state anxiety or in simple math ability. The interference effect of the current trial (incongruent-congruent) and the interference effect preceded by congruence and by incongruity were analyzed both for behavioral measures and for ERPs. A greater interference effect was found for response times in the HMA group than in the LMA one. Regarding ERPs, the LMA group showed a greater N450 component for the interference effect preceded by congruence than when preceded by incongruity, while the HMA group showed greater Conflict-SP amplitude for the interference effect preceded by congruence than when preceded by incongruity. Our study showed that the electrophysiological correlates of numeric interference in HMA individuals comprise the absence of a conflict adaptation effect in the first stage of conflict processing (N450) and an abnormal subsequent up-regulation of cognitive control in order to overcome the conflict (Conflict-SP). More concretely, our study shows that math anxiety is related to a reactive and compensatory recruitment of control resources that is implemented only after previous exposure to stimuli presenting conflicting information.

  15. Reactive Recruitment of Attentional Control in Math Anxiety: An ERP Study of Numeric Conflict Monitoring and Adaptation

    PubMed Central

    Suárez-Pellicioni, Macarena; Núñez-Peña, María Isabel; Colomé, Àngels

    2014-01-01

    This study uses event-related brain potentials (ERPs) to investigate the electrophysiological correlates of numeric conflict monitoring in math-anxious individuals, by analyzing whether math anxiety is related to abnormal processing in early conflict detection (as shown by the N450 component) and/or in a later, response-related stage of processing (as shown by the conflict sustained potential; Conflict-SP). Conflict adaptation effects were also studied by analyzing the effect of the previous trial’s congruence on current interference. To this end, 17 low math-anxious (LMA) and 17 high math-anxious (HMA) individuals were presented with a numerical Stroop task. Groups were extreme in math anxiety but did not differ in trait or state anxiety or in simple math ability. The interference effect of the current trial (incongruent-congruent) and the interference effect preceded by congruence and by incongruity were analyzed both for behavioral measures and for ERPs. A greater interference effect was found for response times in the HMA group than in the LMA one. Regarding ERPs, the LMA group showed a greater N450 component for the interference effect preceded by congruence than when preceded by incongruity, while the HMA group showed greater Conflict-SP amplitude for the interference effect preceded by congruence than when preceded by incongruity. Our study showed that the electrophysiological correlates of numeric interference in HMA individuals comprise the absence of a conflict adaptation effect in the first stage of conflict processing (N450) and an abnormal subsequent up-regulation of cognitive control in order to overcome the conflict (Conflict-SP). More concretely, our study shows that math anxiety is related to a reactive and compensatory recruitment of control resources that is implemented only after previous exposure to stimuli presenting conflicting information. PMID:24918584

  16. A Non-Verbal Turing Test: Differentiating Mind from Machine in Gaze-Based Social Interaction

    PubMed Central

    Pfeiffer, Ulrich J.; Timmermans, Bert; Bente, Gary; Vogeley, Kai; Schilbach, Leonhard

    2011-01-01

    In social interaction, gaze behavior provides important signals that have a significant impact on our perception of others. Previous investigations, however, have relied on paradigms in which participants are passive observers of other persons’ gazes and do not adjust their gaze behavior as is the case in real-life social encounters. We used an interactive eye-tracking paradigm that allows participants to interact with an anthropomorphic virtual character whose gaze behavior is responsive to where the participant looks on the stimulus screen in real time. The character’s gaze reactions were systematically varied along a continuum from a maximal probability of gaze aversion to a maximal probability of gaze-following during brief interactions, thereby varying contingency and congruency of the reactions. We investigated how these variations influenced whether participants believed that the character was controlled by another person (i.e., a confederate) or a computer program. In a series of experiments, the human confederate was either introduced as naïve to the task, cooperative, or competitive. Results demonstrate that the ascription of humanness increases with higher congruency of gaze reactions when participants are interacting with a naïve partner. In contrast, humanness ascription is driven by the degree of contingency irrespective of congruency when the confederate was introduced as cooperative. Conversely, during interaction with a competitive confederate, judgments were neither based on congruency nor on contingency. These results offer important insights into what renders the experience of an interaction truly social: Humans appear to have a default expectation of reciprocation that can be influenced drastically by the presumed disposition of the interactor to either cooperate or compete. PMID:22096599

  17. Filling-in visual motion with sounds.

    PubMed

    Väljamäe, A; Soto-Faraco, S

    2008-10-01

    Information about the motion of objects can be extracted by multiple sensory modalities, and, as a consequence, object motion perception typically involves the integration of multi-sensory information. Often, in naturalistic settings, the flow of such information can be rather discontinuous (e.g. a cat racing through the furniture in a cluttered room is partly seen and partly heard). This study addressed audio-visual interactions in the perception of time-sampled object motion by measuring adaptation after-effects. We found significant auditory after-effects following adaptation to unisensory auditory and visual motion in depth, sampled at 12.5 Hz. The visually induced (cross-modal) auditory motion after-effect was eliminated if visual adaptors flashed at half of the rate (6.25 Hz). Remarkably, the addition of the high-rate acoustic flutter (12.5 Hz) to this ineffective, sparsely time-sampled, visual adaptor restored the auditory after-effect to a level comparable to what was seen with high-rate bimodal adaptors (flashes and beeps). Our results suggest that this auditory-induced reinstatement of the motion after-effect from the poor visual signals resulted from the occurrence of sound-induced illusory flashes. This effect was found to be dependent both on the directional congruency between modalities and on the rate of auditory flutter. The auditory filling-in of time-sampled visual motion supports the feasibility of using reduced frame rate visual content in multisensory broadcasting and virtual reality applications.

  18. Does evaluative learning rely on the perception of contingency?: manipulating contingency and US density during evaluative conditioning.

    PubMed

    Kattner, Florian; Ellermeier, Wolfgang

    2011-01-01

    An experiment is reported studying the impact of objective contingency and contingency judgments on cross-modal evaluative conditioning (EC). Both contingency judgments and evaluative responses were measured after a contingency learning task in which previously neutral sounds served as either weak or strong predictors of affective pictures. Experimental manipulations of contingency and US density were shown to affect contingency judgments. Stronger contingencies were perceived with high contingency and with low US density. The contingency learning task also produced a reliable EC effect. The magnitude of this effect was influenced by an interaction of statistical contingency and US density. Furthermore, the magnitude of EC was correlated with the subjective contingency judgments. Taken together, the results imply that propositional knowledge about the CS-US relationship, as reflected in contingency judgments, moderates evaluative learning. The data are discussed with respect to different accounts of EC.
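
    As a concrete reading of "objective contingency" in a design of this kind, the delta-P statistic, P(US|CS) - P(US|no CS), is often used. The small calculation below uses made-up trial counts purely for illustration; it is not taken from the experiment:

      # Delta-P contingency from a hypothetical 2 x 2 trial table; illustrative numbers only.
      cs_us, cs_no_us = 30, 10        # trials in which the sound was / was not followed by an affective picture
      nocs_us, nocs_no_us = 15, 25    # trials without the sound

      p_us_given_cs = cs_us / (cs_us + cs_no_us)
      p_us_given_nocs = nocs_us / (nocs_us + nocs_no_us)
      delta_p = p_us_given_cs - p_us_given_nocs
      print(f"P(US|CS) = {p_us_given_cs:.2f}, P(US|~CS) = {p_us_given_nocs:.2f}, dP = {delta_p:.2f}")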

  19. The functional neuroanatomy of multitasking: combining dual tasking with a short term memory task.

    PubMed

    Deprez, Sabine; Vandenbulcke, Mathieu; Peeters, Ron; Emsell, Louise; Amant, Frederic; Sunaert, Stefan

    2013-09-01

    Insight into the neural architecture of multitasking is crucial when investigating the pathophysiology of multitasking deficits in clinical populations. Presently, little is known about how the brain combines dual-tasking with a concurrent short-term memory task, despite the relevance of this mental operation in daily life and the frequency of complaints related to this process, in disease. In this study we aimed to examine how the brain responds when a memory task is added to dual-tasking. Thirty-three right-handed healthy volunteers (20 females, mean age 39.9 ± 5.8) were examined with functional brain imaging (fMRI). The paradigm consisted of two cross-modal single tasks (a visual and auditory temporal same-different task with short delay), a dual-task combining both single tasks simultaneously and a multi-task condition, combining the dual-task with an additional short-term memory task (temporal same-different visual task with long delay). Dual-tasking compared to both individual visual and auditory single tasks activated a predominantly right-sided fronto-parietal network and the cerebellum. When adding the additional short-term memory task, a larger and more bilateral frontoparietal network was recruited. We found enhanced activity during multitasking in components of the network that were already involved in dual-tasking, suggesting increased working memory demands, as well as recruitment of multitask-specific components including areas that are likely to be involved in online holding of visual stimuli in short-term memory such as occipito-temporal cortex. These results confirm concurrent neural processing of a visual short-term memory task during dual-tasking and provide evidence for an effective fMRI multitasking paradigm. © 2013 Elsevier Ltd. All rights reserved.

  20. Factors Associated with Congruence Between Preferred and Actual Place of Death

    PubMed Central

    Bell, Christina L.; Somogyi-Zalud, Emese; Masaki, Kamal H.

    2009-01-01

    Congruence between preferred and actual place of death may be an essential component in terminal care. Most patients prefer a home death, but many patients do not die in their preferred location. Specialized (physician, hospice and palliative) home care visits may increase home deaths, but factors associated with congruence have not been systematically reviewed. This study sought to review the extent of congruence reported in the literature, and examine factors that may influence congruence. In July 2009, a comprehensive literature search was performed using MEDLINE, Psych Info, CINAHL, and Web of Science. Reference lists, related articles, and the past five years of six palliative care journals were also searched. Overall congruence rates (percentage of met preferences for all locations of death) were calculated for each study using reported data to allow cross-study comparison. Eighteen articles described 30% to 91% congruence. Eight specialized home care studies reported 59% to 91% congruence. A physician-led home care program reported 91% congruence. Of the 10 studies without specialized home care for all patients, seven reported 56% to 71% congruence and most reported unique care programs. Of the remaining three studies without specialized home care for all patients, two reported 43% to 46% congruence among hospital inpatients, and one elicited patient preference “if everything were possible,” with 30% congruence. Physician support, hospice enrollment, and family support improved congruence in multiple studies. Research in this important area must consider potential sources of bias, the method of eliciting patient preference, and the absence of a single ideal place of death. PMID:20116205

  1. Factors associated with congruence between preferred and actual place of death.

    PubMed

    Bell, Christina L; Somogyi-Zalud, Emese; Masaki, Kamal H

    2010-03-01

    Congruence between preferred and actual place of death may be an essential component in terminal care. Most patients prefer a home death, but many patients do not die in their preferred location. Specialized (physician, hospice, and palliative) home care visits may increase home deaths, but factors associated with congruence have not been systematically reviewed. This study sought to review the extent of congruence reported in the literature and examine factors that may influence congruence. In July 2009, a comprehensive literature search was performed using MEDLINE, PsychInfo, CINAHL, and Web of Science. Reference lists, related articles, and the past five years of six palliative care journals were also searched. Overall congruence rates (percentage of met preferences for all locations of death) were calculated for each study using reported data to allow cross-study comparison. Eighteen articles described 30%-91% congruence. Eight specialized home care studies reported 59%-91% congruence. A physician-led home care program reported 91% congruence. Of the 10 studies without specialized home care for all patients, seven reported 56%-71% congruence and most reported unique care programs. Of the remaining three studies without specialized home care for all patients, two reported 43%-46% congruence among hospital inpatients, and one elicited patient preference "if everything were possible," with 30% congruence. Physician support, hospice enrollment, and family support improved congruence in multiple studies. Research in this important area must consider potential sources of bias, the method of eliciting patient preference, and the absence of a single ideal place of death. (c) 2010 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.

  2. Vague Congruences and Quotient Lattice Implication Algebras

    PubMed Central

    Qin, Xiaoyan; Xu, Yang

    2014-01-01

    The aim of this paper is to further develop the congruence theory on lattice implication algebras. Firstly, we introduce the notions of vague similarity relations based on vague relations and vague congruence relations. Secondly, the equivalent characterizations of vague congruence relations are investigated. Thirdly, the relation between the set of vague filters and the set of vague congruences is studied. Finally, we construct a new lattice implication algebra induced by a vague congruence, and the homomorphism theorem is given. PMID:25133207

  3. Cross-Modal Interactions in the Experience of Musical Performances: Physiological Correlates

    ERIC Educational Resources Information Center

    Chapados, Catherine; Levitin, Daniel J.

    2008-01-01

    This experiment was conducted to investigate cross-modal interactions in the emotional experience of music listeners. Previous research showed that visual information present in a musical performance is rich in expressive content, and moderates the subjective emotional experience of a participant listening and/or observing musical stimuli [Vines,…

  4. The Function of Consciousness in Multisensory Integration

    ERIC Educational Resources Information Center

    Palmer, Terry D.; Ramsey, Ashley K.

    2012-01-01

    The function of consciousness was explored in two contexts of audio-visual speech, cross-modal visual attention guidance and McGurk cross-modal integration. Experiments 1, 2, and 3 utilized a novel cueing paradigm in which two different flash suppressed lip-streams cooccured with speech sounds matching one of these streams. A visual target was…

  5. Plasticity of Ability to Form Cross-Modal Representations in Infant Japanese Macaques

    ERIC Educational Resources Information Center

    Adachi, Ikuma; Kuwahata, Hiroko; Fujita, Kazuo; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2009-01-01

    In a previous study, Adachi, Kuwahata, Fujita, Tomonaga & Matsuzawa demonstrated that infant Japanese macaques (Macaca fuscata) form cross-modal representations of conspecifics but not of humans. However, because the subjects in the experiment were raised in a large social group and had considerably less exposure to humans than to…

  6. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    PubMed

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. Cross-modality Sharpening of Visual Cortical Processing through Layer 1-Mediated Inhibition and Disinhibition

    PubMed Central

    Ibrahim, Leena A.; Mesik, Lukas; Ji, Xu-ying; Fang, Qi; Li, Hai-fu; Li, Ya-tang; Zingg, Brian; Zhang, Li I.; Tao, Huizhong Whit

    2016-01-01

    Summary Cross-modality interaction in sensory perception is advantageous for animals’ survival. How cortical sensory processing is cross-modally modulated and what are the underlying neural circuits remain poorly understood. In mouse primary visual cortex (V1), we discovered that orientation selectivity of layer (L)2/3 but not L4 excitatory neurons was sharpened in the presence of sound or optogenetic activation of projections from primary auditory cortex (A1) to V1. The effect was manifested by decreased average visual responses yet increased responses at the preferred orientation. It was more pronounced at lower visual contrast, and was diminished by suppressing L1 activity. L1 neurons were strongly innervated by A1-V1 axons and excited by sound, while visual responses of L2/3 vasoactive intestinal peptide (VIP) neurons were suppressed by sound, both preferentially at the cell's preferred orientation. These results suggest that the cross-modality modulation is achieved primarily through L1 neuron and L2/3 VIP-cell mediated inhibitory and disinhibitory circuits. PMID:26898778

  8. Effects of visual working memory on brain information processing of irrelevant auditory stimuli.

    PubMed

    Qu, Jiagui; Rizak, Joshua D; Zhao, Lun; Li, Minghong; Ma, Yuanye

    2014-01-01

    Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has been rarely investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate the selective attention to auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.

  9. Speaker's voice as a memory cue.

    PubMed

    Campeanu, Sandra; Craik, Fergus I M; Alain, Claude

    2015-02-01

    Speaker's voice occupies a central role as the cornerstone of auditory social interaction. Here, we review the evidence suggesting that speaker's voice constitutes an integral context cue in auditory memory. Investigation into the nature of voice representation as a memory cue is essential to understanding auditory memory and the neural correlates which underlie it. Evidence from behavioral and electrophysiological studies suggest that while specific voice reinstatement (i.e., same speaker) often appears to facilitate word memory even without attention to voice at study, the presence of a partial benefit of similar voices between study and test is less clear. In terms of explicit memory experiments utilizing unfamiliar voices, encoding methods appear to play a pivotal role. Voice congruency effects have been found when voice is specifically attended at study (i.e., when relatively shallow, perceptual encoding takes place). These behavioral findings coincide with neural indices of memory performance such as the parietal old/new recollection effect and the late right frontal effect. The former distinguishes between correctly identified old words and correctly identified new words, and reflects voice congruency only when voice is attended at study. Characterization of the latter likely depends upon voice memory, rather than word memory. There is also evidence to suggest that voice effects can be found in implicit memory paradigms. However, the presence of voice effects appears to depend greatly on the task employed. Using a word identification task, perceptual similarity between study and test conditions is, like for explicit memory tests, crucial. In addition, the type of noise employed appears to have a differential effect. While voice effects have been observed when white noise is used at both study and test, using multi-talker babble does not confer the same results. In terms of neuroimaging research modulations, characterization of an implicit memory effect reflective of voice congruency is currently lacking. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Reduced frontal theta oscillations indicate altered crossmodal prediction error processing in schizophrenia

    PubMed Central

    Keil, Julian; Balz, Johanna; Gallinat, Jürgen; Senkowski, Daniel

    2016-01-01

    Our brain generates predictions about forthcoming stimuli and compares predicted with incoming input. Failures in predicting events might contribute to hallucinations and delusions in schizophrenia (SZ). When a stimulus violates prediction, neural activity that reflects prediction error (PE) processing is found. While PE processing deficits have been reported in unisensory paradigms, it is unknown whether SZ patients (SZP) show altered crossmodal PE processing. We measured high-density electroencephalography and applied source estimation approaches to investigate crossmodal PE processing generated by audiovisual speech. In SZP and healthy control participants (HC), we used an established paradigm in which high- and low-predictive visual syllables were paired with congruent or incongruent auditory syllables. We examined crossmodal PE processing in SZP and HC by comparing differences in event-related potentials and neural oscillations between incongruent and congruent high- and low-predictive audiovisual syllables. In both groups event-related potentials between 206 and 250 ms were larger in high- compared with low-predictive syllables, suggesting intact audiovisual incongruence detection in the auditory cortex of SZP. The analysis of oscillatory responses revealed theta-band (4–7 Hz) power enhancement in high- compared with low-predictive syllables between 230 and 370 ms in the frontal cortex of HC but not SZP. Thus aberrant frontal theta-band oscillations reflect crossmodal PE processing deficits in SZ. The present study suggests a top-down multisensory processing deficit and highlights the role of dysfunctional frontal oscillations for the SZ psychopathology. PMID:27358314

  11. Congruence Reconsidered.

    ERIC Educational Resources Information Center

    Tudor, Keith; Worrall, Mike

    1994-01-01

    Discusses Carl Rogers' definitions of congruence, and identifies four specific requirements for the concept and practice of therapeutic congruence. Examines the interface between congruence and the other necessary and sufficient conditions of change, drawing on examples from practice. (JPS)

  12. Working memory capacity and task goals modulate error-related ERPs.

    PubMed

    Coleman, James R; Watson, Jason M; Strayer, David L

    2018-03-01

    The present study investigated individual differences in information processing following errant behavior. Participants were initially classified as high or as low working memory capacity using the Operation Span Task. In a subsequent session, they then performed a high congruency version of the flanker task under both speed and accuracy stress. We recorded ERPs and behavioral measures of accuracy and response time in the flanker task with a primary focus on processing following an error. The error-related negativity was larger for the high working memory capacity group than for the low working memory capacity group. The positivity following an error (Pe) was modulated to a greater extent by speed-accuracy instruction for the high working memory capacity group than for the low working memory capacity group. These data help to explicate the neural bases of individual differences in working memory capacity and cognitive control. © 2017 Society for Psychophysiological Research.

  13. Negative Affect, Decision Making, and Attentional Networks.

    PubMed

    Ortega, Ana Raquel; Ramírez, Encarnación; Colmenero, José María; García-Viedma, Ma Del Rosario

    2017-02-01

    This study focuses on whether risk avoidance in decision making depends on negative affect or is specific to anxious individuals. The Balloon Analogue Risk Task was used to obtain an objective measure in a risk situation with anxious, depressive, and control individuals. The role of attentional networks was also studied using the Attentional Network Test-Interaction (ANT-I) task with neutral stimuli. A significant difference was observed between anxious and depressive individuals in the risk assumed during decision making. We found no differences between anxious and normal individuals in the alert, orientation, and congruency effects obtained in the ANT-I task. The results showed that there was no significant relationship between risk avoidance and the indexes of alertness, orienting, and control. Future research should determine whether emotionally relevant stimulation leads to an attentional control deficit or whether differences between anxious and non-anxious individuals are due to the type of strategy followed in choice tasks.

  14. Exploring relations between task conflict and informational conflict in the Stroop task.

    PubMed

    Entel, Olga; Tzelgov, Joseph; Bereby-Meyer, Yoella; Shahar, Nitzan

    2015-11-01

    In this study, we tested the proposal that the Stroop task involves two conflicts--task conflict and informational conflict. Task conflict was defined as the latency difference between color words and non-letter neutrals, and manipulated by varying the proportion of color words versus non-letter neutrals. Informational conflict was defined as the latency difference between incongruent and congruent trials and manipulated by varying the congruent-to-incongruent trial ratio. We replicated previous findings showing that increasing the ratio of incongruent-to-congruent trials reduces the latency difference between the incongruent and congruent condition (i.e., informational conflict), as does increasing the proportion of color words (i.e., task conflict). A significant under-additive interaction between the two proportion manipulations (congruent vs. incongruent and color words vs. neutrals) indicated that the effects of task conflict and informational conflict were not additive. By assessing task conflict as the contrast between color words and neutrals, we found that task conflict existed in all of our experimental conditions. Under specific conditions, when task conflict dominated behavior by explaining most of the variability between congruency conditions, we also found negative facilitation, thus demonstrating that this effect is a special case of task conflict.
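
    The two conflict measures defined above are simple latency contrasts, which the sketch below computes from synthetic trial-level reaction times (illustrative values only; the condition labels follow the definitions in the abstract):

      # Task conflict and informational conflict as latency differences; synthetic RTs, illustrative only.
      import numpy as np

      rng = np.random.default_rng(5)
      rt = {
          "neutral": rng.normal(600, 60, 100),        # non-letter neutral trials
          "congruent": rng.normal(620, 60, 100),      # colour word, congruent ink
          "incongruent": rng.normal(700, 60, 100),    # colour word, incongruent ink
      }

      colour_word_mean = np.concatenate([rt["congruent"], rt["incongruent"]]).mean()
      task_conflict = colour_word_mean - rt["neutral"].mean()               # colour words vs neutrals
      informational_conflict = rt["incongruent"].mean() - rt["congruent"].mean()
      print(f"task conflict = {task_conflict:.0f} ms, informational conflict = {informational_conflict:.0f} ms")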

  15. The heterogeneous world of congruency sequence effects: an update.

    PubMed

    Duthoo, Wout; Abrahamse, Elger L; Braem, Senne; Boehler, Carsten N; Notebaert, Wim

    2014-01-01

    Congruency sequence effects (CSEs) refer to the observation that congruency effects in conflict tasks are typically smaller following incongruent compared to following congruent trials. This measure has long been thought to provide a unique window into top-down attentional adjustments and their underlying brain mechanisms. According to the renowned conflict monitoring theory, CSEs reflect enhanced selective attention following conflict detection. Still, alternative accounts suggested that bottom-up associative learning suffices to explain the pattern of reaction times and error rates. A couple of years ago, a review by Egner (2007) pitted these two rivalry accounts against each other, concluding that both conflict adaptation and feature integration contribute to the CSE. Since then, a wealth of studies has further debated this issue, and two additional accounts have been proposed, offering intriguing alternative explanations. Contingency learning accounts put forward that predictive relationships between stimuli and responses drive the CSE, whereas the repetition expectancy hypothesis suggests that top-down, expectancy-driven control adjustments affect the CSE. In the present paper, we build further on the previous review (Egner, 2007) by summarizing and integrating recent behavioral and neurophysiological studies on the CSE. In doing so, we evaluate the relative contribution and theoretical value of the different attentional and memory-based accounts. Moreover, we review how all of these influences can be experimentally isolated, and discuss designs and procedures that can critically judge between them.

  16. The heterogeneous world of congruency sequence effects: an update

    PubMed Central

    Duthoo, Wout; Abrahamse, Elger L.; Braem, Senne; Boehler, Carsten N.; Notebaert, Wim

    2014-01-01

    Congruency sequence effects (CSEs) refer to the observation that congruency effects in conflict tasks are typically smaller following incongruent compared to following congruent trials. This measure has long been thought to provide a unique window into top-down attentional adjustments and their underlying brain mechanisms. According to the renowned conflict monitoring theory, CSEs reflect enhanced selective attention following conflict detection. Still, alternative accounts suggested that bottom-up associative learning suffices to explain the pattern of reaction times and error rates. A couple of years ago, a review by Egner (2007) pitted these two rivalry accounts against each other, concluding that both conflict adaptation and feature integration contribute to the CSE. Since then, a wealth of studies has further debated this issue, and two additional accounts have been proposed, offering intriguing alternative explanations. Contingency learning accounts put forward that predictive relationships between stimuli and responses drive the CSE, whereas the repetition expectancy hypothesis suggests that top-down, expectancy-driven control adjustments affect the CSE. In the present paper, we build further on the previous review (Egner, 2007) by summarizing and integrating recent behavioral and neurophysiological studies on the CSE. In doing so, we evaluate the relative contribution and theoretical value of the different attentional and memory-based accounts. Moreover, we review how all of these influences can be experimentally isolated, and discuss designs and procedures that can critically judge between them. PMID:25250005

  17. Neural correlates of virtual route recognition in congenital blindness.

    PubMed

    Kupers, Ron; Chebat, Daniel R; Madsen, Kristoffer H; Paulson, Olaf B; Ptito, Maurice

    2010-07-13

    Despite the importance of vision for spatial navigation, blind subjects retain the ability to represent spatial information and to move independently in space to localize and reach targets. However, the neural correlates of navigation in subjects lacking vision remain elusive. We therefore used functional MRI (fMRI) to explore the cortical network underlying successful navigation in blind subjects. We first trained congenitally blind and blindfolded sighted control subjects to perform a virtual navigation task with the tongue display unit (TDU), a tactile-to-vision sensory substitution device that translates a visual image into electrotactile stimulation applied to the tongue. After training, participants repeated the navigation task during fMRI. Although both groups successfully learned to use the TDU in the virtual navigation task, the brain activation patterns showed substantial differences. Blind but not blindfolded sighted control subjects activated the parahippocampus and visual cortex during navigation, areas that are recruited during topographical learning and spatial representation in sighted subjects. When the navigation task was performed under full vision in a second group of sighted participants, the activation pattern strongly resembled the one obtained in the blind when using the TDU. This suggests that in the absence of vision, cross-modal plasticity permits the recruitment of the same cortical network used for spatial navigation tasks in sighted subjects.

  18. Effects of Frequency Separation and Diotic/Dichotic Presentations on the Alternation Frequency Limits in Audition Derived from a Temporal Phase Discrimination Task.

    PubMed

    Kanaya, Shoko; Fujisaki, Waka; Nishida, Shin'ya; Furukawa, Shigeto; Yokosawa, Kazuhiko

    2015-02-01

    Temporal phase discrimination is a useful psychophysical task to evaluate how sensory signals, synchronously detected in parallel, are perceptually bound by human observers. In this task two stimulus sequences synchronously alternate between two states (say, A-B-A-B and X-Y-X-Y) in either of two temporal phases (ie A and B are respectively paired with X and Y, or vice versa). The critical alternation frequency beyond which participants cannot discriminate the temporal phase is measured as an index characterizing the temporal property of the underlying binding process. This task has been used to reveal the mechanisms underlying visual and cross-modal bindings. To directly compare these binding mechanisms with those in another modality, this study used the temporal phase discrimination task to reveal the processes underlying auditory bindings. The two sequences were alternations between two pitches. We manipulated the distance between the two sequences by changing intersequence frequency separation, or presentation ears (diotic vs dichotic). Results showed that the alternation frequency limit ranged from 7 to 30 Hz, becoming higher as the intersequence distance decreased, as is the case with vision. However, unlike vision, auditory phase discrimination limits were higher and more variable across participants. © 2015 SAGE Publications.

  19. Disorders of representation and control in semantic cognition: Effects of familiarity, typicality, and specificity

    PubMed Central

    Rogers, Timothy T.; Patterson, Karalyn; Jefferies, Elizabeth; Lambon Ralph, Matthew A.

    2015-01-01

    We present a case-series comparison of patients with cross-modal semantic impairments consequent on either (a) bilateral anterior temporal lobe atrophy in semantic dementia (SD) or (b) left-hemisphere fronto-parietal and/or posterior temporal stroke in semantic aphasia (SA). Both groups were assessed on a new test battery designed to measure how performance is influenced by concept familiarity, typicality and specificity. In line with previous findings, performance in SD was strongly modulated by all of these factors, with better performance for more familiar items (regardless of typicality), for more typical items (regardless of familiarity) and for tasks that did not require very specific classification, consistent with the gradual degradation of conceptual knowledge in SD. The SA group showed significant impairments on all tasks but their sensitivity to familiarity, typicality and specificity was more variable and governed by task-specific effects of these factors on controlled semantic processing. The results are discussed with reference to theories about the complementary roles of representation and manipulation of semantic knowledge. PMID:25934635

  20. The effect of sleep deprivation on BOLD activity elicited by a divided attention task.

    PubMed

    Jackson, Melinda L; Hughes, Matthew E; Croft, Rodney J; Howard, Mark E; Crewther, David; Kennedy, Gerard A; Owens, Katherine; Pierce, Rob J; O'Donoghue, Fergal J; Johnston, Patrick

    2011-06-01

    Sleep loss, widespread in today's society and associated with a number of clinical conditions, has a detrimental effect on a variety of cognitive domains including attention. This study examined the sequelae of sleep deprivation upon BOLD fMRI activation during divided attention. Twelve healthy males completed two randomized sessions; one after 27 h of sleep deprivation and one after a normal night of sleep. During each session, BOLD fMRI was measured while subjects completed a cross-modal divided attention task (visual and auditory). After normal sleep, increased BOLD activation was observed bilaterally in the superior frontal gyrus and the inferior parietal lobe during divided attention performance. Subjects reported feeling significantly more sleepy in the sleep deprivation session, and there was a trend towards poorer divided attention task performance. Sleep deprivation led to a down regulation of activation in the left superior frontal gyrus, possibly reflecting an attenuation of top-down control mechanisms on the attentional system. These findings have implications for understanding the neural correlates of divided attention and the neurofunctional changes that occur in individuals who are sleep deprived.

  1. Unintentional Activation of Translation Equivalents in Bilinguals Leads to Attention Capture in a Cross-Modal Visual Task

    PubMed Central

    Singh, Niharika; Mishra, Ramesh Kumar

    2015-01-01

    Using a variant of the visual world eye tracking paradigm, we examined whether language non-selective activation of translation equivalents leads to attention capture and distraction in a visual task in bilinguals. High- and low-proficiency Hindi-English bilinguals were instructed to programme a saccade towards a line drawing which changed colour among other distractor objects. A spoken word, irrelevant to the main task, was presented before the colour change. On critical trials, one of the line drawings had a name phonologically related to the translation equivalent of the spoken word. Results showed that saccade latency towards the target was significantly higher in the presence of this cross-linguistic translation competitor compared to when the display contained completely unrelated objects. Participants were also slower when the display contained the referent of the spoken word among the distractors. However, the bilingual groups did not differ with regard to the interference effect observed. These findings suggest that spoken words activate their translation equivalents, which bias attention and lead to interference with goal-directed action in the visual domain. PMID:25775184

  2. Cross-Modal Correspondences Enhance Performance on a Colour-to-Sound Sensory Substitution Device.

    PubMed

    Hamilton-Fletcher, Giles; Wright, Thomas D; Ward, Jamie

    Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have tended to avoid colour and, when they do encode it, have assigned sounds to colours in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users whose device was coded either in line with or opposite to sound–colour correspondences. Users given the correspondence-based mappings showed improved colour memory and made fewer colour errors. Interestingly, the colour–sound mappings that provided the highest improvements during the associative memory task also saw the greatest gains for recognising realistic objects that featured these colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with respect to the roles of both colour and correspondences in sensory substitution.
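
    The abstract does not spell out the Creole's colour-to-sound algorithm, but the flavour of a correspondence-based mapping can be illustrated with one frequently reported correspondence, lighter colours to higher pitches. The mapping below is hypothetical and purely illustrative; it is not the device's algorithm:

      # Hypothetical correspondence-based colour-to-sound mapping (lighter -> higher pitch); not the Creole algorithm.
      import colorsys

      def colour_to_pitch(r, g, b, low_hz=110.0, high_hz=880.0):
          # Map RGB (0-1) to a pure-tone frequency using lightness, following the luminance-pitch correspondence.
          h, lightness, s = colorsys.rgb_to_hls(r, g, b)
          return low_hz * (high_hz / low_hz) ** lightness   # log-spaced: dark -> low, light -> high

      for name, rgb in [("dark red", (0.3, 0.0, 0.0)), ("mid green", (0.0, 0.6, 0.0)), ("pale yellow", (1.0, 1.0, 0.7))]:
          print(f"{name}: {colour_to_pitch(*rgb):.0f} Hz")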

  3. Sound Symbolism in Infancy: Evidence for Sound-Shape Cross-Modal Correspondences in 4-Month-Olds

    ERIC Educational Resources Information Center

    Ozturk, Ozge; Krehm, Madelaine; Vouloumanos, Athena

    2013-01-01

    Perceptual experiences in one modality are often dependent on activity from other sensory modalities. These cross-modal correspondences are also evident in language. Adults and toddlers spontaneously and consistently map particular words (e.g., "kiki") to particular shapes (e.g., angular shapes). However, the origins of these systematic mappings…

  4. Processing Sentences with Literal versus Figurative Use of Verbs: An ERP Study with Children with Language Impairments, Nonverbal Impairments, and Typical Development

    PubMed Central

    Lorusso, Maria Luisa; Burigo, Michele; Borsa, Virginia; Molteni, Massimo

    2015-01-01

    Forty native Italian children (age 6–15) performed a sentence plausibility judgment task. ERP recordings were available for 12 children with specific language impairment (SLI), 11 children with nonverbal learning disabilities (NVLD), and 13 control children. Participants listened to verb-object combinations and judged them as acceptable or unacceptable. Stimuli belonged to four conditions, where concreteness and congruency were manipulated. All groups made more errors responding to abstract and to congruent sentences. Moreover, SLI participants performed worse than NVLD participants with abstract sentences. ERPs were analyzed in the time window 300–500 ms. SLI children show atypical, reversed effects of concreteness and congruence as compared to control and NVLD children, respectively. The results suggest that linguistic impairments disrupt abstract language processing more than visual-motor impairments. Moreover, ROI and SPM analyses of ERPs point to a predominant involvement of the left rather than the right hemisphere in the comprehension of figurative expressions. PMID:26246693

  5. Short-term plasticity in auditory cognition.

    PubMed

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  6. Sex differences in functional activation patterns revealed by increased emotion processing demands.

    PubMed

    Hall, Geoffrey B C; Witelson, Sandra F; Szechtman, Henry; Nahmias, Claude

    2004-02-09

    Two [(15)O] PET studies assessed sex differences in regional brain activation during the recognition of emotional stimuli. Study I revealed that the recognition of emotion in visual faces resulted in bilateral frontal activation in women, and unilateral right-sided activation in men. In Study II, the complexity of the emotional face task was increased through the addition of associated auditory emotional stimuli. Men again showed unilateral frontal activation, in this case to the left, whereas women did not show bilateral frontal activation, but showed greater limbic activity. These results suggest that when processing broader cross-modal emotional stimuli, men engage more in associative cognitive strategies while women draw more on primary emotional references.

  7. Cross-Modal Decoding of Neural Patterns Associated with Working Memory: Evidence for Attention-Based Accounts of Working Memory.

    PubMed

    Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica

    2016-01-01

    Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high-low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
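
    The between-task prediction logic described above can be illustrated with a small, purely synthetic sketch (this is not the authors' pipeline; data, dimensions and the classifier choice are assumptions): a linear classifier is trained on high- versus low-load patterns from one task and tested on patterns from the other.

    ```python
    # Synthetic sketch of between-task load decoding: train on "visual WM" patterns,
    # test on "verbal WM" patterns that share a common load-related signal axis.
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 80, 200
    load_axis = rng.normal(size=n_voxels)          # load signal shared across tasks

    def simulate(task_noise):
        labels = np.repeat([0, 1], n_trials // 2)  # 0 = low load, 1 = high load
        data = rng.normal(size=(n_trials, n_voxels)) * task_noise
        data += np.outer(labels, load_axis)        # add the shared load signal
        return data, labels

    X_visual, y_visual = simulate(task_noise=1.0)
    X_verbal, y_verbal = simulate(task_noise=1.2)

    clf = LinearSVC().fit(X_visual, y_visual)      # train on visual WM load
    print("between-task accuracy:",
          accuracy_score(y_verbal, clf.predict(X_verbal)))
    ```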

  8. Biases in rhythmic sensorimotor coordination: effects of modality and intentionality.

    PubMed

    Debats, Nienke B; Ridderikhoff, Arne; de Boer, Betteco J; Peper, C Lieke E

    2013-08-01

    Sensorimotor biases were examined for intentional (tracking task) and unintentional (distractor task) rhythmic coordination. The tracking task involved unimanual tracking of either an oscillating visual signal or the passive movements of the contralateral hand (proprioceptive signal). In both conditions the required coordination patterns (isodirectional and mirror-symmetric) were defined relative to the body midline and the hands were not visible. For proprioceptive tracking the two patterns did not differ in stability, whereas for visual tracking the isodirectional pattern was performed more stably than the mirror-symmetric pattern. However, when visual feedback about the unimanual hand movements was provided during visual tracking, the isodirectional pattern ceased to be dominant. Together these results indicated that the stability of the coordination patterns did not depend on the modality of the target signal per se, but on the combination of sensory signals that needed to be processed (unimodal vs. cross-modal). The distractor task entailed rhythmic unimanual movements during which a rhythmic visual or proprioceptive distractor signal had to be ignored. The observed biases were similar as for intentional coordination, suggesting that intentionality did not affect the underlying sensorimotor processes qualitatively. Intentional tracking was characterized by active sensory pursuit, through muscle activity in the passively moved arm (proprioceptive tracking task) and rhythmic eye movements (visual tracking task). Presumably this pursuit afforded predictive information serving the coordination process. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Map display design

    NASA Technical Reports Server (NTRS)

    Aretz, Anthony J.

    1990-01-01

    This paper presents a cognitive model of a pilot's navigation task and describes an experiment comparing a visual momentum map display to the traditional track-up and north-up approaches. The data show the advantage to a track-up map is its congruence with the ego-centered forward view; however, the development of survey knowledge is hindered by the inconsistency of the rotating display. The stable alignment of a north-up map aids the acquisition of survey knowledge, but there is a cost associated with the mental rotation of the display to a track-up alignment for ego-centered tasks. The results also show that visual momentum can be used to reduce the mental rotation costs of a north-up display.

  10. An ERP-study of brand and no-name products

    PubMed Central

    2013-01-01

    Background Brands create product personalities that are thought to affect consumer decisions. Here we assessed, using the Go/No-go Association Task (GNAT) from social psychology, whether brands as opposed to no-name products are associated with implicit positive attitudes. Healthy young German participants viewed series of photos of cosmetics and food items (half of them brands) intermixed with positive and negative words. In any given run, one category of goods (e.g., cosmetics) and one kind of words (e.g., positive) had to be responded to, whereas responses had to be withheld for the other categories. Event-related brain potentials were recorded during the task. Results Unexpectedly, there were no response-time differences between congruent (brand and positive words) and incongruent (brand and negative words) pairings but ERPs showed differences as a function of congruency in the 600–750 ms time-window hinting at the existence of implicit attitudes towards brand and no-name stimuli. This finding deserves further investigation in future studies. Moreover, the amplitude of the late positive component (LPC) was found to be enhanced for brand as opposed to no-name stimuli. Conclusions Congruency effects suggest that ERPs are sensitive to implicit attitudes. Moreover, the results for the LPC imply that pictures of brand products are more arousing than those of no-name products, which may ultimately contribute to consumer decisions. PMID:24267403

  11. Changes of the directional brain networks related with brain plasticity in patients with long-term unilateral sensorineural hearing loss.

    PubMed

    Zhang, G-Y; Yang, M; Liu, B; Huang, Z-C; Li, J; Chen, J-Y; Chen, H; Zhang, P-P; Liu, L-J; Wang, J; Teng, G-J

    2016-01-28

    Previous studies often report that early auditory deprivation or congenital deafness contributes to cross-modal reorganization in the auditory-deprived cortex, and this cross-modal reorganization limits the clinical benefit from cochlear prosthetics. However, there are inconsistencies among study results on cortical reorganization in subjects with long-term unilateral sensorineural hearing loss (USNHL). It is also unclear whether acquired monaural deafness produces cross-modal plasticity of the auditory cortex similar to that seen in early or congenital deafness. To address this issue, we constructed directional brain functional networks based on entropy connectivity of resting-state functional MRI and examined changes in these networks. Thirty-four long-term USNHL individuals and seventeen normally hearing individuals participated in the study, and all USNHL patients had acquired deafness. We found that certain brain regions of the sensorimotor and visual networks presented enhanced synchronous output entropy connectivity with the left primary auditory cortex in left long-term USNHL individuals as compared with normally hearing individuals. In particular, the left USNHL group showed more significant changes in entropy connectivity than the right USNHL group; no significant plastic changes were observed in the right USNHL group. Our results indicate that the left primary auditory cortex (the non-auditory-deprived cortex) in patients with left USNHL has been reorganized by visual and sensorimotor modalities through cross-modal plasticity. Furthermore, this cross-modal reorganization also alters the directional brain functional networks. Auditory deprivation from the left or right side thus has different influences on the human brain. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review

    PubMed Central

    Hidaka, Souta; Teramoto, Wataru; Sugita, Yoichi

    2015-01-01

    Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that multisensory information (vision, audition, tactile sensation, and so on) can perceptually interact with each other in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view regarding crossmodal interactions holds that vision is superior to audition in spatial processing, but audition is dominant over vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound could have a driving effect on visual motion perception. Moreover, studies regarding perceptual associative learning reported that, after association is established between a sound sequence without spatial information and visual motion information, the sound sequence could trigger visual motion perception. Other sensory information, such as motor action or smell, has also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns could be observed in several brain areas, including the motion processing areas, between spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information could mutually interact in spatiotemporal processing in the percept of the external world and that common perceptual and neural underlying mechanisms would exist for spatiotemporal processing. PMID:26733827

  13. Generalizing attentional control across dimensions and tasks: evidence from transfer of proportion-congruent effects.

    PubMed

    Wühr, Peter; Duthoo, Wout; Notebaert, Wim

    2015-01-01

    Three experiments investigated transfer of list-wide proportion congruent (LWPC) effects from a set of congruent and incongruent items with different frequency (inducer task) to a set of congruent and incongruent items with equal frequency (diagnostic task). Experiments 1 and 2 mixed items from horizontal and vertical Simon tasks. Tasks always involved different stimuli that varied on the same dimension (colour) in Experiment 1 and on different dimensions (colour, shape) in Experiment 2. Experiment 3 mixed trials from a manual Simon task with trials from a vocal Stroop task, with colour being the relevant stimulus in both tasks. There were two major results. First, we observed transfer of LWPC effects in Experiments 1 and 3, when tasks shared the relevant dimension, but not in Experiment 2. Second, sequential modulations of congruency effects transferred in Experiment 1 only. Hence, the different transfer patterns suggest that LWPC effects and sequential modulations arise from different mechanisms. Moreover, the observation of transfer supports an account of LWPC effects in terms of list-wide cognitive control, while being at odds with accounts in terms of stimulus-response (contingency) learning and item-specific control.

  14. Re Viewing Listening: "Clip Culture" and Cross-Modal Learning in the Music Classroom

    ERIC Educational Resources Information Center

    Webb, Michael

    2010-01-01

    This article envisions a new, cross-modal approach to classroom music listening, one that takes advantage of students' rising screen literacy and the ever-expanding archive of music-related visual material available on DVD and on video sharing sites such as YouTube. It is grounded in current literature on music performance studies, embodied music…

  15. Parallel pathways for cross-modal memory retrieval in Drosophila.

    PubMed

    Zhang, Xiaonan; Ren, Qingzhong; Guo, Aike

    2013-05-15

    Memory-retrieval processing of cross-modal sensory preconditioning is vital for understanding the plasticity underlying the interactions between modalities. As part of the sensory preconditioning paradigm, it has been hypothesized that the conditioned response to an unreinforced cue depends on the memory of the reinforced cue via a sensory link between the two cues. To test this hypothesis, we studied cross-modal memory-retrieval processing in a genetically tractable model organism, Drosophila melanogaster. By expressing the dominant temperature-sensitive shibire(ts1) (shi(ts1)) transgene, which blocks synaptic vesicle recycling of specific neural subsets with the Gal4/UAS system at the restrictive temperature, we specifically blocked visual and olfactory memory retrieval, either alone or in combination; memory acquisition remained intact for these modalities. Blocking the memory retrieval of the reinforced olfactory cues did not impair the conditioned response to the unreinforced visual cues or vice versa, in contrast to the canonical memory-retrieval processing of sensory preconditioning. In addition, these conditioned responses can be abolished by blocking the memory retrieval of the two modalities simultaneously. In sum, our results indicated that a conditioned response to an unreinforced cue in cross-modal sensory preconditioning can be recalled through parallel pathways.

  16. Generalization of cross-modal stimulus equivalence classes: operant processes as components in human category formation.

    PubMed Central

    Lane, S D; Clow, J K; Innis, A; Critchfield, T S

    1998-01-01

    This study employed a stimulus-class rating procedure to explore whether stimulus equivalence and stimulus generalization can combine to promote the formation of open-ended categories incorporating cross-modal stimuli. A pretest of simple auditory discrimination indicated that subjects (college students) could discriminate among a range of tones used in the main study. Before beginning the main study, 10 subjects learned to use a rating procedure for categorizing sets of stimuli as class consistent or class inconsistent. After completing conditional discrimination training with new stimuli (shapes and tones), the subjects demonstrated the formation of cross-modal equivalence classes. Subsequently, the class-inclusion rating procedure was reinstituted, this time with cross-modal sets of stimuli drawn from the equivalence classes. On some occasions, the tones of the equivalence classes were replaced by novel tones. The probability that these novel sets would be rated as class consistent was generally a function of the auditory distance between the novel tone and the tone that was explicitly included in the equivalence class. These data extend prior work on generalization of equivalence classes, and support the role of operant processes in human category formation. PMID:9821680

  17. Epistemological Belief Congruency in Mathematics between Vocational Technology Students and Their Instructors

    ERIC Educational Resources Information Center

    Schommer-Aikins, Marlene; Unruh, Susan; Morphew, Jason

    2015-01-01

    Three questions were addressed in this study. Is there evidence of epistemological beliefs congruency between students and their instructor? Do students' epistemological beliefs, students' epistemological congruence, or both predict mathematical anxiety? Do students' epistemological beliefs, students' epistemological congruence, or both predict…

  18. Multiplex congruence network of natural numbers.

    PubMed

    Yan, Xiao-Yong; Wang, Wen-Xu; Chen, Guan-Rong; Shi, Ding-Hua

    2016-03-31

    Congruence theory has many applications in physical, social, biological and technological systems. Congruence arithmetic has been a fundamental tool for data security and computer algebra. However, much less attention was devoted to the topological features of congruence relations among natural numbers. Here, we explore the congruence relations in the setting of a multiplex network and unveil some unique and outstanding properties of the multiplex congruence network. Analytical results show that every layer therein is a sparse and heterogeneous subnetwork with a scale-free topology. Counterintuitively, every layer has an extremely strong controllability in spite of its scale-free structure that is usually difficult to control. Another amazing feature is that the controllability is robust against targeted attacks to critical nodes but vulnerable to random failures, which also differs from ordinary scale-free networks. The multi-chain structure with a small number of chain roots arising from each layer accounts for the strong controllability and the abnormal feature. The multiplex congruence network offers a graphical solution to the simultaneous congruences problem, which may have implication in cryptography based on simultaneous congruences. Our work also gains insight into the design of networks integrating advantages of both heterogeneous and homogeneous networks without inheriting their limitations.
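
    The simultaneous congruences problem that the network is said to solve graphically is the classical Chinese Remainder Theorem setting; the sketch below shows the standard algebraic solution for pairwise coprime moduli (it illustrates the problem only, not the multiplex network construction used in the paper).

    ```python
    # Solve x ≡ r_i (mod m_i) for pairwise coprime moduli via the Chinese Remainder Theorem.
    from math import gcd

    def crt(residues, moduli):
        x, m = 0, 1
        for r_i, m_i in zip(residues, moduli):
            if gcd(m, m_i) != 1:
                raise ValueError("moduli must be pairwise coprime")
            # Solve x + m*t ≡ r_i (mod m_i) for t, then fold into the running solution.
            t = ((r_i - x) * pow(m, -1, m_i)) % m_i
            x, m = x + m * t, m * m_i
        return x % m

    # x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7)  ->  23
    print(crt([2, 3, 2], [3, 5, 7]))
    ```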

  19. Multiplex congruence network of natural numbers

    NASA Astrophysics Data System (ADS)

    Yan, Xiao-Yong; Wang, Wen-Xu; Chen, Guan-Rong; Shi, Ding-Hua

    2016-03-01

    Congruence theory has many applications in physical, social, biological and technological systems. Congruence arithmetic has been a fundamental tool for data security and computer algebra. However, much less attention was devoted to the topological features of congruence relations among natural numbers. Here, we explore the congruence relations in the setting of a multiplex network and unveil some unique and outstanding properties of the multiplex congruence network. Analytical results show that every layer therein is a sparse and heterogeneous subnetwork with a scale-free topology. Counterintuitively, every layer has an extremely strong controllability in spite of its scale-free structure that is usually difficult to control. Another amazing feature is that the controllability is robust against targeted attacks to critical nodes but vulnerable to random failures, which also differs from ordinary scale-free networks. The multi-chain structure with a small number of chain roots arising from each layer accounts for the strong controllability and the abnormal feature. The multiplex congruence network offers a graphical solution to the simultaneous congruences problem, which may have implication in cryptography based on simultaneous congruences. Our work also gains insight into the design of networks integrating advantages of both heterogeneous and homogeneous networks without inheriting their limitations.

  20. Effects of motor congruence on visual working memory.

    PubMed

    Quak, Michel; Pecher, Diane; Zeelenberg, Rene

    2014-10-01

    Grounded-cognition theories suggest that memory shares processing resources with perception and action. The motor system could be used to help memorize visual objects. In two experiments, we tested the hypothesis that people use motor affordances to maintain object representations in working memory. Participants performed a working memory task on photographs of manipulable and nonmanipulable objects. The manipulable objects were objects that required either a precision grip (i.e., small items) or a power grip (i.e., large items) to use. A concurrent motor task that could be congruent or incongruent with the manipulable objects caused no difference in working memory performance relative to nonmanipulable objects. Moreover, the precision- or power-grip motor task did not affect memory performance on small and large items differently. These findings suggest that the motor system plays no part in visual working memory.

  1. More attentional focusing through binaural beats: evidence from the global-local task.

    PubMed

    Colzato, Lorenza S; Barone, Hayley; Sellaro, Roberta; Hommel, Bernhard

    2017-01-01

    A recent study showed that binaural beats have an impact on the efficiency of allocating attention over time. We were interested to see whether this impact affects attentional focusing or, even further, the top-down control over irrelevant information. Healthy adults listened to gamma-frequency (40 Hz) binaural beats, which are assumed to increase attentional concentration, or a constant tone of 340 Hz (control condition) for 3 min before and during a global-local task. While the size of the congruency effect (indicating the failure to suppress task-irrelevant information) was unaffected by the binaural beats, the global-precedence effect (reflecting attentional focusing) was considerably smaller after gamma-frequency binaural beats than after the control condition. Our findings suggest that high-frequency binaural beats bias the individual attentional processing style towards a reduced spotlight of attention.
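
    For illustration, a gamma-frequency binaural beat can be synthesized by presenting two pure tones that differ by 40 Hz, one to each ear. The 340/380 Hz carrier pair below is an assumption chosen to echo the study's 340 Hz control tone; the abstract does not report the carrier frequencies actually used.

    ```python
    # Minimal stereo synthesis of a 40 Hz binaural beat (carrier frequencies assumed).
    import numpy as np

    fs = 44100                                   # sample rate (Hz)
    duration = 5.0                               # seconds (the study used ~3 minutes)
    t = np.arange(int(fs * duration)) / fs
    left = np.sin(2 * np.pi * 340 * t)           # left-ear carrier (assumed)
    right = np.sin(2 * np.pi * 380 * t)          # right-ear carrier = 340 + 40 Hz (assumed)
    stereo = np.stack([left, right], axis=1)     # one tone per ear; the 40 Hz beat arises centrally
    print(stereo.shape)
    ```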

  2. Body Build Satisfaction and the Congruency of Body Build Perceptions.

    ERIC Educational Resources Information Center

    Hankins, Norman E.; Bailey, Roger C.

    1979-01-01

    Females were administered the somatotype rating scale. Satisfied subjects showed greater congruency between their own and wished-for body build, and greater congruency between their own and friend/date body builds, but less congruency between their own body build and the female stereotype. (Author/BEF)

  3. The Limited Impact of Exposure Duration on Holistic Word Processing.

    PubMed

    Chen, Changming; Abbasi, Najam Ul Hasan; Song, Shuang; Chen, Jie; Li, Hong

    2016-01-01

    The current study explored the impact of stimulus exposure duration on holistic word processing as measured by the complete composite paradigm (CPc paradigm). The participants were asked to match the cued target parts of two characters which were presented for either a long (600 ms) or a short duration (170 ms). They were also tested with two popular versions of the CPc paradigm: the "early-fixed" task, where the attention cue was visible from the beginning of each trial at a fixed position, and the "delayed-random" task, where the cue showed up after the study character at random locations. The holistic word effect, as indexed by the alignment × congruency interaction, was identified in both tasks and was unaffected by stimulus duration. Moreover, the "delayed-random" task did not produce a larger holistic word effect than the "early-fixed" task. These results suggest that exposure duration (from around 150 to 600 ms) has a limited impact on the holistic word effect, and they have methodological implications for experiment designs in this field.
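
    As a toy illustration of the alignment × congruency interaction used as the holistic index above (all numbers are invented): the congruency effect should be larger when the two character parts are aligned than when they are misaligned.

    ```python
    # Toy 2x2 computation of the holistic effect from hypothetical proportion-correct data.
    perf = {  # proportion correct on the cued part
        ("aligned", "congruent"): 0.92, ("aligned", "incongruent"): 0.78,
        ("misaligned", "congruent"): 0.88, ("misaligned", "incongruent"): 0.85,
    }
    congruency_aligned = perf[("aligned", "congruent")] - perf[("aligned", "incongruent")]
    congruency_misaligned = perf[("misaligned", "congruent")] - perf[("misaligned", "incongruent")]
    holistic_effect = congruency_aligned - congruency_misaligned
    print(round(holistic_effect, 2))   # 0.11: larger congruency effect when parts are aligned
    ```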

  4. Generalized lessons about sequence learning from the study of the serial reaction time task

    PubMed Central

    Schwarb, Hillary; Schumacher, Eric H.

    2012-01-01

    Over the last 20 years researchers have used the serial reaction time (SRT) task to investigate the nature of spatial sequence learning. They have used the task to identify the locus of spatial sequence learning, identify situations that enhance and those that impair learning, and identify the important cognitive processes that facilitate this type of learning. Although controversies remain, the SRT task has been integral in enhancing our understanding of implicit sequence learning. It is important, however, to ask what, if anything, the discoveries made using the SRT task tell us about implicit learning more generally. This review analyzes the state of the current spatial SRT sequence learning literature highlighting the stimulus-response rule hypothesis of sequence learning which we believe provides a unifying account of discrepant SRT data. It also challenges researchers to use the vast body of knowledge acquired with the SRT task to understand other implicit learning literatures too often ignored in the context of this particular task. This broad perspective will make it possible to identify congruences among data acquired using various different tasks that will allow us to generalize about the nature of implicit learning. PMID:22723815

  5. Medial Unicondylar Knee Arthroplasty Improves Patellofemoral Congruence: a Possible Mechanistic Explanation for Poor Association Between Patellofemoral Degeneration and Clinical Outcome.

    PubMed

    Thein, Ran; Zuiderbaan, Hendrik A; Khamaisy, Saker; Nawabi, Danyal H; Poultsides, Lazaros A; Pearle, Andrew D

    2015-11-01

    The purpose was to determine the effect of medial fixed-bearing unicondylar knee arthroplasty (UKA) on postoperative patellofemoral joint (PFJ) congruence and to analyze the relationship between preoperative PFJ degeneration and clinical outcome. We retrospectively reviewed 110 patients (113 knees) who underwent medial UKA. Radiographs were evaluated to ascertain PFJ degenerative changes and congruence. Clinical outcomes were assessed preoperatively and postoperatively. The postoperative absolute patellar congruence angle (10.05 ± 10.28) was significantly improved compared with the preoperative value (14.23 ± 11.22) (P = 0.0038). No correlation was found between preoperative PFJ congruence or degeneration severity and WOMAC scores at two-year follow-up. Pre-operative PFJ congruence and degenerative changes do not affect UKA clinical outcomes. This finding may be explained by the postoperative improvement in PFJ congruence. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Congruence of Meaning.

    ERIC Educational Resources Information Center

    Suppes, Patrick

    By looking at the history of geometry and the concept of congruence in geometry we can get a new perspective on how to think about the closeness in meaning of two sentences. As in the analysis of congruence in geometry, a definite and concrete set of proposals about congruence of meaning depends essentially on the kind of theoretical framework…

  7. Effects of Worker Classification, Crystallization, and Job Autonomy on Congruence-Satisfaction Relationships.

    ERIC Educational Resources Information Center

    Obermesik, John W.; Beehr, Terry A.

    A majority of the congruence-satisfaction literature has used interest measures based on Holland's theory, although the measures' accuracy in predicting job satisfaction is questionable. Divergent findings among studies on occupational congruence-job satisfaction may be due to ineffective measures of congruence and job satisfaction and lack of…

  8. An Investigation of the Sampling Distribution of the Congruence Coefficient.

    ERIC Educational Resources Information Center

    Broadbooks, Wendy J.; Elmore, Patricia B.

    This study developed and investigated an empirical sampling distribution of the congruence coefficient. The effects of sample size, number of variables, and population value of the congruence coefficient on the sampling distribution of the congruence coefficient were examined. Sample data were generated on the basis of the common factor model and…

  9. The effect of congruence in patient and therapist alliance on patient's symptomatic levels.

    PubMed

    Zilcha-Mano, Sigal; Snyder, John; Silberschatz, George

    2017-05-01

    The ability of alliance to predict outcome has been widely demonstrated, but less is known about the effect of the level of congruence between patient and therapist alliance ratings on outcome. In the current study we examined whether the degree of congruence between patient and therapist alliance ratings can predict symptomatic levels 1 month later in treatment. The sample consisted of 127 patient-therapist dyads. Patients and therapists reported on their alliance levels, and patients reported their symptomatic levels 1 month later. Polynomial regression and response surface analysis were used to examine congruence. Findings suggest that when the congruence level of patient and therapist alliance ratings was not taken into account, only the therapist's alliance served as a significant predictor of symptomatic levels. But when the degree of congruence between patient and therapist alliance ratings was considered, the degree of congruence was a significant predictor of symptomatic levels 1 month later in treatment. Findings support the importance of the level of congruence between patient and therapist alliance ratings in predicting patient's symptomatic levels.
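
    A minimal sketch of the polynomial-regression approach to congruence used above (in the style of Edwards & Parry, 1993): the outcome is regressed on patient (X) and therapist (Y) ratings plus the second-order terms X², XY and Y², and the fitted surface is then inspected along the congruence (X = Y) and incongruence (X = -Y) lines. All data and variable names are illustrative, not the study's.

    ```python
    # Hypothetical polynomial regression for congruence analysis.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 127
    X = rng.normal(size=n)                      # patient alliance (centred)
    Y = X + rng.normal(scale=0.7, size=n)       # therapist alliance (centred)
    symptoms = (X - Y) ** 2 + rng.normal(scale=0.5, size=n)   # worse when ratings diverge

    design = np.column_stack([np.ones(n), X, Y, X**2, X * Y, Y**2])
    coefs, *_ = np.linalg.lstsq(design, symptoms, rcond=None)
    b0, b1, b2, b3, b4, b5 = coefs

    # Curvature along the incongruence line X = -Y; a positive value means symptoms
    # rise as patient and therapist ratings move apart.
    print("incongruence-line curvature:", b3 - b4 + b5)
    ```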

  10. Axial linear patellar displacement: a new measurement of patellofemoral congruence.

    PubMed

    Urch, Scott E; Tritle, Benjamin A; Shelbourne, K Donald; Gray, Tinker

    2009-05-01

    The tools for measuring the congruence angle with digital radiography software can be difficult to use; therefore, the authors sought to develop a new, easy, and reliable method for measuring patellofemoral congruence. The authors hypothesized that the linear displacement measurement would correlate well with the congruence angle measurement. Cohort study (diagnosis); Level of evidence, 2. On Merchant view radiographs obtained digitally, the authors measured the congruence angle and a new linear displacement measurement on preoperative and postoperative radiographs of 31 patients who suffered unilateral patellar dislocations and 100 uninjured subjects. The linear displacement measurement was obtained by drawing a reference line across the medial and lateral trochlear facets. Perpendicular lines were drawn from the depth of the sulcus through the reference line and from the apex of the posterior tip of the patella through the reference line. The distance between the perpendicular lines was the linear displacement measurement. The measurements were obtained twice at different sittings. The observer was blinded as to the previous measurements to establish reliability. Measurements were compared to determine whether the linear displacement measurement correlated with congruence angle. Intraobserver reliability was above r(2) = .90 for all measurements. In patients with patellar dislocations, the mean congruence angle preoperatively was 33.5 degrees, compared with 12.1 mm for linear displacement (r(2) = .92). The mean congruence angle postoperatively was 11.2 degrees, compared with 4.0 mm for linear displacement (r(2) = .89). For normal subjects, the mean congruence angle was -3 degrees and the mean linear displacement was 0.2 mm. The linear displacement measurement was found to correlate with congruence angle measurements and may be an easy and useful tool for clinicians to evaluate patellofemoral congruence objectively.
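
    A hypothetical geometric reading of the linear displacement measurement (coordinates and landmark values below are illustrative, not taken from the paper): project the sulcus depth and the patellar apex onto the trochlear facet reference line and take the distance between the two projection points.

    ```python
    # Sketch of the linear displacement computation on digitised landmark coordinates (mm).
    import numpy as np

    def linear_displacement(facet_medial, facet_lateral, sulcus, patella_apex):
        p0, p1 = np.asarray(facet_medial, float), np.asarray(facet_lateral, float)
        u = (p1 - p0) / np.linalg.norm(p1 - p0)   # unit vector along the facet reference line
        s_sulcus = np.dot(np.asarray(sulcus, float) - p0, u)        # sulcus position along the line
        s_apex = np.dot(np.asarray(patella_apex, float) - p0, u)    # patellar apex position along the line
        return abs(s_apex - s_sulcus)             # offset between the two perpendicular feet

    # Illustrative landmarks: facet line from (0,0) to (40,0), sulcus at (21,-8), apex at (25,6) -> 4.0
    print(linear_displacement((0, 0), (40, 0), (21, -8), (25, 6)))
    ```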

  11. Patient and caregiver congruence: the importance of dyads in heart failure care.

    PubMed

    Retrum, Jessica H; Nowels, Carolyn T; Bekelman, David B

    2013-01-01

    Informal (family) caregivers are integrally involved in chronic heart failure (HF) care. Few studies have examined HF patients and their informal caregiver as a unit in a relationship, or a dyad. Dyad congruence, or consistency in perspective, is relevant to numerous aspects of living with HF and HF care. Incongruence or lack of communication could impair disease management and advance care planning. The purpose of this qualitative study was to examine for congruence and incongruence between HF patients and their informal (family) caregivers. Secondary analyses examined the relationship of congruence to emotional distress and whether dyad relationship characteristics (eg, parent-child vs spouse) were associated with congruence. Thirty-four interviews consisting of HF patients and their current informal caregiver (N = 17 dyads) were conducted. Each dyad member was asked similar questions about managing HF symptoms, psychosocial care, and planning for the future. Interviews were transcribed and analyzed using the general inductive approach. Congruence, incongruence, and lack of communication between patients and caregivers were identified in areas such as managing illness, perceived care needs, perspectives about the future of HF, and end-of-life issues. Seven dyads were generally congruent, 4 were incongruent, and 6 demonstrated a combination of congruence and incongruence. Much of the tension and distress among dyads related to conflicting views about how emotions should be dealt with or expressed. Dyad relationship (parent-child vs spouse) was not clearly associated with congruence, although the relationship did appear to be related to perceived caregiving roles. Several areas of HF clinical and research relevance, including self-care, advance care planning, and communication, were affected by congruence. Further research is needed to define how congruence is related to other relationship characteristics, such as relationship quality, how congruence can best be measured quantitatively, and to what degree modifying congruence will lead to improved HF patient and caregiver outcomes.

  12. Preserved Discrimination Performance and Neural Processing during Crossmodal Attention in Aging

    PubMed Central

    Mishra, Jyoti; Gazzaley, Adam

    2013-01-01

    In a recent study in younger adults (19-29 year olds) we showed evidence that distributed audiovisual attention resulted in improved discrimination performance for audiovisual stimuli compared to focused visual attention. Here, we extend our findings to healthy older adults (60-90 year olds), showing that performance benefits of distributed audiovisual attention in this population match those of younger adults. Specifically, improved performance was revealed in faster response times for semantically congruent audiovisual stimuli during distributed relative to focused visual attention, without any differences in accuracy. For semantically incongruent stimuli, discrimination accuracy was significantly improved during distributed relative to focused attention. Furthermore, event-related neural processing showed intact crossmodal integration in higher performing older adults similar to younger adults. Thus, there was insufficient evidence to support an age-related deficit in crossmodal attention. PMID:24278464

  13. Characteristic sounds facilitate visual search.

    PubMed

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  14. Further Insight and Additional Inference Methods for Polynomial Regression Applied to the Analysis of Congruence

    ERIC Educational Resources Information Center

    Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti

    2010-01-01

    In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to applying difference score in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…

  15. Modality-specific selective attention attenuates multisensory integration.

    PubMed

    Mozolic, Jennifer L; Hugenschmidt, Christina E; Peiffer, Ann M; Laurienti, Paul J

    2008-01-01

    Stimuli occurring in multiple sensory modalities that are temporally synchronous or spatially coincident can be integrated together to enhance perception. Additionally, the semantic content or meaning of a stimulus can influence cross-modal interactions, improving task performance when these stimuli convey semantically congruent or matching information, but impairing performance when they contain non-matching or distracting information. Attention is one mechanism that is known to alter processing of sensory stimuli by enhancing perception of task-relevant information and suppressing perception of task-irrelevant stimuli. It is not known, however, to what extent attention to a single sensory modality can minimize the impact of stimuli in the unattended sensory modality and reduce the integration of stimuli across multiple sensory modalities. Our hypothesis was that modality-specific selective attention would limit processing of stimuli in the unattended sensory modality, resulting in a reduction of performance enhancements produced by semantically matching multisensory stimuli, and a reduction in performance decrements produced by semantically non-matching multisensory stimuli. The results from two experiments utilizing a cued discrimination task demonstrate that selective attention to a single sensory modality prevents the integration of matching multisensory stimuli that is normally observed when attention is divided between sensory modalities. Attention did not reliably alter the amount of distraction caused by non-matching multisensory stimuli on this task; however, these findings highlight a critical role for modality-specific selective attention in modulating multisensory integration.

  16. Flexible conflict management: conflict avoidance and conflict adjustment in reactive cognitive control.

    PubMed

    Dignath, David; Kiesel, Andrea; Eder, Andreas B

    2015-07-01

    Conflict processing is assumed to serve two crucial, yet distinct functions: Regarding task performance, control is adjusted to overcome the conflict. Regarding task choice, control is harnessed to bias decision making away from the source of conflict. Despite recent theoretical progress, until now two lines of research have addressed these conflict-management strategies independently of each other. In this research, we used a voluntary task-switching paradigm in combination with response interference tasks to study both strategies in concert. In Experiment 1, participants chose between two univalent tasks on each trial. Switch rates increased following conflict trials, indicating avoidance of conflict. Furthermore, congruency effects in reaction times and error rates were reduced following conflict trials, demonstrating conflict adjustment. In Experiment 2, we used bivalent instead of univalent stimuli. Conflict adjustment in task performance was unaffected by this manipulation, but conflict avoidance was not observed. Instead, task switches were reduced after conflict trials. In Experiment 3, we used tasks comprising univalent or bivalent stimuli. Only tasks with univalent stimuli revealed conflict avoidance, whereas conflict adjustment was found for all tasks. On the basis of established theories of cognitive control, an integrative process model is described that can account for flexible conflict management. (c) 2015 APA, all rights reserved.

  17. Crossmodal correspondences in product packaging. Assessing color-flavor correspondences for potato chips (crisps).

    PubMed

    Piqueras-Fiszman, Betina; Spence, Charles

    2011-12-01

    We report a study designed to investigate consumers' crossmodal associations between the color of packaging and flavor varieties in crisps (potato chips). This product category was chosen because of the long-established but conflicting color-flavor conventions that exist for the salt and vinegar and cheese and onion flavor varieties in the UK. The use of both implicit and explicit measures of this crossmodal association revealed that consumers responded more slowly, and made more errors, when they had to pair the color and flavor that they implicitly thought of as being "incongruent" with the same response key. Furthermore, clustering consumers by the brand that they normally purchased revealed that the main reason why this pattern of results was observed could be their differing acquaintance with one brand versus another. In addition, when participants tried the two types of crisps from "congruently" and "incongruently" colored packets, some were unable to guess the flavor correctly in the latter case. These strong crossmodal associations did not have a significant effect on participants' hedonic appraisal of the crisps, but did arouse confusion. These results are relevant in terms of R&D, since ascertaining the appropriate color of the packaging across flavor varieties ought normally to help achieve immediate product recognition and consumer satisfaction. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Cortical reorganization in postlingually deaf cochlear implant users: Intra-modal and cross-modal considerations.

    PubMed

    Stropahl, Maren; Chen, Ling-Chia; Debener, Stefan

    2017-01-01

    With the advances of cochlear implant (CI) technology, many deaf individuals can partially regain their hearing ability. However, there is a large variation in the level of recovery. Cortical changes induced by hearing deprivation and restoration with CIs have been thought to contribute to this variation. The current review aims to identify these cortical changes in postlingually deaf CI users and discusses their maladaptive or adaptive relationship to the CI outcome. Overall, intra-modal and cross-modal reorganization patterns have been identified in postlingually deaf CI users in visual and in auditory cortex. Even though cross-modal activation in auditory cortex is considered as maladaptive for speech recovery in CI users, a similar activation relates positively to lip reading skills. Furthermore, cross-modal activation of the visual cortex seems to be adaptive for speech recognition. Currently available evidence points to an involvement of further brain areas and suggests that a focus on the reversal of visual take-over of the auditory cortex may be too limited. Future investigations should consider expanded cortical as well as multi-sensory processing and capture different hierarchical processing steps. Furthermore, prospective longitudinal designs are needed to track the dynamics of cortical plasticity that takes place before and after implantation. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  19. Children With ADHD Show Impairments in Multiple Stages of Information Processing in a Stroop Task: An ERP Study.

    PubMed

    Kóbor, Andrea; Takács, Ádám; Bryce, Donna; Szűcs, Dénes; Honbolygó, Ferenc; Nagy, Péter; Csépe, Valéria

    2015-01-01

    This study investigated the role of impaired inhibitory control as a factor underlying attention deficit hyperactivity disorder (ADHD). Children with ADHD and typically developing children completed an animal Stroop task while electroencephalogram (EEG) was recorded. The lateralized readiness potential and event-related brain potentials associated with perceptual and conflict processing were analyzed. Children with ADHD were slower to give correct responses irrespective of congruency, and slower to prepare correct responses in the incongruent condition. This delay could result from enhanced effort allocation at earlier processing stages, indicated by differences in P1, N1, and conflict sustained potential. Results suggest multiple deficits in information processing rather than a specific response inhibition impairment.

  20. Emotional congruence with children and sexual offending against children: a meta-analytic review.

    PubMed

    McPhail, Ian V; Hermann, Chantal A; Nunes, Kevin L

    2013-08-01

    Emotional congruence with children is an exaggerated affective and cognitive affiliation with children that is posited to be involved in the initiation and maintenance of sexual offending against children. The current meta-analysis examined the relationship between emotional congruence with children and sexual offending against children, sexual recidivism, and change following sexual offender treatment. A systematic literature review of online academic databases, conference proceedings, governmental agency websites, and article, book chapter, and book reference lists was performed. Thirty studies on emotional congruence with children in sexual offenders against children (SOC) were included in a random-effects meta-analysis. Extrafamilial SOC, especially those with male victims, evidenced higher emotional congruence with children than most non-SOC comparison groups and intrafamilial SOC. In contrast, intrafamilial SOC evidenced less emotional congruence with children than many of the non-SOC comparison groups. Higher levels of emotional congruence with children were associated with moderately higher rates of sexual recidivism. The association between emotional congruence with children and sexual recidivism was significantly stronger in extrafamilial SOC samples (d = 0.58, 95% CI [0.31, 0.85]) compared with intrafamilial SOC samples (d = -0.15, 95% CI [-0.58, 0.27]). Similarly, emotional congruence with children showed a significant reduction from pre- to posttreatment for extrafamilial SOC (d = 0.41, 95% CI [0.33, 0.85]), but not for intrafamilial SOC (d = 0.06, 95% CI [-0.10, 0.22]). Emotional congruence with children is a characteristic of extrafamilial SOC, is moderately predictive of sexual recidivism, and is potentially amenable to change through treatment efforts. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  1. A sound and efficient measure of joint congruence.

    PubMed

    Conconi, Michele; Castelli, Vincenzo Parenti

    2014-09-01

    In the medical world, the term "congruence" is used to describe, by visual inspection, how well the articular surfaces mate with each other, evaluating the joint's capability to distribute an applied load from a purely geometrical perspective. Congruence is commonly employed for assessing articular physiology and for the comparison between normal and pathological states. A measure of it would thus represent a valuable clinical tool. Several approaches for the quantification of joint congruence have been proposed in the biomechanical literature, differing in how the articular contact is modeled. This makes it difficult to compare different measures. In particular, in previous articles a congruence measure has been presented which proved to be efficient and suitable for clinical practice, but it was still empirically defined. This article aims to provide sound theoretical support for this congruence measure by means of the Winkler elastic foundation contact model which, with respect to others, has the advantage of holding also for highly conforming surfaces, as most human articulations are. First, the geometrical relation between the applied load and the resulting peak pressure is analytically derived from the elastic foundation contact model, providing a theoretically sound approach to the definition of a congruence measure. Then, the capability of the congruence measure to capture the same geometrical relation is shown. Finally, the reliability of the congruence measure is discussed. © IMechE 2014.
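
    For reference, the basic Winkler (elastic foundation) relations on which such a derivation rests are sketched below; the paper's own derivation may differ in detail. Local pressure is proportional to the local interpenetration of the undeformed surfaces, so for a given load a more congruent joint spreads the interpenetration over a larger contact area and lowers the peak pressure.

    ```latex
    % Winkler elastic foundation: pressure proportional to local interpenetration.
    p(\mathbf{x}) = K\,\delta(\mathbf{x}), \qquad
    F = \int_{A} p(\mathbf{x})\,\mathrm{d}A = K \int_{A} \delta(\mathbf{x})\,\mathrm{d}A, \qquad
    p_{\max} = K\,\delta_{\max}
    ```

    Here p is the contact pressure, δ the interpenetration of the undeformed surfaces, K the foundation modulus, A the contact area and F the applied load.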

  2. Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval.

    PubMed

    Wang, Di; Gao, Xinbo; Wang, Xiumei; He, Lihuo; Yuan, Bo

    2016-10-01

    Multimodal hashing, which conducts effective and efficient nearest neighbor search across heterogeneous data on large-scale multimedia databases, has been attracting increasing interest, given the explosive growth of multimedia content on the Internet. Recent multimodal hashing research mainly aims at learning compact binary codes that preserve semantic information given by labels. The overwhelming majority of these methods are similarity-preserving approaches which approximate the pairwise similarity matrix with Hamming distances between the to-be-learnt binary hash codes. However, these methods ignore the discriminative property in the hash learning process, which leaves hash codes from different classes indistinguishable and therefore reduces the accuracy and robustness of nearest neighbor search. To this end, we present a novel multimodal hashing method, named multimodal discriminative binary embedding (MDBE), which focuses on learning discriminative hash codes. First, the proposed method formulates hash function learning in terms of classification, where the binary codes generated by the learned hash functions are expected to be discriminative. Then, it exploits the label information to discover the shared structures inside heterogeneous data. Finally, the learned structures are preserved in the hash codes so that items from the same class produce similar binary codes. Hence, the proposed MDBE can preserve both discriminability and similarity for hash codes, and will enhance retrieval accuracy. Thorough experiments on benchmark data sets demonstrate that the proposed method achieves excellent accuracy and competitive computational efficiency compared with state-of-the-art methods for the large-scale cross-modal retrieval task.
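
    Once binary codes have been learned, cross-modal retrieval reduces to a nearest-neighbour search in Hamming space. The sketch below shows that generic retrieval step with random codes (it is not the MDBE learning algorithm itself; code length and data are illustrative).

    ```python
    # Generic Hamming-space retrieval over learned binary codes.
    import numpy as np

    rng = np.random.default_rng(2)
    n_db, n_bits = 1000, 32
    db_codes = rng.integers(0, 2, size=(n_db, n_bits), dtype=np.uint8)  # e.g. text codes
    query = rng.integers(0, 2, size=n_bits, dtype=np.uint8)             # e.g. an image code

    hamming = np.count_nonzero(db_codes != query, axis=1)  # distance to every database item
    top10 = np.argsort(hamming)[:10]                       # indices of the ten nearest items
    print(top10, hamming[top10])
    ```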

  3. Dynamics of cortico-subcortical cross-modal operations involved in audio-visual object detection in humans.

    PubMed

    Fort, Alexandra; Delpuech, Claude; Pernier, Jacques; Giard, Marie-Hélène

    2002-10-01

    Very recently, a number of neuroimaging studies in humans have begun to investigate the question of how the brain integrates information from different sensory modalities to form unified percepts. Already, intermodal neural processing appears to depend on the modalities of inputs or the nature (speech/non-speech) of information to be combined. Yet, the variety of paradigms, stimuli and techniques used makes it difficult to understand the relationships between the factors operating at the perceptual level and the underlying physiological processes. In a previous experiment, we used event-related potentials to describe the spatio-temporal organization of audio-visual interactions during a bimodal object recognition task. Here we examined the network of cross-modal interactions involved in simple detection of the same objects. The objects were defined either by unimodal auditory or visual features alone, or by the combination of the two features. As expected, subjects detected bimodal stimuli more rapidly than either unimodal stimulus. Combined analysis of potentials, scalp current densities and dipole modeling revealed several interaction patterns within the first 200 ms post-stimulus: in occipito-parietal visual areas (45-85 ms), in deep brain structures, possibly the superior colliculus (105-140 ms), and in right temporo-frontal regions (170-185 ms). These interactions differed from those found during object identification in sensory-specific areas and possibly in the superior colliculus, indicating that the neural operations governing multisensory integration depend crucially on the nature of the perceptual processes involved.

  4. Priming the holiday spirit: persistent activation due to extraexperimental experiences.

    PubMed

    Coane, Jennifer H; Balota, David A

    2009-12-01

    The concept of activation is a critical component of many models of cognition. A key characteristic of activation is that recent experience with a concept or stimulus increases the accessibility of the corresponding representation. The extent to which increases in accessibility occur as a result of experiences outside of laboratory settings has not been extensively explored. In the present study, we presented lexical stimuli associated with different holidays and festivities over the course of a year in a lexical decision task. When stimulus meaning and time of testing were congruent (e.g., leprechaun in March), response times were faster and accuracy greater than when meaning and time of test were incongruent (e.g., leprechaun in November). Congruency also benefited performance on a surprise free recall task of the items presented earlier in the lexical decision task. The discussion focuses on potential theoretical accounts of this heightened accessibility of time-of-the-year-relevant concepts.

  5. A single bout of meditation biases cognitive control but not attentional focusing: Evidence from the global-local task.

    PubMed

    Colzato, Lorenza S; van der Wel, Pauline; Sellaro, Roberta; Hommel, Bernhard

    2016-01-01

    Recent studies show that a single bout of meditation can impact information processing. We were interested to see whether this impact extends to attentional focusing and the top-down control over irrelevant information. Healthy adults underwent brief single bouts of either focused attention meditation (FAM), which is assumed to increase top-down control, or open monitoring meditation (OMM), which is assumed to weaken top-down control, before performing a global-local task. While the size of the global-precedence effect (reflecting attentional focusing) was unaffected by type of meditation, the congruency effect (indicating the failure to suppress task-irrelevant information) was considerably larger after OMM than after FAM. Our findings suggest that engaging in particular kinds of meditation creates particular cognitive-control states that bias the individual processing style toward either goal-persistence or cognitive flexibility. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Holistic Processing in the Composite Task Depends on Face Size.

    PubMed

    Ross, David A; Gauthier, Isabel

    Holistic processing is a hallmark of face processing. There is evidence that holistic processing is strongest for faces at identification distance, 2-10 meters from the observer. However, this evidence is based on tasks that have been little used in the literature and that are indirect measures of holistic processing. We use the composite task, a well-validated and frequently used paradigm, to measure the effect of viewing distance on holistic processing. In line with previous work, we find a congruency x alignment effect that is stronger for faces that are close (2 m equivalent distance) than for faces that are farther away (24 m equivalent distance). In contrast, the alignment effect for same trials, used by several authors to measure holistic processing, produced results that are difficult to interpret. We conclude that our results converge with previous findings, providing more direct evidence for an effect of size on holistic processing.

  7. Action Intentions Modulate Allocation of Visual Attention: Electrophysiological Evidence

    PubMed Central

    Wykowska, Agnieszka; Schubö, Anna

    2012-01-01

    In line with the Theory of Event Coding (Hommel et al., 2001), action planning has been shown to affect perceptual processing, an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Hommel, 2010). This paper investigates the electrophysiological correlates of action-related modulations of selection mechanisms in visual perception. A paradigm combining a visual search task for size and luminance targets with a movement task (grasping or pointing) was introduced, and the EEG was recorded while participants performed the tasks. The results showed that the behavioral congruency effects, i.e., better performance in congruent (relative to incongruent) action-perception trials, were reflected in a modulation of the P1 component as well as the N2pc (an ERP marker of spatial attention). These results support the argument that action planning modulates early perceptual and attentional mechanisms. PMID:23060841

  8. The Influence of Task-Irrelevant Music on Language Processing: Syntactic and Semantic Structures

    PubMed Central

    Hoch, Lisianne; Poulin-Charronnat, Benedicte; Tillmann, Barbara

    2011-01-01

    Recent research has suggested that music and language processing share neural resources, leading to new hypotheses about interference in the simultaneous processing of these two structures. The present study investigated the effect of a musical chord's tonal function on syntactic processing (Experiment 1) and semantic processing (Experiment 2) using a cross-modal paradigm and controlling for acoustic differences. Participants read sentences and performed a lexical decision task on the last word, which was, syntactically or semantically, expected or unexpected. The simultaneously presented (task-irrelevant) musical sequences ended on either an expected tonic or a less-expected subdominant chord. Experiment 1 revealed interactive effects between music-syntactic and linguistic-syntactic processing. Experiment 2 showed only main effects of both music-syntactic and linguistic-semantic expectations. An additional analysis over the two experiments revealed that linguistic violations interacted with musical violations, though not differently as a function of the type of linguistic violations. The present findings were discussed in light of currently available data on the processing of music as well as of syntax and semantics in language, leading to the hypothesis that resources might be shared for structural integration processes and sequencing. PMID:21713122

  9. Age differences in suprathreshold sensory function.

    PubMed

    Heft, Marc W; Robinson, Michael E

    2014-02-01

    While there is general agreement that vision and audition decline with aging, observations for the somatosensory senses and taste are less clear. The purpose of this study was to assess age differences in multimodal sensory perception in healthy, community-dwelling participants. Participants (100 females and 78 males aged 20-89 years) judged the magnitudes of sensations associated with graded levels of thermal, tactile, and taste stimuli in separate testing sessions using a cross-modality matching (CMM) procedure. During each testing session, participants also rated words that describe magnitudes of percepts associated with differing-level sensory stimuli. The words provided contextual anchors for the sensory ratings, and the word-rating task served as a control for the CMM. The mean sensory ratings were used as dependent variables in a MANOVA for each sensory domain, with age and sex as between-subject variables. These analyses were repeated with the grand means for the word ratings as a covariate to control for the rating task. The results of this study suggest that there are modest age differences for somatosensory and taste domains. While the magnitudes of these differences are mediated somewhat by age differences in the rating task, differences in warm temperature, tactile, and salty taste persist.

  10. Modality-specific alpha modulations facilitate long-term memory encoding in the presence of distracters.

    PubMed

    Jiang, Haiteng; van Gerven, Marcel A J; Jensen, Ole

    2015-03-01

    It has been proposed that long-term memory encoding is not only dependent on engaging task-relevant regions but also on disengaging task-irrelevant regions. In particular, oscillatory alpha activity has been shown to be involved in shaping the functional architecture of the working brain because it reflects the functional disengagement of specific regions in attention and memory tasks. We here ask if such allocation of resources by alpha oscillations generalizes to long-term memory encoding in a cross-modal setting in which we acquired the ongoing brain activity using magnetoencephalography. Participants were asked to encode pictures while ignoring simultaneously presented words and vice versa. We quantified the brain activity during rehearsal reflecting subsequent memory in the different attention conditions. The key finding was that successful long-term memory encoding is reflected by alpha power decreases in the sensory region of the to-be-attended modality and increases in the sensory region of the to-be-ignored modality to suppress distraction during rehearsal period. Our results corroborate related findings from attention studies by demonstrating that alpha activity is also important for the allocation of resources during long-term memory encoding in the presence of distracters.

  11. Clock synchronization by accelerated observers - Metric construction for arbitrary congruences of world lines

    NASA Technical Reports Server (NTRS)

    Henriksen, R. N.; Nelson, L. A.

    1985-01-01

    Clock synchronization in an arbitrarily accelerated observer congruence is considered. A general solution is obtained that maintains the isotropy and coordinate independence of the one-way speed of light. Attention is also given to various particular cases, including the rotating-disk (ring) congruence. An explicit, congruence-based spacetime metric is constructed according to Einstein's clock synchronization procedure, and the equation for the geodesics of the spacetime is derived using the Hamilton-Jacobi method. The application of interferometric techniques (absolute phase radio interferometry, VLBI) to the detection of the 'global Sagnac effect' is also discussed.

  12. The association between patient-therapist MATRIX congruence and treatment outcome.

    PubMed

    Mendlovic, Shlomo; Saad, Amit; Roll, Uri; Ben Yehuda, Ariel; Tuval-Mashiah, Rivka; Atzil-Slonim, Dana

    2018-03-14

    The present study aimed to examine the association between the patient-therapist micro-level congruence/incongruence ratio and psychotherapeutic outcome. Nine good-outcome and nine poor-outcome psychodynamic treatments (segregated by comparing pre- and post-treatment BDI-II scores) were analyzed (N = 18) moment by moment using the MATRIX (total number of MATRIX codes analyzed = 11,125). MATRIX congruence was defined as similar adjacent MATRIX codes. The congruence/incongruence ratio tended to increase as treatment progressed only in good-outcome treatments. Progression of the MATRIX codes' congruence/incongruence ratio is thus associated with good psychotherapy outcome.

  13. Task demands and the pressures of everyday life: associations between cardiovascular reactivity and work blood pressure and heart rate.

    PubMed

    Steptoe, A; Cropley, M; Joekes, K

    2000-01-01

    Associations between cardiovascular stress reactivity and blood pressure and heart rate recorded in everyday life were hypothesized to depend on the stressfulness of the ambulatory monitoring period relative to standardized tasks and on activity levels at the time of measurement. One hundred two female and 60 male school teachers carried out high- and low-demand tasks under standardized conditions and underwent ambulatory monitoring during the working day. Stress ratings during the day were close to those recorded during the low-demand task. Reactions to the low-demand task were significant predictors of ambulatory blood pressure and heart rate independent of baseline, age, gender, and body mass. Associations were more consistent for ambulatory recordings taken when participants were seated than when they were standing, and when the ambulatory monitoring day was considered as stressful as usual or more stressful than usual, rather than less stressful than usual. Laboratory-field associations of cardiovascular activity depend in part on the congruence of stressfulness and physical activity level in the two situations.

  14. Harmonic context influences pitch class equivalence judgments through gestalt and congruency effects.

    PubMed

    Slana, Anka; Repovš, Grega; Fitch, W Tecumseh; Gingras, Bruno

    2016-05-01

    The context in which a stimulus is presented shapes the way it is processed. This effect has been studied extensively in the field of visual perception. Our understanding of how context affects the processing of auditory stimuli is, however, rather limited. Western music is primarily built on melodies (succession of pitches) typically accompanied by chords (harmonic context), which provides a natural template for the study of context effects in auditory processing. Here, we investigated whether pitch class equivalence judgments of tones are affected by the harmonic context within which the target tones are embedded. Nineteen musicians and 19 non-musicians completed a change detection task in which they were asked to determine whether two successively presented target tones, heard either in isolation or with a chordal accompaniment (same or different chords), belonged to the same pitch class. Both musicians and non-musicians were most accurate when the chords remained the same, less so in the absence of chordal accompaniment, and least when the chords differed between both target tones. Further analysis investigating possible mechanisms underpinning these effects of harmonic context on task performance revealed that both a change in gestalt (change in either chord or pitch class), as well as incongruency between change in target tone pitch class and change in chords, led to reduced accuracy and longer reaction times. Our results demonstrate that, similarly to visual processing, auditory processing is influenced by gestalt and congruency effects. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Modulation of the N400 component in relation to hypomanic personality traits in a word meaning ambiguity resolution task.

    PubMed

    Raucher-Chéné, Delphine; Terrien, Sarah; Gobin, Pamela; Gierski, Fabien; Kaladjian, Arthur; Besche-Richard, Chrystel

    2017-09-01

    High levels of hypomanic personality traits have been associated with an increased risk of developing bipolar disorder (BD). Changes in semantic content, impaired verbal associations, abnormal prosody, and abnormal speed of language are core features of BD, and are thought to be related to semantic processing abnormalities. In the present study, we used event-related potentials to investigate the relation between semantic processing (N400 component) and hypomanic personality traits. We assessed 65 healthy young adults on the Hypomanic Personality Scale (HPS). Event-related potentials were recorded during a semantic ambiguity resolution task exploring semantic ambiguity (polysemous word ending a sentence) and congruency (target word semantically related to the sentence). As expected, semantic ambiguity and congruency both elicited an N400 effect across our sample. Correlation analyses showed a significant positive relationship between the Social Vitality subscore of the HPS and N400 modulation in the frontal region of interest in the incongruent unambiguous condition, and in the frontocentral region of interest in the incongruent ambiguous condition. We found differences in semantic processing (i.e., detection of incongruence and semantic inhibition) in individuals with higher Social Vitality subscores. In the light of the literature, we discuss the notion that a semantic processing impairment could be a potential marker of vulnerability to BD, and one that needs to be explored further in this clinical population. © 2017 The Authors. Psychiatry and Clinical Neurosciences © 2017 Japanese Society of Psychiatry and Neurology.

  16. Are Tutor Behaviors in Problem-Based Learning Stable? A Generalizability Study of Social Congruence, Expertise and Cognitive Congruence

    ERIC Educational Resources Information Center

    Williams, Judith C.; Alwis, W. A. M.; Rotgans, Jerome I.

    2011-01-01

    The purpose of this study was to investigate the stability of three distinct tutor behaviors (1) use of subject-matter expertise, (2) social congruence and (3) cognitive congruence, in a problem-based learning (PBL) environment. The data comprised the input from 16,047 different students to a survey of 762 tutors administered in three consecutive…

  17. When music is salty: The crossmodal associations between sound and taste.

    PubMed

    Guetta, Rachel; Loui, Psyche

    2017-01-01

    Here we investigate associations between complex auditory and complex taste stimuli. A novel piece of music was composed and recorded in four different styles of musical articulation to reflect the four basic taste groups (sweet, sour, salty, bitter). In Experiment 1, participants performed above chance at pairing the music clips with corresponding taste words. Experiment 2 used multidimensional scaling to interpret how participants categorize these musical stimuli, and to show that auditory categories can be organized in a similar manner as taste categories. Experiment 3 introduced four different flavors of custom-made chocolate ganache and showed that participants can match music clips with the corresponding taste stimuli with above-chance accuracy. Experiment 4 demonstrated the partial role of pleasantness in crossmodal mappings between sound and taste. The present findings confirm that individuals are able to make crossmodal associations between complex auditory and gustatory stimuli, and that valence may mediate multisensory integration in the general population.

  18. Characteristic sounds facilitate visual search

    PubMed Central

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2009-01-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing “meow” did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds. PMID:18567253

  19. Conflict adaptation in patients diagnosed with schizophrenia.

    PubMed

    Abrahamse, Elger; Ruitenberg, Marit; Boddewyn, Sarah; Oreel, Edith; de Schryver, Maarten; Morrens, Manuel; van Dijck, Jean-Philippe

    2017-11-01

    Cognitive control impairments may contribute strongly to the overall cognitive deficits observed in patients diagnosed with schizophrenia. In the current study we explore a specific cognitive control function referred to as conflict adaptation. Previous studies on conflict adaptation in schizophrenia showed equivocal results, and, moreover, were plagued by confounded research designs. Here we assessed for the first time conflict adaptation in schizophrenia with a design that avoided the major confounds of feature integration and stimulus-response contingency learning. Sixteen patients diagnosed with schizophrenia and sixteen healthy, matched controls performed a vocal Stroop task to determine the congruency sequence effect - a marker of conflict adaptation. A reliable congruency sequence effect was observed for both healthy controls and patients diagnosed with schizophrenia. These findings indicate that schizophrenia is not necessarily accompanied by impaired conflict adaptation. As schizophrenia has been related to abnormal functioning in core conflict adaptation areas such as anterior cingulate and dorsolateral prefrontal cortex, further research is required to better understand the precise impact of such abnormal brain functioning at the behavioral level. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. The influence of thematic congruency, typicality and divided attention on memory for radio advertisements.

    PubMed

    Martín-Luengo, Beatriz; Luna, Karlos; Migueles, Malen

    2014-01-01

    We examined the effects of thematic congruence between ads and the programme in which they are embedded. We also studied the typicality of the to-be-remembered information (high- and low-typicality elements) and the effect of divided attention on memory for radio ad contents. Participants listened to four radio programmes with thematically congruent and incongruent ads embedded, and completed a true/false recognition test indicating the level of confidence in their answers. Half of the sample performed an additional task (divided attention group) while listening to the radio excerpts. In general, recognition memory was better for incongruent ads and low-typicality statements. Confidence in hits was higher in the undivided attention group, although there were no differences in performance. Our results suggest that the widespread practice of embedding ads in thematically congruent programmes negatively affects memory for the ads. In addition, low-typicality features that are usually highlighted by advertisers were better remembered than typical contents. Finally, metamemory evaluations were influenced by the inference that memory should be worse if we do several things at the same time.

  1. Distinct cognitive control mechanisms as revealed by modality-specific conflict adaptation effects.

    PubMed

    Yang, Guochun; Nan, Weizhi; Zheng, Ya; Wu, Haiyan; Li, Qi; Liu, Xun

    2017-04-01

    Cognitive control is essential for resolving conflict in stimulus-response compatibility (SRC) tasks. The SRC effect in the current trial is reduced after an incongruent trial as compared with a congruent trial, a phenomenon termed conflict adaptation (CA). The CA effect is found to be domain-specific, such that it occurs when adjacent trials contain the same type of conflict but disappears when the conflicts are of different types. Similar patterns have been observed when tasks involve different modalities, but the modality-specific effect may have been confounded by task switching. In the current study, we investigated whether or not cognitive control could transfer across auditory and visual conflicts when task switching was controlled. Participants were asked to respond to a visual or auditory (Experiments 1A/B) stimulus, with conflict coming from either the same or a different modality. CA effects showed modality-specific patterns. To account for potential confounding effects caused by differences in task-irrelevant properties, we specifically examined the influence of task-irrelevant properties on CA effects within the visual modality (Experiments 2A/B). Significant CA effects were observed across different conflicts from distinct task-irrelevant properties, ruling out the possibility that the lack of cross-modal CA effects in Experiments 1A/B resulted from differences in task-irrelevant information. Task-irrelevant properties were further matched in Experiments 3A/B to examine the pure effect of modality. Results replicated Experiments 1A/B, showing robust modality-specific CA effects. Taken together, we provide supporting evidence that modality affects cognitive control in conflict resolution, which should be taken into account in theories of cognitive control. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
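
    To make the conflict-adaptation measure concrete, here is a small sketch of the standard congruency sequence effect computation on simulated reaction times (the data and variable names are invented, not the study's design or analysis pipeline):

    ```python
    import numpy as np
    import pandas as pd

    # Simulated trials: incongruent trials are slower on average. The congruency
    # sequence effect (conflict adaptation) is the current-trial congruency effect
    # after congruent trials minus that after incongruent trials.
    rng = np.random.default_rng(1)
    n = 400
    congruent = rng.random(n) < 0.5
    rt = 500 + 60 * (~congruent) + rng.normal(0, 30, n)

    trials = pd.DataFrame({"rt": rt, "congruent": congruent})
    trials["prev_congruent"] = trials["congruent"].shift(1)
    valid = trials.iloc[1:].astype({"prev_congruent": bool})

    means = valid.groupby(["prev_congruent", "congruent"])["rt"].mean()
    effect_after_congruent = means.loc[(True, False)] - means.loc[(True, True)]
    effect_after_incongruent = means.loc[(False, False)] - means.loc[(False, True)]
    print("conflict adaptation (ms):", effect_after_congruent - effect_after_incongruent)
    ```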

  2. Methodological review: measured and reported congruence between preferred and actual place of death.

    PubMed

    Bell, C L; Somogyi-Zalud, E; Masaki, K H

    2009-09-01

    Congruence between preferred and actual place of death is an important palliative care outcome reported in the literature. We examined methods of measuring and reporting congruence to highlight variations impairing cross-study comparisons. Medline, PsychInfo, CINAHL, and Web of Science were systematically searched for clinical research studies examining patient preference and congruence as an outcome. Data were extracted into a matrix, including purpose, reported congruence, and method for eliciting preference. Studies were graded for quality. Using tables of preferred versus actual places of death, an overall congruence (total met preferences out of total preferences) and a kappa statistic of agreement were determined for each study. Twelve studies were identified. Percentage of congruence was reported using four different definitions. Ten studies provided a table or partial table of preferred versus actual deaths for each place. Three studies provided kappa statistics. No study achieved better than moderate agreement when analysed using kappa statistics. A study which elicited ideal preference reported the lowest agreement, while longitudinal studies reporting final preferred place of death yielded the highest agreement (moderate agreement). Two other studies of select populations also yielded moderate agreement. There is marked variation in methods of eliciting and reporting congruence, even among studies focused on congruence as an outcome. Cross-study comparison would be enhanced by the use of similar questions to elicit preference, tables of preferred versus actual places of death, and kappa statistics of agreement.
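
    As a concrete illustration of the two summary statistics mentioned above (overall congruence and a kappa statistic of agreement), a small sketch using a made-up preferred-versus-actual table; the counts and place categories are invented for illustration only:

    ```python
    import numpy as np

    def congruence_and_kappa(table):
        """Overall congruence and Cohen's kappa from a preferred-vs-actual table.

        table[i][j] = number of patients who preferred place i and died in place j.
        """
        table = np.asarray(table, dtype=float)
        n = table.sum()
        observed = np.trace(table) / n                                # met preferences / total
        expected = (table.sum(axis=1) @ table.sum(axis=0)) / n ** 2   # chance agreement
        kappa = (observed - expected) / (1 - expected)
        return observed, kappa

    # invented example; rows/columns = home, hospital, hospice
    print(congruence_and_kappa([[40, 10, 5],
                                [8, 20, 2],
                                [3, 4, 8]]))
    ```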

  3. Disentangling effects of abiotic factors and biotic interactions on cross-taxon congruence in species turnover patterns of plants, moths and beetles.

    PubMed

    Duan, Meichun; Liu, Yunhui; Yu, Zhenrong; Baudry, Jacques; Li, Liangtao; Wang, Changliu; Axmacher, Jan C

    2016-04-01

    High cross-taxon congruence in species diversity patterns is essential for the use of surrogate taxa in biodiversity conservation, but the presence and strength of congruence in species turnover patterns, and the relative contributions of abiotic environmental factors and biotic interactions towards this congruence, remain poorly understood. In our study, we used variation partitioning in multiple regressions to quantify cross-taxon congruence in community dissimilarities of vascular plants, geometrid and arctiinid moths and carabid beetles, subsequently investigating their respective underpinning by abiotic factors and biotic interactions. Significant cross-taxon congruence observed across all taxon pairs was linked to their similar responses towards elevation change. Changes in the vegetation composition were closely linked to carabid turnover, with vegetation structure and associated microclimatic conditions proposed as causes of this link. In contrast, moth assemblages appeared to be dominated by generalist species whose turnover was weakly associated with vegetation changes. Overall, abiotic factors exerted a stronger influence on cross-taxon congruence across our study sites than biotic interactions. The weak congruence in turnover observed particularly between plants and moths highlights the importance of multi-taxon approaches based on groupings of taxa with similar turnovers, rather than the use of single surrogate taxa or environmental proxies, in biodiversity assessments.
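
    A schematic sketch of the two-set variation partitioning referred to above (synthetic data and ordinary least-squares R-squared values; the study itself worked on community-dissimilarity responses, and all variable names here are placeholders):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    def r2(X, y):
        return LinearRegression().fit(X, y).score(X, y)

    def variation_partitioning(y, X_abiotic, X_biotic):
        """Unique abiotic, unique biotic, shared, and unexplained fractions of R^2."""
        r_ab = r2(X_abiotic, y)
        r_bi = r2(X_biotic, y)
        r_full = r2(np.hstack([X_abiotic, X_biotic]), y)
        return r_full - r_bi, r_full - r_ab, r_ab + r_bi - r_full, 1 - r_full

    rng = np.random.default_rng(0)
    X_abiotic = rng.normal(size=(60, 2))                 # e.g. elevation, moisture
    X_biotic = rng.normal(size=(60, 1))                  # e.g. plant compositional turnover
    y = X_abiotic @ [0.6, 0.2] + 0.3 * X_biotic[:, 0] + rng.normal(0, 0.5, 60)
    print(variation_partitioning(y, X_abiotic, X_biotic))
    ```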

  4. Correlates of emotional congruence with children in sexual offenders against children: a test of theoretical models in an incarcerated sample.

    PubMed

    McPhail, Ian V; Hermann, Chantal A; Fernandez, Yolanda M

    2014-02-01

    Emotional congruence with children is a psychological construct theoretically involved in the etiology and maintenance of sexual offending against children. Research conducted to date has not examined the relationship between emotional congruence with children and other psychological meaningful risk factors for sexual offending against children. The current study derived potential correlates of emotional congruence with children from the published literature and proposed three models of emotional congruence with children that contain relatively unique sets of correlates: the blockage, sexual deviance, and psychological immaturity models. Using Area under the Curve analysis, we assessed the relationship between emotional congruence with children and offense characteristics, victim demographics, and psychologically meaningful risk factors in a sample of incarcerated sexual offenders against children (n=221). The sexual deviance model received the most support: emotional congruence with children was significantly associated with deviant sexual interests, sexual self-regulation problems, and cognition that condones and supports child molestation. The blockage model received partial support, and the immaturity model received the least support. Based on the results, we propose a set of further predictions regarding the relationships between emotional congruence with children and other psychologically meaningful risk factors to be examined in future research. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Congruence or discrepancy? Comparing patients' health valuations and physicians' treatment goals for rehabilitation for patients with chronic conditions.

    PubMed

    Nagl, Michaela; Farin, Erik

    2012-03-01

    The aim of this study was to test the congruence of patients' health valuations and physicians' treatment goals for the rehabilitation of chronically ill patients. In addition, patient characteristics associated with greater or less congruence were to be determined. In a questionnaire study, patients' health valuations and physicians' goals were assessed in three chronic conditions [breast cancer (BC), chronic ischemic heart disease (CIHD), and chronic back pain (CBP)] using a ranking method. Sociodemographic variables and health-related quality of life were assessed as patient-related factors that influence congruence. Congruence was determined at the group (Spearman's ρ) and individual levels (percentage of congruence). Patient-related influencing factors were calculated after a simple imputation using multiple logistic regression analysis. At the group level, there were often only low correlations. The mean percentage of congruence was 34.7% (BC), 48.5% (CIHD), and 31.9% (CBP). Patients with BC or CIHD who have a higher level of education showed greater congruence. Our results indicate some high discrepancy rates between physicians' treatment goals and patients' health valuations. It is possible that patients have preferences that do not correspond well with realistic rehabilitation goals or that physicians do not take patients' individual health valuations sufficiently into consideration when setting goals.

  6. Host and parasite morphology influence congruence between host and parasite phylogenies.

    PubMed

    Sweet, Andrew D; Bush, Sarah E; Gustafsson, Daniel R; Allen, Julie M; DiBlasi, Emily; Skeen, Heather R; Weckstein, Jason D; Johnson, Kevin P

    2018-03-23

    Comparisons of host and parasite phylogenies often show varying degrees of phylogenetic congruence. However, few studies have rigorously explored the factors driving this variation. Multiple factors such as host or parasite morphology may govern the degree of phylogenetic congruence. An ideal analysis for understanding the factors correlated with congruence would focus on a diverse host-parasite system for increased variation and statistical power. In this study, we focused on the Brueelia-complex, a diverse and widespread group of feather lice that primarily parasitise songbirds. We generated a molecular phylogeny of the lice and compared this tree with a phylogeny of their avian hosts. We also tested for the contribution of each host-parasite association to the overall congruence. The two trees overall were significantly congruent, but the contribution of individual associations to this congruence varied. To understand this variation, we developed a novel approach to test whether host, parasite or biogeographic factors were statistically associated with patterns of congruence. Both host plumage dimorphism and parasite ecomorphology were associated with patterns of congruence, whereas host body size, other plumage traits and biogeography were not. Our results lay the framework for future studies to further elucidate how these factors influence the process of host-parasite coevolution. Copyright © 2018 Australian Society for Parasitology. Published by Elsevier Ltd. All rights reserved.

  7. Disentangling effects of abiotic factors and biotic interactions on cross-taxon congruence in species turnover patterns of plants, moths and beetles

    PubMed Central

    Duan, Meichun; Liu, Yunhui; Yu, Zhenrong; Baudry, Jacques; Li, Liangtao; Wang, Changliu; Axmacher, Jan C.

    2016-01-01

    High cross-taxon congruence in species diversity patterns is essential for the use of surrogate taxa in biodiversity conservation, but the presence and strength of congruence in species turnover patterns, and the relative contributions of abiotic environmental factors and biotic interactions towards this congruence, remain poorly understood. In our study, we used variation partitioning in multiple regressions to quantify cross-taxon congruence in community dissimilarities of vascular plants, geometrid and arctiinid moths and carabid beetles, subsequently investigating their respective underpinning by abiotic factors and biotic interactions. Significant cross-taxon congruence observed across all taxon pairs was linked to their similar responses towards elevation change. Changes in the vegetation composition were closely linked to carabid turnover, with vegetation structure and associated microclimatic conditions proposed as causes of this link. In contrast, moth assemblages appeared to be dominated by generalist species whose turnover was weakly associated with vegetation changes. Overall, abiotic factors exerted a stronger influence on cross-taxon congruence across our study sites than biotic interactions. The weak congruence in turnover observed particularly between plants and moths highlights the importance of multi-taxon approaches based on groupings of taxa with similar turnovers, rather than the use of single surrogate taxa or environmental proxies, in biodiversity assessments. PMID:27032533

  8. The influence of spatial congruency and movement preparation time on saccade curvature in simultaneous and sequential dual-tasks.

    PubMed

    Moehler, Tobias; Fiehler, Katja

    2015-11-01

    Saccade curvature represents a sensitive measure of oculomotor inhibition with saccades curving away from covertly attended locations. Here we investigated whether and how saccade curvature depends on movement preparation time when a perceptual task is performed during or before saccade preparation. Participants performed a dual-task including a visual discrimination task at a cued location and a saccade task to the same location (congruent) or to a different location (incongruent). Additionally, we varied saccade preparation time (time between saccade cue and Go-signal) and the occurrence of the discrimination task (during saccade preparation=simultaneous vs. before saccade preparation=sequential). We found deteriorated perceptual performance in incongruent trials during simultaneous task performance while perceptual performance was unaffected during sequential task performance. Saccade accuracy and precision were deteriorated in incongruent trials during simultaneous and, to a lesser extent, also during sequential task performance. Saccades consistently curved away from covertly attended non-saccade locations. Saccade curvature was unaffected by movement preparation time during simultaneous task performance but decreased and finally vanished with increasing movement preparation time during sequential task performance. Our results indicate that the competing saccade plan to the covertly attended non-saccade location is maintained during simultaneous task performance until the perceptual task is solved while in the sequential condition, in which the discrimination task is solved prior to the saccade task, oculomotor inhibition decays gradually with movement preparation time. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Cross-modal Savings in the Contralateral Eyelid Conditioned Response

    PubMed Central

    Campolattaro, Matthew M.; Buss, Eric W.; Freeman, John H.

    2015-01-01

    The present experiment monitored bilateral eyelid responses during eyeblink conditioning in rats trained with a unilateral unconditioned stimulus (US). Three groups of rats were used to determine if cross-modal savings occurs when the location of the US is switched from one eye to the other. Rats in each group first received paired or unpaired eyeblink conditioning with a conditioned stimulus (tone or light; CS) and a unilateral periorbital electrical stimulation US. All rats were subsequently given paired training, but with the US location (Group 1), CS modality (Group 2), or US location and CS modality (Group 3) changed. Changing the location of the US alone resulted in an immediate transfer of responding in both eyelids (Group 1) in rats that received paired training prior to the transfer session. Rats in groups 2 and 3 that initially received paired training showed facilitated learning to the new CS modality during the transfer sessions, indicating that cross-modal savings occurs whether or not the location of the US is changed. All rats that were initially given unpaired training acquired conditioned eyeblink responses similar to de novo acquisition rate during the transfer sessions. Savings of CR incidence was more robust than savings of CR amplitude when the US switched sides, a finding that has implications for elucidating the neural mechanisms of cross-modal savings. PMID:26501170

  10. Cross-Modality Image Synthesis via Weakly Coupled and Geometry Co-Regularized Joint Dictionary Learning.

    PubMed

    Huang, Yawen; Shao, Ling; Frangi, Alejandro F

    2018-03-01

    Multi-modality medical imaging is increasingly used for the comprehensive assessment of complex diseases, in either diagnostic examinations or as part of medical research trials. Different imaging modalities provide complementary information about living tissues. However, multi-modal examinations are not always possible due to adverse factors such as patient discomfort, increased cost, prolonged scanning time, and scanner unavailability. Additionally, in large imaging studies, incomplete records are not uncommon owing to image artifacts, data corruption or data loss, which compromise the potential of multi-modal acquisitions. In this paper, we propose a weakly coupled and geometry co-regularized joint dictionary learning method to address the problem of cross-modality synthesis while considering the fact that collecting large amounts of training data is often impractical. Our learning stage requires only a few registered multi-modality image pairs as training data. To employ both paired images and a large set of unpaired data, a cross-modality image matching criterion is proposed. Then, we propose a unified model by integrating such a criterion into the joint dictionary learning and the observed common feature space for associating cross-modality data for the purpose of synthesis. Furthermore, two regularization terms are added to construct robust sparse representations. Our experimental results demonstrate the superior performance of the proposed model over state-of-the-art methods.
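
    For orientation, a classic coupled sparse-coding baseline for cross-modality synthesis is sketched below (toy features, scikit-learn, and my own simplifications; this is not the weakly coupled, geometry co-regularized model proposed in the paper): learn one joint dictionary on concatenated paired features, then code unseen modality-A samples with the A-half of the atoms and reconstruct modality B with the B-half.

    ```python
    import numpy as np
    from sklearn.decomposition import DictionaryLearning, SparseCoder

    rng = np.random.default_rng(0)
    A = rng.normal(size=(200, 64))                    # paired training features, modality A
    B = 0.5 * (A @ rng.normal(size=(64, 64)))         # toy modality-B features linked to A

    # Joint dictionary over the concatenated modalities
    joint = DictionaryLearning(n_components=48, alpha=1.0, max_iter=50, random_state=0)
    joint.fit(np.hstack([A, B]))
    D_A, D_B = joint.components_[:, :64], joint.components_[:, 64:]

    # Sparse-code new modality-A samples with D_A, synthesize modality B with D_B
    coder = SparseCoder(dictionary=D_A, transform_algorithm="lasso_lars", transform_alpha=1.0)
    codes = coder.transform(rng.normal(size=(5, 64)))
    B_synthesized = codes @ D_B
    print(B_synthesized.shape)                        # (5, 64)
    ```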

  11. Cross-modal signatures in maternal speech and singing

    PubMed Central

    Trehub, Sandra E.; Plantinga, Judy; Brcic, Jelena; Nowicki, Magda

    2013-01-01

    We explored the possibility of a unique cross-modal signature in maternal speech and singing that enables adults and infants to link unfamiliar speaking or singing voices with subsequently viewed silent videos of the talkers or singers. In Experiment 1, adults listened to 30-s excerpts of speech followed by successively presented 7-s silent video clips, one from the previously heard speaker (different speech content) and the other from a different speaker. They successfully identified the previously heard speaker. In Experiment 2, adults heard comparable excerpts of singing followed by silent video clips from the previously heard singer (different song) and another singer. They failed to identify the previously heard singer. In Experiment 3, the videos of talkers and singers were blurred to obscure mouth movements. Adults successfully identified the talkers and they also identified the singers from videos of different portions of the song previously heard. In Experiment 4, 6- to 8-month-old infants listened to 30-s excerpts of the same maternal speech or singing followed by exposure to the silent videos on alternating trials. They looked longer at the silent videos of previously heard talkers and singers. The findings confirm the individuality of maternal speech and singing performance as well as adults' and infants' ability to discern the unique cross-modal signatures. The cues that enable cross-modal matching of talker and singer identity remain to be determined. PMID:24198805

  12. Early Cross-modal Plasticity in Adults.

    PubMed

    Lo Verde, Luca; Morrone, Maria Concetta; Lunghi, Claudia

    2017-03-01

    It is known that, after a prolonged period of visual deprivation, the adult visual cortex can be recruited for nonvisual processing, reflecting cross-modal plasticity. Here, we investigated whether cross-modal plasticity can occur at short timescales in the typical adult brain by comparing the interaction between vision and touch during binocular rivalry before and after a brief period of monocular deprivation, which strongly alters ocular balance favoring the deprived eye. While viewing dichoptically two gratings of orthogonal orientation, participants were asked to actively explore a haptic grating congruent in orientation to one of the two rivalrous stimuli. We repeated this procedure before and after 150 min of monocular deprivation. We first confirmed that haptic stimulation interacted with vision during rivalry promoting dominance of the congruent visuo-haptic stimulus and that monocular deprivation increased the deprived eye and decreased the nondeprived eye dominance. Interestingly, after deprivation, we found that the effect of touch did not change for the nondeprived eye, whereas it disappeared for the deprived eye, which was potentiated after deprivation. The absence of visuo-haptic interaction for the deprived eye lasted for over 1 hr and was not attributable to a masking induced by the stronger response of the deprived eye as confirmed by a control experiment. Taken together, our results demonstrate that the adult human visual cortex retains a high degree of cross-modal plasticity, which can occur even at very short timescales.

  13. Visual and auditory synchronization deficits among dyslexic readers as compared to non-impaired readers: a cross-correlation algorithm analysis

    PubMed Central

    Sela, Itamar

    2014-01-01

    Visual and auditory temporal processing and crossmodal integration are crucial factors in the word decoding process. The speed of processing (SOP) gap (Asynchrony) between these two modalities, which has been suggested as related to the dyslexia phenomenon, is the focus of the current study. Nineteen dyslexic and 17 non-impaired University adult readers were given stimuli in a reaction time (RT) procedure where participants were asked to identify whether the stimulus type was only visual, only auditory or crossmodally integrated. Accuracy, RT, and Event Related Potential (ERP) measures were obtained for each of the three conditions. An algorithm to measure the contribution of the temporal SOP of each modality to the crossmodal integration in each group of participants was developed. Results obtained using this model for the analysis of the current study data, indicated that in the crossmodal integration condition the presence of the auditory modality at the pre-response time frame (between 170 and 240 ms after stimulus presentation), increased processing speed in the visual modality among the non-impaired readers, but not in the dyslexic group. The differences between the temporal SOP of the modalities among the dyslexics and the non-impaired readers give additional support to the theory that an asynchrony between the visual and auditory modalities is a cause of dyslexia. PMID:24959125
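
    A generic cross-correlation sketch of the kind the title alludes to (made-up signals; the study's actual algorithm estimated each modality's contribution to crossmodal reaction times and ERPs, which is more involved than a simple lag estimate):

    ```python
    import numpy as np
    from scipy.signal import correlate, correlation_lags

    def peak_lag_ms(x, y, fs=500.0):
        """Lag (ms) at which two equally sampled signals are maximally correlated.

        Sign convention follows scipy.signal.correlation_lags.
        """
        x = (x - x.mean()) / x.std()
        y = (y - y.mean()) / y.std()
        cc = correlate(x, y, mode="full")
        lags = correlation_lags(len(x), len(y), mode="full")
        return 1000.0 * lags[np.argmax(cc)] / fs

    rng = np.random.default_rng(0)
    visual = rng.normal(size=600)
    auditory = np.roll(visual, 35) + 0.5 * rng.normal(size=600)   # shifted, noisy copy
    print(peak_lag_ms(visual, auditory))                          # magnitude ~ 70 ms
    ```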

  14. Cross-modal signatures in maternal speech and singing.

    PubMed

    Trehub, Sandra E; Plantinga, Judy; Brcic, Jelena; Nowicki, Magda

    2013-01-01

    We explored the possibility of a unique cross-modal signature in maternal speech and singing that enables adults and infants to link unfamiliar speaking or singing voices with subsequently viewed silent videos of the talkers or singers. In Experiment 1, adults listened to 30-s excerpts of speech followed by successively presented 7-s silent video clips, one from the previously heard speaker (different speech content) and the other from a different speaker. They successfully identified the previously heard speaker. In Experiment 2, adults heard comparable excerpts of singing followed by silent video clips from the previously heard singer (different song) and another singer. They failed to identify the previously heard singer. In Experiment 3, the videos of talkers and singers were blurred to obscure mouth movements. Adults successfully identified the talkers and they also identified the singers from videos of different portions of the song previously heard. In Experiment 4, 6- to 8-month-old infants listened to 30-s excerpts of the same maternal speech or singing followed by exposure to the silent videos on alternating trials. They looked longer at the silent videos of previously heard talkers and singers. The findings confirm the individuality of maternal speech and singing performance as well as adults' and infants' ability to discern the unique cross-modal signatures. The cues that enable cross-modal matching of talker and singer identity remain to be determined.

  15. Congruence Couple Therapy for Pathological Gambling

    ERIC Educational Resources Information Center

    Lee, Bonnie K.

    2009-01-01

    Couple therapy models for pathological gambling are limited. Congruence Couple Therapy is an integrative, humanistic, systems model that addresses intrapsychic, interpersonal, intergenerational, and universal-spiritual disconnections of pathological gamblers and their spouses to shift towards congruence. Specifically, CCT's theoretical…

  16. Organizational goal congruence and job attitudes revisited.

    DOT National Transportation Integrated Search

    1992-02-01

    Vancouver and Schmitt (1991) operationalized person-organization fit in terms of goal congruence and reported that goal congruence scores were positively related to favorable job attitudes. The purpose of the present study was to replicate and extend...

  17. Chiropractic curriculum mapping and congruence of the evidence for workplace interventions in work-related neck pain

    PubMed Central

    Frutiger, Martin; Tuchin, Peter Jeffery

    2017-01-01

    Objective: The purpose of this study was to provide a best-synthesis summary of the literature for effective workplace health promotion interventions (WHPI) for work-related mechanical neck pain (MNP) and to determine the congruence between knowledge of WHPI for work-related MNP and coverage of MNP in the chiropractic postgraduate program at Macquarie University. Methods: A literature review was undertaken to determine effective WHPI for work-related MNP. We searched Cochrane Library, PubMed, EMBASE, CINAHL, and PEDro (from 1991 to 2016) for systematic reviews and meta-analyses. The PRISMA (2009) 27-item checklist was used to critically appraise included articles. Lectures, tutorials, and assessment tasks within the chiropractic postgraduate program were mapped to the literature review findings and analyzed. Results: There was moderate-quality evidence for multidimensional WHPI, including aspects of mental and physical functioning, activity performance and modifications, and environmental modifications, to reduce MNP and disability in workers, particularly in the long term. Education on coverage of MNP and effective WHPI for MNP was inadequately covered although congruent with synthesis of current literature. Education on body functions and structures and personal factors were the most commonly covered components. Conclusion: Multidimensional WHPI, focusing on physical, mental, and environmental modifications, appear to reduce self-reported MNP primarily in office workers. There is adequate congruence between the chiropractic postgraduate program at Macquarie University and the published literature on some WHPI. However, there is inadequate coverage on aspects of MNP and effective WHPI for MNP, particularly those focusing on activity and participation and environmental factors. PMID:28742974

  18. Semantic congruency and the (reversed) Colavita effect in children and adults.

    PubMed

    Wille, Claudia; Ebersbach, Mirjam

    2016-01-01

    When presented with auditory, visual, or bimodal audiovisual stimuli in a discrimination task, adults tend to ignore the auditory component in bimodal stimuli and respond to the visual component only (i.e., Colavita visual dominance effect). The same is true for older children, whereas young children are dominated by the auditory component of bimodal audiovisual stimuli. This suggests a change of sensory dominance during childhood. The aim of the current study was to investigate, in three experimental conditions, whether children and adults show sensory dominance when presented with complex semantic stimuli and whether this dominance can be modulated by stimulus characteristics such as semantic (in)congruency, frequency of bimodal trials, and color information. Semantic (in)congruency did not affect the magnitude of the auditory dominance effect in 6-year-olds or the visual dominance effect in adults, but it was a modulating factor of the visual dominance in 9-year-olds (Conditions 1 and 2). Furthermore, the absence of color information (Condition 3) did not affect auditory dominance in 6-year-olds and hardly affected visual dominance in adults, whereas the visual dominance in 9-year-olds disappeared. Our results suggest that (a) sensory dominance in children and adults is not restricted to simple lights and sounds, as used in previous research, but can be extended to semantically meaningful stimuli and that (b) sensory dominance is more robust in 6-year-olds and adults than in 9-year-olds, implying a transitional stage around this age. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Converging evidence for control of color-word Stroop interference at the item level.

    PubMed

    Bugg, Julie M; Hutchison, Keith A

    2013-04-01

    Prior studies have shown that cognitive control is implemented at the list and context levels in the color-word Stroop task. At first blush, the finding that Stroop interference is reduced for mostly incongruent items as compared with mostly congruent items (i.e., the item-specific proportion congruence [ISPC] effect) appears to provide evidence for yet a third level of control, which modulates word reading at the item level. However, evidence to date favors the view that ISPC effects reflect the rapid prediction of high-contingency responses and not item-specific control. In Experiment 1, we first show that an ISPC effect is obtained when the relevant dimension (i.e., color) signals proportion congruency, a problematic pattern for theories based on differential response contingencies. In Experiment 2, we replicate and extend this pattern by showing that item-specific control settings transfer to new stimuli, ruling out alternative frequency-based accounts. In Experiment 3, we revert to the traditional design in which the irrelevant dimension (i.e., word) signals proportion congruency. Evidence for item-specific control, including transfer of the ISPC effect to new stimuli, is apparent when 4-item sets are employed but not when 2-item sets are employed. We attribute this pattern to the absence of high-contingency responses on incongruent trials in the 4-item set. These novel findings provide converging evidence for reactive control of color-word Stroop interference at the item level, reveal theoretically important factors that modulate reliance on item-specific control versus contingency learning, and suggest an update to the item-specific control account (Bugg, Jacoby, & Chanani, 2011).

  20. Chiropractic curriculum mapping and congruence of the evidence for workplace interventions in work-related neck pain.

    PubMed

    Frutiger, Martin; Tuchin, Peter Jeffery

    2017-10-01

    The purpose of this study was to provide a best-synthesis summary of the literature for effective workplace health promotion interventions (WHPI) for work-related mechanical neck pain (MNP) and to determine the congruence between knowledge of WHPI for work-related MNP and coverage of MNP in the chiropractic postgraduate program at Macquarie University. A literature review was undertaken to determine effective WHPI for work-related MNP. We searched Cochrane Library, PubMed, EMBASE, CINAHL, and PEDro (from 1991 to 2016) for systematic reviews and meta-analyses. The PRISMA (2009) 27-item checklist was used to critically appraise included articles. Lectures, tutorials, and assessment tasks within the chiropractic postgraduate program were mapped to the literature review findings and analyzed. There was moderate-quality evidence for multidimensional WHPI, including aspects of mental and physical functioning, activity performance and modifications, and environmental modifications, to reduce MNP and disability in workers, particularly in the long term. Education on coverage of MNP and effective WHPI for MNP was inadequately covered although congruent with synthesis of current literature. Education on body functions and structures and personal factors were the most commonly covered components. Multidimensional WHPI, focusing on physical, mental, and environmental modifications, appear to reduce self-reported MNP primarily in office workers. There is adequate congruence between the chiropractic postgraduate program at Macquarie University and the published literature on some WHPI. However, there is inadequate coverage on aspects of MNP and effective WHPI for MNP, particularly those focusing on activity and participation and environmental factors.

  1. Do men and their wives see it the same way? Congruence within couples during the first year of prostate cancer.

    PubMed

    Ezer, Hélène; Chachamovich, Juliana L Rigol; Chachamovich, Eduardo

    2011-02-01

    The purpose of this study was to determine the psychosocial adjustment congruence within couples through the first year of prostate cancer experience, and to explore the personal variables that could predict congruence within couples. Eighty-one couples were interviewed at the time of diagnosis; 69 participated at 3 months and 61 at 12 months. Paired t-tests were used to examine dyadic congruence on seven domains of psychosocial adjustment. Repeated Measures ANOVAs were used to examine the congruence over time. Multiple regressions were used to determine whether mood disturbance, urinary and sexual bother, sense of coherence, and social support were predictors of congruence within couples on each of the adjustment domains. At time 1, couples had incongruent perceptions in 3 of 7 domains: health care, psychological, and social adjustment. Three months later, health care, psychological, and sexual domains showed incongruence within couples. One year after the diagnosis, there were incongruent perceptions only in sexual and psychological domains. There was little variation of the congruence within couples over time. Husbands and wives' mood disturbance, urinary and sexual bother, sense of coherence, and social support accounted for 25-63% of variance in couple congruence in the adjustment domains in the study periods. The findings suggested that there is couple congruence. Domains in which incongruence was observed are important targets for clinical interventions. Greater attention needs to be directed to assisting couples to recognize the differences between their perceptions, especially the ones related to the sexual symptoms and psychological distress. Copyright © 2010 John Wiley & Sons, Ltd.

  2. Building Intuitive Arguments for the Triangle Congruence Conditions

    ERIC Educational Resources Information Center

    Piatek-Jimenez, Katrina

    2008-01-01

    The triangle congruence conditions are a central focus to nearly any course in Euclidean geometry. The author presents a hands-on activity that uses straws and pipe cleaners to explore and justify the triangle congruence conditions. (Contains 4 figures.)

  3. Leader-follower value congruence in social responsibility and ethical satisfaction: a polynomial regression analysis.

    PubMed

    Kang, Seung-Wan; Byun, Gukdo; Park, Hun-Joon

    2014-12-01

    This paper presents empirical research into the relationship between leader-follower value congruence in social responsibility and the level of ethical satisfaction for employees in the workplace. 163 dyads were analyzed, each consisting of a team leader and an employee working at a large manufacturing company in South Korea. Following current methodological recommendations for congruence research, polynomial regression and response surface modeling methodologies were used to determine the effects of value congruence. Results indicate that leader-follower value congruence in social responsibility was positively related to the ethical satisfaction of employees. Furthermore, employees' ethical satisfaction was stronger when aligned with a leader with high social responsibility. The theoretical and practical implications are discussed.
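
    As an illustration of the congruence methodology mentioned above (polynomial regression with response-surface analysis), a minimal sketch on simulated data; the variable names and effect sizes are invented, and this is the generic Edwards-style quadratic model rather than necessarily the authors' exact specification:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    leader = rng.normal(size=163)     # leader social-responsibility value (X)
    follower = rng.normal(size=163)   # follower social-responsibility value (Y)
    satisfaction = (0.4 * leader + 0.3 * follower
                    - 0.2 * (leader - follower) ** 2
                    + rng.normal(scale=0.5, size=163))

    design = sm.add_constant(np.column_stack(
        [leader, follower, leader**2, leader * follower, follower**2]))
    fit = sm.OLS(satisfaction, design).fit()
    b0, b1, b2, b3, b4, b5 = fit.params

    # Response-surface slopes/curvatures along the congruence (X = Y) and
    # incongruence (X = -Y) lines
    a1, a2 = b1 + b2, b3 + b4 + b5
    a3, a4 = b1 - b2, b3 - b4 + b5
    print(a1, a2, a3, a4)
    ```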

  4. The design of electronic map displays

    NASA Technical Reports Server (NTRS)

    Aretz, Anthony J.

    1991-01-01

    This paper presents a cognitive analysis of a pilot's navigation task and describes an experiment comparing a new map display that employs the principle of visual momentum with the two traditional approaches, track-up and north-up. The data show that the advantage of a track-up alignment is its congruence with the ego-centered forward view; however, the inconsistency of the rotating display hinders development of a cognitive map. The stability of a north-up alignment aids the acquisition of a cognitive map, but there is a cost associated with the mental rotation of the display to a track-up alignment for tasks involving the ego-centered forward view. The data also show that the visual momentum design captures the benefits and reduces the costs associated with the two traditional approaches.

  5. Measuring Stratigraphic Congruence Across Trees, Higher Taxa, and Time.

    PubMed

    O'Connor, Anne; Wills, Matthew A

    2016-09-01

    The congruence between the order of cladistic branching and the first appearance dates of fossil lineages can be quantified using a variety of indices. Good matching is a prerequisite for the accurate time calibration of trees, while the distribution of congruence indices across large samples of cladograms has underpinned claims about temporal and taxonomic patterns of completeness in the fossil record. The most widely used stratigraphic congruence indices are the stratigraphic consistency index (SCI), the modified Manhattan stratigraphic measure (MSM*), and the gap excess ratio (GER) (plus its derivatives: the topological GER and the modified GER). Many factors are believed to variously bias these indices, with several empirical and simulation studies addressing some subset of the putative interactions. This study combines both approaches to quantify the effects (on all five indices) of eight variables reasoned to constrain the distribution of possible values (the number of taxa, tree balance, tree resolution, range of first occurrence (FO) dates, center of gravity of FO dates, the variability of FO dates, percentage of extant taxa, and percentage of taxa with no fossil record). Our empirical data set comprised 647 published animal and plant cladograms spanning the entire Phanerozoic, and for these data we also modeled the effects of mean age of FOs (as a proxy for clade age), the taxonomic rank of the clade, and the higher taxonomic group to which it belonged. The center of gravity of FO dates had not been investigated hitherto, and this was found to correlate most strongly with some measures of stratigraphic congruence in our empirical study (top-heavy clades had better congruence). The modified GER was the index least susceptible to bias. We found significant differences across higher taxa for all indices; arthropods had lower congruence and tetrapods higher congruence. Stratigraphic congruence, however measured, also varied throughout the Phanerozoic, reflecting the taxonomic composition of our sample. Notably, periods containing a high proportion of arthropods had poorer congruence overall than those with higher proportions of tetrapods. [Fossil calibration; gap excess ratio; Manhattan stratigraphic metric; molecular clocks; stratigraphic congruence.]. © The Author(s) 2016. Published by Oxford University Press on behalf of the Society of Systematic Biologists.
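
    For orientation, the two most frequently cited of these indices are usually written as follows (standard formulations from the stratigraphic-congruence literature, stated here for reference rather than reproduced from the abstract; C is the number of stratigraphically consistent internal nodes, N the number of internal nodes assessed, MIG the minimum implied gap of the tree, and G_min / G_max the smallest and largest gap sums possible for any topology given the same first-occurrence dates):

        % Stratigraphic consistency index: proportion of consistent internal nodes
        \mathrm{SCI} = \frac{C}{N}

        % Gap excess ratio: scaled position of the observed gap sum between the
        % best and worst possible gap sums for the given first-occurrence dates
        \mathrm{GER} = 1 - \frac{\mathrm{MIG} - G_{\min}}{G_{\max} - G_{\min}}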

  6. Procrustes Matching by Congruence Coefficients

    ERIC Educational Resources Information Center

    Korth, Bruce; Tucker, L. R.

    1976-01-01

    Matching by Procrustes methods involves the transformation of one matrix to match with another. A special least squares criterion, the congruence coefficient, has advantages as a criterion for some factor analytic interpretations. A Procrustes method maximizing the congruence coefficient is given. (Author/JKS)
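
    The congruence coefficient referred to here is Tucker's coefficient of congruence; for two factor-loading vectors x and y it is conventionally defined as follows (standard definition, stated for reference rather than quoted from the abstract):

        \phi(x, y) = \frac{\sum_i x_i y_i}{\sqrt{\left(\sum_i x_i^2\right)\left(\sum_i y_i^2\right)}}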

  7. Predictors of symptom congruence among patients with acute myocardial infarction.

    PubMed

    Fox-Wasylyshyn, Susan

    2012-01-01

    The extent of congruence between one's symptom experience and preconceived ideas about the nature of myocardial infarction symptoms (i.e., symptom congruence) can influence when acute myocardial infarction (AMI) patients seek medical care. Lengthy delays impede timely receipt of medical interventions and result in greater morbidity and mortality. However, little is known about the factors that contribute to symptom congruence. Hence, the purpose of this study was to examine how AMI patients' symptom experiences and patients' demographic and clinical characteristics contribute to symptom congruence. Secondary data analyses were performed on interview data that were collected from 135 AMI patients. Hierarchical multiple regression analyses were used to examine how specific symptom attributes and demographic and clinical characteristics contribute to symptom congruence. Chest pain/discomfort and other symptom variables (type and location) were included in step 1 of the analysis, whereas symptom severity and demographic and clinical factors were included in step 2. In a second analysis, quality descriptors of discomfort replaced chest pain/discomfort in step 1. Although chest pain/discomfort and quality descriptors of heaviness and cutting were significant in step 1 of their respective analyses, all became nonsignificant when the variables in step 2 were added to the analyses. Severe discomfort (β = .29, P < .001), history of AMI (β = .21, P < .01), and male sex (β = .17, P < .05) were significant predictors of symptom congruence in the first analysis. Only severe discomfort (β = .23, P < .01) and history of AMI (β = .17, P < .05) were predictive of symptom congruence in the second analysis. Although the location and quality of discomfort were important components of symptom congruence, symptom severity outweighed their importance. Nonsevere symptoms were less likely to meet patients' expectations of AMI symptoms. Those without a previous history of AMI also experienced lower levels of symptom congruence. Implications pertaining to these findings are discussed.
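
    The hierarchical (blockwise) regression strategy described above enters the symptom variables first and the severity, demographic, and clinical variables second, then evaluates what the second block adds in explained variance. A minimal sketch with simulated placeholder data (all column names and values are hypothetical, not the study's variables):

        # Hierarchical (blockwise) multiple regression: compare R^2 of the step-1
        # model with the step-2 model that adds severity and demographic/clinical
        # predictors. Data are simulated placeholders.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 135
        df = pd.DataFrame({
            "chest_pain": rng.integers(0, 2, n),
            "severity":   rng.normal(size=n),
            "prior_ami":  rng.integers(0, 2, n),
            "male":       rng.integers(0, 2, n),
        })
        df["congruence"] = (0.3 * df.severity + 0.2 * df.prior_ami
                            + 0.15 * df.male + rng.normal(scale=1.0, size=n))

        step1 = smf.ols("congruence ~ chest_pain", data=df).fit()
        step2 = smf.ols("congruence ~ chest_pain + severity + prior_ami + male", data=df).fit()
        print(f"R2 step 1 = {step1.rsquared:.3f}, R2 step 2 = {step2.rsquared:.3f}, "
              f"delta R2 = {step2.rsquared - step1.rsquared:.3f}")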

  8. Representations of temporal information in short-term memory: Are they modality-specific?

    PubMed

    Bratzke, Daniel; Quinn, Katrina R; Ulrich, Rolf; Bausenhart, Karin M

    2016-10-01

    Rattat and Picard (2012) reported that the coding of temporal information in short-term memory is modality-specific, that is, temporal information received via the visual (auditory) modality is stored as a visual (auditory) code. This conclusion was supported by modality-specific interference effects on visual and auditory duration discrimination, which were induced by secondary tasks (visual tracking or articulatory suppression), presented during a retention interval. The present study assessed the stability of these modality-specific interference effects. Our study did not replicate the selective interference pattern but rather indicated that articulatory suppression impairs short-term memory not only for auditory but also for visual durations. This result pattern supports a crossmodal or an abstract view of temporal encoding. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Object representation in the bottlenose dolphin (Tursiops truncatus): integration of visual and echoic information.

    PubMed

    Harley, H E; Roitblat, H L; Nachtigall, P E

    1996-04-01

    A dolphin performed a 3-alternative matching-to-sample task in different modality conditions (visual/echoic, both vision and echolocation; visual, vision only; echoic, echolocation only). In Experiment 1, training occurred in the dual-modality (visual/echoic) condition. Choice accuracy in tests of all conditions was above chance without further training. In Experiment 2, unfamiliar objects with complementary similarity relations in vision and echolocation were presented in single-modality conditions until accuracy was about 70%. When tested in the visual/echoic condition, accuracy immediately rose (95%), suggesting integration across modalities. In Experiment 3, conditions varied between presentation of sample and alternatives. The dolphin successfully matched familiar objects in the cross-modal conditions. These data suggest that the dolphin has an object-based representational system.

  10. Microcontroller based fibre-optic visual presentation system for multisensory neuroimaging.

    PubMed

    Kurniawan, Veldri; Klemen, Jane; Chambers, Christopher D

    2011-10-30

    Presenting visual stimuli in physical 3D space during fMRI experiments carries significant technical challenges. Certain types of multisensory visuotactile experiments and visuomotor tasks require presentation of visual stimuli in peripersonal space, which cannot be accommodated by ordinary projection screens or binocular goggles. However, light points produced by a group of LEDs can be transmitted through fibre-optic cables and positioned anywhere inside the MRI scanner. Here we describe the design and implementation of a microcontroller-based programmable digital device for controlling fibre-optically transmitted LED lights from a PC. The main feature of this device is the ability to independently control the colour, brightness, and timing of each LED. Moreover, the device was designed in a modular and extensible way, which enables easy adaptation for various experimental paradigms. The device was tested and validated in three fMRI experiments involving basic visual perception, a simple colour discrimination task, and a blocked multisensory visuo-tactile task. The results revealed significant lateralized activation in occipital cortex of all participants, a reliable response in ventral occipital areas to colour stimuli elicited by the device, and strong activations in multisensory brain regions in the multisensory task. Overall, these findings confirm the suitability of this device for presenting complex fibre-optic visual and cross-modal stimuli inside the scanner. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Distraction by emotional sounds: Disentangling arousal benefits and orienting costs.

    PubMed

    Max, Caroline; Widmann, Andreas; Kotz, Sonja A; Schröger, Erich; Wetzel, Nicole

    2015-08-01

    Unexpectedly occurring task-irrelevant stimuli have been shown to impair performance. They capture attention away from the main task, leaving fewer resources for target processing. However, the actual distraction effect depends on several variables; for example, only target-informative distractors have been shown to cause costs of attentional orienting. Furthermore, recent studies have shown that high-arousing emotional distractors, as compared with low-arousing neutral distractors, can improve performance by increasing alertness. We aimed to separate costs of attentional orienting and benefits of arousal by presenting negative and neutral environmental sounds (novels) as oddballs in an auditory-visual distraction paradigm. Participants categorized pictures while task-irrelevant sounds preceded visual targets in two conditions: (a) informative sounds reliably signaled onset and occurrence of visual targets, and (b) noninformative sounds occurred unrelated to visual targets. Results confirmed that only informative novels yield distraction. Importantly, irrespective of sounds' informational value, participants responded faster in trials with high-arousing negative as compared with moderately arousing neutral novels. That is, costs related to attentional orienting are modulated by information, whereas benefits related to emotional arousal are independent of a sound's informational value. This favors a nonspecific facilitating cross-modal influence of emotional arousal on visual task performance and suggests that behavioral distraction by noninformative novels is controlled after their motivational significance has been determined. (c) 2015 APA, all rights reserved.

  12. Environmental Congruence, Group Importance, and Well-Being among Paratroopers.

    ERIC Educational Resources Information Center

    Meir, Elchanan I.; Segal-Halevi, Anat

    2001-01-01

    Israeli paratroopers (n=267) completed measures of group importance, role satisfaction, vocational interests, and somatic complaints. Group importance correlated with satisfaction and somatic complaints; congruence with environment did not. Congruence interacted with group importance to enhance satisfaction. (Contains 29 references.) (SK)

  13. Enhancing Congruence between Implicit Motives and Explicit Goal Commitments: Results of a Randomized Controlled Trial.

    PubMed

    Roch, Ramona M; Rösch, Andreas G; Schultheiss, Oliver C

    2017-01-01

    Objective: Theory and research suggest that the pursuit of personal goals that do not fit a person's affect-based implicit motives results in impaired emotional well-being, including increased symptoms of depression. The aim of this study was to evaluate an intervention designed to enhance motive-goal congruence and study its impact on well-being. Method: Seventy-four German students (mean age = 22.91, SD = 3.68; 64.9% female) without current psychopathology, randomly allocated to three groups: motivational feedback (FB; n = 25; participants learned about the fit between their implicit motives and explicit goals), FB + congruence-enhancement training (CET; n = 22; participants also engaged in exercises to increase the fit between their implicit motives and goals), and a no-intervention control group (n = 27), were administered measures of implicit motives, personal goal commitments, happiness, depressive symptoms, and life satisfaction 3 weeks before (T1) and 6 weeks after (T2) treatment. Results: On two types of congruence measures derived from motive and goal assessments, treated participants showed increases in agentic (power and achievement) congruence, with improvements being most consistent in the FB+CET group. Treated participants also showed a trend-level depressive symptom reduction, but no changes on other well-being measures. Although increases in overall and agentic motivational congruence were associated with increases in affective well-being, treatment-based reduction of depressive symptoms was not mediated by treatment-based agentic congruence changes. Conclusion: These findings document that motivational congruence can be effectively enhanced, that changes in motivational congruence are associated with changes in affective well-being, and they suggest that individuals' implicit motives should be considered when personal goals are discussed in the therapeutic process.

  14. Enhancing Congruence between Implicit Motives and Explicit Goal Commitments: Results of a Randomized Controlled Trial

    PubMed Central

    Roch, Ramona M.; Rösch, Andreas G.; Schultheiss, Oliver C.

    2017-01-01

    Objective: Theory and research suggest that the pursuit of personal goals that do not fit a person's affect-based implicit motives results in impaired emotional well-being, including increased symptoms of depression. The aim of this study was to evaluate an intervention designed to enhance motive-goal congruence and study its impact on well-being. Method: Seventy-four German students (mean age = 22.91, SD = 3.68; 64.9% female) without current psychopathology, randomly allocated to three groups: motivational feedback (FB; n = 25; participants learned about the fit between their implicit motives and explicit goals), FB + congruence-enhancement training (CET; n = 22; participants also engaged in exercises to increase the fit between their implicit motives and goals), and a no-intervention control group (n = 27), were administered measures of implicit motives, personal goal commitments, happiness, depressive symptoms, and life satisfaction 3 weeks before (T1) and 6 weeks after (T2) treatment. Results: On two types of congruence measures derived from motive and goal assessments, treated participants showed increases in agentic (power and achievement) congruence, with improvements being most consistent in the FB+CET group. Treated participants also showed a trend-level depressive symptom reduction, but no changes on other well-being measures. Although increases in overall and agentic motivational congruence were associated with increases in affective well-being, treatment-based reduction of depressive symptoms was not mediated by treatment-based agentic congruence changes. Conclusion: These findings document that motivational congruence can be effectively enhanced, that changes in motivational congruence are associated with changes in affective well-being, and they suggest that individuals' implicit motives should be considered when personal goals are discussed in the therapeutic process. PMID:28955267

  15. On the role of attention for the processing of emotions in speech: sex differences revisited.

    PubMed

    Schirmer, Annett; Kotz, Sonja A; Friederici, Angela D

    2005-08-01

    In a previous cross-modal priming study [A. Schirmer, A.S. Kotz, A.D. Friederici, Sex differentiates the role of emotional prosody during word processing, Cogn. Brain Res. 14 (2002) 228-233.], we found that women integrated emotional prosody and word valence earlier than men. Both sexes showed a smaller N400 in the event-related potential to emotional words when these words were preceded by a sentence with congruous compared to incongruous emotional prosody. However, women showed this effect with a 200-ms interval between prime sentence and target word whereas men showed the effect with a 750-ms interval. The present study was designed to determine whether these sex differences prevail when attention is directed towards the emotional content of prosody and word meaning. To this end, we presented the same prime sentences and target words as in our previous study. Sentences were spoken with happy or sad prosody and followed by a congruous or incongruous emotional word or pseudoword. The interval between sentence offset and target onset was 200 ms. In addition to performing a lexical decision, participants were asked to decide whether or not a word matched the emotional prosody of the preceding sentence. The combined lexical and congruence judgment failed to reveal differences in emotional-prosodic priming between men and women. Both sexes showed smaller N400 amplitudes to emotionally congruent compared to incongruent words. This suggests that the presence of sex differences in emotional-prosodic priming depends on whether or not participants are instructed to take emotional prosody into account.

  16. False recollections and the congruence of suggested information.

    PubMed

    Pérez-Mata, Nieves; Diges, Margarita

    2007-10-01

    In two experiments, congruence of postevent information was manipulated in order to explore its role in the misinformation effect. Congruence of a detail was empirically defined as its compatibility (or match) with a concrete event. Based on this idea, it was predicted that a congruent suggested detail would be more easily accepted than an incongruent one. In Experiments 1 and 2, two factors (congruence and truth value) were manipulated within subjects, and a two-alternative forced-choice recognition test was used, followed by phenomenological judgements. Furthermore, in the second experiment, participants were asked to describe four critical items (two seen and two suggested details) to explore differences and similarities between real and unreal memories. Both experiments clearly showed that the congruence of false information caused a robust misinformation effect, so that congruent false information was accepted much more readily than incongruent false information. Furthermore, congruence increased the descriptive and phenomenological similarities between perceived and suggested memories, thus contributing to the misleading effect.

  17. Development and Examination of a Family Triadic Measure to Examine Quality of Life Family Congruence in Nursing Home Residents and Two Family Members.

    PubMed

    Aalgaard Kelly, Gina

    2015-01-01

    Objective: The overall purpose of this study was to propose and test a conceptual model and apply family analysis methods to understand quality of life family congruence in the nursing home setting. Method: Secondary data for this study were from a larger study, titled Measurement, Indicators and Improvement of the Quality of Life (QOL) in Nursing Homes. Research literature, family systems theory, and human ecological assumptions informed the conceptual model empirically testing quality of life family congruence. Results: The study results supported a model examining nursing home residents and two family members on quality of life family congruence. Specifically, family intergenerational dynamic factors, resident personal and social-psychological factors, and nursing home family input factors were examined to identify differences in quality of life family congruence among triad families. Discussion: Formal family involvement and resident cognitive functioning were found to be the two most influential factors in quality of life family congruence (QOLFC).

  18. Development and Examination of a Family Triadic Measure to Examine Quality of Life Family Congruence in Nursing Home Residents and Two Family Members

    PubMed Central

    Aalgaard Kelly, Gina

    2015-01-01

    Objective: The overall purpose of this study was to propose and test a conceptual model and apply family analysis methods to understand quality of life family congruence in the nursing home setting. Method: Secondary data for this study were from a larger study, titled Measurement, Indicators and Improvement of the Quality of Life (QOL) in Nursing Homes. Research literature, family systems theory, and human ecological assumptions informed the conceptual model empirically testing quality of life family congruence. Results: The study results supported a model examining nursing home residents and two family members on quality of life family congruence. Specifically, family intergenerational dynamic factors, resident personal and social-psychological factors, and nursing home family input factors were examined to identify differences in quality of life family congruence among triad families. Discussion: Formal family involvement and resident cognitive functioning were found to be the two most influential factors in quality of life family congruence (QOLFC). PMID:28138474

  19. The Parallel Episodic Processing (PEP) model: dissociating contingency and conflict adaptation in the item-specific proportion congruent paradigm.

    PubMed

    Schmidt, James R

    2013-01-01

    The present work introduces a computational model, the Parallel Episodic Processing (PEP) model, which demonstrates that contingency learning achieved via simple storage and retrieval of episodic memories can explain the item-specific proportion congruency (ISPC) effect in the colour-word Stroop paradigm. The current work also presents a new experimental procedure to more directly dissociate contingency biases from conflict adaptation (i.e., proportion congruency). This was done with three different types of incongruent words that allow a comparison of (a) high versus low contingency while keeping proportion congruency constant, and (b) high versus low proportion congruency while keeping contingency constant. Results demonstrated a significant contingency effect, but no effect of proportion congruence. It was further shown that the proportion congruency associated with the colour does not matter, either. Thus, the results quite directly demonstrate that ISPC effects are not due to conflict adaptation, but instead to contingency learning biases. Copyright © 2012 Elsevier B.V. All rights reserved.
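
    The storage-and-retrieval idea invoked above can be conveyed with a deliberately simplified toy sketch: every trial episode is stored, and retrieval of past episodes for the current word biases responding toward the high-contingency response. This is an illustration of the general contingency-learning mechanism only, not the published PEP implementation; all stimuli and probabilities are made up:

        # Toy illustration of contingency learning via episodic storage and retrieval.
        # High-contingency word-response pairings gain more retrieval support,
        # independent of congruency proportion. Purely illustrative, not the PEP model.
        from collections import defaultdict
        import random

        episodes = defaultdict(list)   # word -> list of responses experienced with it

        def encode(word, response):
            episodes[word].append(response)

        def retrieval_support(word, response):
            """Fraction of stored episodes for this word that point to this response."""
            past = episodes[word]
            return past.count(response) / len(past) if past else 0.0

        random.seed(0)
        # Train: the word "blue" appears mostly in red ink (high contingency with "red").
        for _ in range(100):
            encode("blue", "red" if random.random() < 0.8 else "green")

        print(retrieval_support("blue", "red"))    # high: retrieval biases the likely response
        print(retrieval_support("blue", "green"))  # low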

  20. Age-equivalent top-down modulation during cross-modal selective attention.

    PubMed

    Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam

    2014-12-01

    Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.
