Sample records for ubiquitous crossmodal stochastic

  1. Visual adaptation enhances action sound discrimination.

    PubMed

    Barraclough, Nick E; Page, Steve A; Keefe, Bruce D

    2017-01-01

    Prolonged exposure, or adaptation, to a stimulus in one modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in one modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between two subsequently presented hand action sounds. Discrimination was most enhanced when the visual action "matched" the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.

  2. Chaotic Stochasticity: A Ubiquitous Source of Unpredictability in Epidemics

    NASA Astrophysics Data System (ADS)

    Rand, D. A.; Wilson, H. B.

    1991-11-01

    We address the question of whether or not childhood epidemics such as measles and chickenpox are chaotic, and argue that the best explanation of the observed unpredictability is that it is a manifestation of what we call chaotic stochasticity. Such chaos is driven and made permanent by the fluctuations from the mean field encountered in epidemics, or by extrinsic stochastic noise, and is dependent upon the existence of chaotic repellors in the mean field dynamics. Its existence is also a consequence of the near extinctions in the epidemic. For such systems, chaotic stochasticity is likely to be far more ubiquitous than the presence of deterministic chaotic attractors. It is likely to be a common phenomenon in biological dynamics.

  3. Nonlinear Stochastic Markov Processes and Modeling Uncertainty in Populations

    DTIC Science & Technology

    2011-07-06

    219–232. [26] I. Karatzas and S.E. Shreve, Brownian Motion and Stochastic Calculus, Second Edition, Springer, New York, 1991. [27] F. Klebaner...ubiquitous in mathematics and physics (e.g., particle transport, filtering), biology (population models), finance (e.g., Black-Scholes equations) among other

  4. Geographic variation in density-dependent dynamics impacts the synchronizing effect of dispersal and regional stochasticity

    Treesearch

    Andrew M. Liebhold; Derek M. Johnson; Ottar N. Bjørnstad

    2006-01-01

    Explanations for the ubiquitous presence of spatially synchronous population dynamics have assumed that density-dependent processes governing the dynamics of local populations are identical among disjunct populations, and low levels of dispersal or small amounts of regionalized stochasticity ("Moran effect") can act to synchronize populations. In this study...

  5. Fock space, symbolic algebra, and analytical solutions for small stochastic systems.

    PubMed

    Santos, Fernando A N; Gadêlha, Hermes; Gaffney, Eamonn A

    2015-12-01

    Randomness is ubiquitous in nature. From single-molecule biochemical reactions to macroscale biological systems, stochasticity permeates individual interactions and often regulates emergent properties of the system. While such systems are regularly studied from a modeling viewpoint using stochastic simulation algorithms, numerous potential analytical tools can be inherited from statistical and quantum physics, replacing randomness due to quantum fluctuations with low-copy-number stochasticity. Nevertheless, classical studies remained limited to the abstract level, demonstrating a more general applicability and equivalence between systems in physics and biology rather than exploiting the physics tools to study biological systems. Here the Fock space representation, used in quantum mechanics, is combined with the symbolic algebra of creation and annihilation operators to consider explicit solutions for the chemical master equations describing small, well-mixed, biochemical, or biological systems. This is illustrated with an exact solution for a Michaelis-Menten single enzyme interacting with limited substrate, including a consideration of very short time scales, which emphasizes when stiffness is present even for small copy numbers. Furthermore, we present a general matrix representation for Michaelis-Menten kinetics with an arbitrary number of enzymes and substrates that, following diagonalization, leads to the solution of this ubiquitous, nonlinear enzyme kinetics problem. For this, a flexible symbolic Maple code is provided, demonstrating the prospective advantages of this framework compared to stochastic simulation algorithms. This further highlights the possibilities for analytically based studies of stochastic systems in biology and chemistry using tools from theoretical quantum physics.
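The key observation above is that a chemical master equation is a linear ODE over state probabilities, so small systems can be solved directly rather than sampled. A minimal numerical sketch (not the paper's Maple code; rate constants and copy numbers are illustrative) for a single Michaelis-Menten enzyme with limited substrate:

```python
# Toy sketch: integrating the chemical master equation for one enzyme and
# S0 substrate copies. States are (s, c): s free substrate molecules,
# c in {0, 1} whether the enzyme-substrate complex exists.
# k1, km1, k2 are illustrative rate constants, not values from the paper.

S0 = 5                       # initial substrate copies
k1, km1, k2 = 1.0, 0.5, 0.8  # binding, unbinding, catalysis rates

# Enumerate reachable states; product count is implicit: p = S0 - s - c.
states = [(s, c) for s in range(S0 + 1) for c in (0, 1) if s + c <= S0]
index = {st: i for i, st in enumerate(states)}

def rates_out(s, c):
    """Yield (target_state, propensity) for each reaction leaving (s, c)."""
    if s > 0 and c == 0:
        yield (s - 1, 1), k1 * s      # E + S -> C
    if c == 1:
        yield (s + 1, 0), km1         # C -> E + S
        yield (s, 0), k2              # C -> E + P

# Forward-Euler integration of dP/dt = inflow - outflow (small dt).
P = [0.0] * len(states)
P[index[(S0, 0)]] = 1.0
dt, T = 1e-3, 50.0
for _ in range(int(T / dt)):
    dP = [0.0] * len(states)
    for (s, c), i in index.items():
        for tgt, a in rates_out(s, c):
            flow = a * P[i] * dt
            dP[i] -= flow
            dP[index[tgt]] += flow
    P = [p + d for p, d in zip(P, dP)]

print(round(sum(P), 6))          # probability is conserved: 1.0
print(P[index[(0, 0)]] > 0.99)   # nearly all substrate converted: True
```

The Fock-space machinery in the paper produces closed-form expressions for exactly this distribution; the sketch only shows the underlying linear structure being solved.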

  6. Harvesting wind energy to detect weak signals using mechanical stochastic resonance.

    PubMed

    Breen, Barbara J; Rix, Jillian G; Ross, Samuel J; Yu, Yue; Lindner, John F; Mathewson, Nathan; Wainwright, Elliot R; Wilson, Ian

    2016-12-01

    Wind is free and ubiquitous and can be harnessed in multiple ways. We demonstrate mechanical stochastic resonance in a tabletop experiment in which wind energy is harvested to amplify weak periodic signals detected via the movement of an inverted pendulum. Unlike earlier mechanical stochastic resonance experiments, where noise was added via electrically driven vibrations, our broad-spectrum noise source is a single flapping flag. The regime of the experiment is readily accessible, with wind speeds ∼20 m/s and signal frequencies ∼1 Hz. We readily obtain signal-to-noise ratios on the order of 10 dB.
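The mechanism behind the experiment can be illustrated with the standard overdamped double-well model of stochastic resonance rather than the actual pendulum-and-flag apparatus; all parameter values below are illustrative, not taken from the paper.

```python
# Minimal sketch: x' = x - x^3 + A sin(w t) + sqrt(2 D) * noise.
# The forcing A is subthreshold, so without noise the system stays in one
# well; moderate noise lets it hop between wells in step with the signal.
import cmath, math, random

def response_amplitude(D, A=0.3, w=0.1, dt=0.01, T=4000.0, seed=1):
    """Euler-Maruyama integration; returns the Fourier amplitude of x at w."""
    random.seed(seed)
    n = int(T / dt)
    x, acc = 1.0, 0j
    for k in range(n):
        t = k * dt
        x += (x - x**3 + A * math.sin(w * t)) * dt \
             + math.sqrt(2 * D * dt) * random.gauss(0.0, 1.0)
        acc += x * cmath.exp(-1j * w * t)
    return abs(acc) * 2 / n

quiet = response_amplitude(D=0.0)   # no noise: trapped near one minimum
noisy = response_amplitude(D=0.1)   # moderate noise near the matching condition
print(round(quiet, 3), round(noisy, 3))
print(noisy > quiet)                # noise-enhanced periodic response: True
```

In the experiment the flapping flag plays the role of the noise term `D`, and the inverted pendulum supplies the bistable potential.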

  7. Crossmodal processing of emotions in alcohol-dependence and Korsakoff syndrome.

    PubMed

    Brion, Mélanie; D'Hondt, Fabien; Lannoy, Séverine; Pitel, Anne-Lise; Davidoff, Donald A; Maurage, Pierre

    2017-09-01

    Decoding emotional information from faces and voices is crucial for efficient interpersonal communication. Emotional decoding deficits have been found in alcohol-dependence (ALC), particularly in crossmodal situations (with simultaneous stimulations from different modalities), but are still underexplored in Korsakoff syndrome (KS). The aim of this study is to determine whether the continuity hypothesis, postulating a gradual worsening of cognitive and brain impairments from ALC to KS, is valid for emotional crossmodal processing. Sixteen KS, 17 ALC and 19 matched healthy controls (CP) had to detect the emotion (anger or happiness) displayed by auditory, visual or crossmodal auditory-visual stimuli. Crossmodal stimuli were either emotionally congruent (leading to a facilitation effect, i.e. enhanced performance for crossmodal condition compared to unimodal ones) or incongruent (leading to an interference effect, i.e. decreased performance for crossmodal condition due to discordant information across modalities). Reaction times and accuracy were recorded. Crossmodal integration for congruent information was dampened only in ALC, while both ALC and KS demonstrated, compared to CP, decreased performance for decoding emotional facial expressions in the incongruent condition. The crossmodal integration appears impaired in ALC but preserved in KS. Both alcohol-related disorders present an increased interference effect. These results show the interest of more ecological designs, using crossmodal stimuli, to explore emotional decoding in alcohol-related disorders. They also suggest that the continuum hypothesis cannot be generalised to emotional decoding abilities.

  8. Stochastic Watershed Models for Risk Based Decision Making

    NASA Astrophysics Data System (ADS)

    Vogel, R. M.

    2017-12-01

    Over half a century ago, the Harvard Water Program introduced the field of operational or synthetic hydrology providing stochastic streamflow models (SSMs), which could generate ensembles of synthetic streamflow traces useful for hydrologic risk management. The application of SSMs, based on streamflow observations alone, revolutionized water resources planning activities, yet has fallen out of favor due, in part, to their inability to account for the now nearly ubiquitous anthropogenic influences on streamflow. This commentary advances the modern equivalent of SSMs, termed 'stochastic watershed models' (SWMs), useful as input to nearly all modern risk-based water resource decision-making approaches. SWMs are deterministic watershed models implemented using stochastic meteorological series, model parameters and model errors, to generate ensembles of streamflow traces that represent the variability in possible future streamflows. SWMs combine deterministic watershed models, which are ideally suited to accounting for anthropogenic influences, with recent developments in uncertainty analysis and principles of stochastic simulation.
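The SWM recipe, a deterministic watershed model driven by stochastic inputs and perturbed parameters, can be sketched in a few lines; the single linear reservoir and all numbers here are illustrative stand-ins, not Vogel's formulation.

```python
# Hedged sketch of the SWM idea: a deterministic rainfall-runoff model
# (one linear reservoir) is forced with stochastically generated
# precipitation and a perturbed recession parameter, yielding an ensemble
# of plausible streamflow traces.
import random

def linear_reservoir(rain, k):
    """Deterministic watershed model: storage S' = P - k*S, flow Q = k*S."""
    S, flows = 0.0, []
    for P in rain:
        S += P - k * S
        flows.append(k * S)
    return flows

def stochastic_rainfall(n, p_wet=0.3, mean_depth=10.0, rng=random):
    """Toy precipitation generator: wet/dry occurrence + exponential depths."""
    return [rng.expovariate(1.0 / mean_depth) if rng.random() < p_wet else 0.0
            for _ in range(n)]

random.seed(0)
ensemble = []
for _ in range(100):                          # 100 synthetic streamflow traces
    rain = stochastic_rainfall(365)
    k = random.gauss(0.1, 0.02)               # parameter uncertainty
    k = min(max(k, 0.01), 0.5)                # keep the recession rate physical
    ensemble.append(linear_reservoir(rain, k))

annual_means = [sum(q) / len(q) for q in ensemble]
print(len(ensemble), len(ensemble[0]))        # 100 365
print(min(annual_means) < max(annual_means))  # traces differ: True
```

A risk analysis would then read design statistics (e.g., flow quantiles) off the ensemble instead of a single deterministic run.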

  9. Auditory peripersonal space in humans.

    PubMed

    Farnè, Alessandro; Làdavas, Elisabetta

    2002-10-01

    In the present study we report neuropsychological evidence of the existence of an auditory peripersonal space representation around the head in humans and its characteristics. In a group of right brain-damaged patients with tactile extinction, we found that a sound delivered near the ipsilesional side of the head (20 cm) strongly extinguished a tactile stimulus delivered to the contralesional side of the head (cross-modal auditory-tactile extinction). By contrast, when an auditory stimulus was presented far from the head (70 cm), cross-modal extinction was dramatically reduced. This spatially specific cross-modal extinction was most consistently found (i.e., both in the front and back spaces) when a complex sound was presented, like a white noise burst. Pure tones produced spatially specific cross-modal extinction when presented in the back space, but not in the front space. In addition, the most severe cross-modal extinction emerged when sounds came from behind the head, thus showing that the back space is more sensitive than the front space to the sensory interaction of auditory-tactile inputs. Finally, when cross-modal effects were investigated by reversing the spatial arrangement of cross-modal stimuli (i.e., touch on the right and sound on the left), we found that an ipsilesional tactile stimulus, although inducing a small amount of cross-modal tactile-auditory extinction, did not produce any spatial-specific effect. Therefore, the selective aspects of cross-modal interaction found near the head cannot be explained by a competition between a damaged left spatial representation and an intact right spatial representation. Thus, consistent with neurophysiological evidence from monkeys, our findings strongly support the existence, in humans, of an integrated cross-modal system coding auditory and tactile stimuli near the body, that is, in the peripersonal space.

  10. A matter of attention: Crossmodal congruence enhances and impairs performance in a novel trimodal matching paradigm.

    PubMed

    Misselhorn, Jonas; Daume, Jonathan; Engel, Andreas K; Friese, Uwe

    2016-07-29

    A novel crossmodal matching paradigm including vision, audition, and somatosensation was developed in order to investigate the interaction between attention and crossmodal congruence in multisensory integration. To that end, all three modalities were stimulated concurrently while a bimodal focus was defined blockwise. Congruence between stimulus intensity changes in the attended modalities had to be evaluated. We found that crossmodal congruence improved performance if both the attended modalities and the task-irrelevant distractor were congruent. If the attended modalities were incongruent, the distractor impaired performance due to its congruence relation to one of the attended modalities. Between attentional conditions, magnitudes of crossmodal enhancement or impairment differed. Largest crossmodal effects were seen in visual-tactile matching, intermediate effects for audio-visual and smallest effects for audio-tactile matching. We conclude that differences in crossmodal matching likely reflect characteristics of multisensory neural network architecture. We discuss our results with respect to the timing of perceptual processing and state hypotheses for future physiological studies. Finally, etiological questions are addressed.

  11. Cross-modal links among vision, audition, and touch in complex environments.

    PubMed

    Ferris, Thomas K; Sarter, Nadine B

    2008-02-01

    This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.

  12. Sounds can boost the awareness of visual events through attention without cross-modal integration.

    PubMed

    Pápai, Márta Szabina; Soto-Faraco, Salvador

    2017-01-31

    Cross-modal interactions can lead to enhancement of visual perception, even for visual events below awareness. However, the underlying mechanism is still unclear. Can purely bottom-up cross-modal integration break through the threshold of awareness? We used a binocular rivalry paradigm to measure perceptual switches after brief flashes or sounds which sometimes co-occurred. When flashes at the suppressed eye coincided with sounds, perceptual switches occurred the earliest. Yet, contrary to the hypothesis of cross-modal integration, this facilitation never surpassed the assumption of probability summation of independent sensory signals. A follow-up experiment replicated the same pattern of results using silent gaps embedded in continuous noise, instead of sounds. This manipulation should weaken putative sound-flash integration, while keeping them salient as bottom-up attention cues. Additional results showed that spatial congruency between flashes and sounds did not determine the effectiveness of cross-modal facilitation, which was again not better than probability summation. Thus, the present findings fail to fully support the hypothesis of bottom-up cross-modal integration, above and beyond the independent contribution of two transient signals, as an account for cross-modal enhancement of visual events below the level of awareness.
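The probability-summation benchmark against which the facilitation was tested has a simple closed form: if two independent signals each trigger a switch with probabilities p_v and p_a, the combined rate is 1 - (1 - p_v)(1 - p_a). The numbers below are illustrative, not data from the study.

```python
# Independence benchmark: facilitation exceeding this bound would indicate
# genuine cross-modal integration; at or below it, no interaction is needed.
def probability_summation(p_v, p_a):
    return 1 - (1 - p_v) * (1 - p_a)

p_v, p_a = 0.4, 0.3              # hypothetical unimodal switch probabilities
benchmark = probability_summation(p_v, p_a)
print(round(benchmark, 2))       # 0.58
print(benchmark >= max(p_v, p_a))  # summation alone already beats either: True
```

This is why an audiovisual advantage by itself is not evidence of integration: the benchmark always exceeds each unimodal rate.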

  13. Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration.

    PubMed

    Stropahl, Maren; Debener, Stefan

    2017-01-01

    There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensori-neural hearing loss is already sufficient to initiate corresponding cortical changes. To what extend these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users were observed to show differences in multisensory integration, the question arises if cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users ( n  = 18), untreated mild to moderately hearing impaired individuals (n = 18) and normal hearing controls ( n  = 17). Cross-modal activation of the auditory cortex by means of EEG source localization in response to human faces and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that were numerically between the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss. 
This further supports the notion that auditory deprivation evokes a reorganization of the auditory system even at early stages of hearing loss.

  14. Associative learning changes cross-modal representations in the gustatory cortex

    PubMed Central

    Vincis, Roberto; Fontanini, Alfredo

    2016-01-01

    A growing body of literature has demonstrated that primary sensory cortices are not exclusively unimodal, but can respond to stimuli of different sensory modalities. However, several questions concerning the neural representation of cross-modal stimuli remain open. Indeed, it is poorly understood if cross-modal stimuli evoke unique or overlapping representations in a primary sensory cortex and whether learning can modulate these representations. Here we recorded single unit responses to auditory, visual, somatosensory, and olfactory stimuli in the gustatory cortex (GC) of alert rats before and after associative learning. We found that, in untrained rats, the majority of GC neurons were modulated by a single modality. Upon learning, both prevalence of cross-modal responsive neurons and their breadth of tuning increased, leading to a greater overlap of representations. Altogether, our results show that the gustatory cortex represents cross-modal stimuli according to their sensory identity, and that learning changes the overlap of cross-modal representations. DOI: http://dx.doi.org/10.7554/eLife.16420.001 PMID:27572258

  15. Practical Unitary Simulator for Non-Markovian Complex Processes

    NASA Astrophysics Data System (ADS)

    Binder, Felix C.; Thompson, Jayne; Gu, Mile

    2018-06-01

    Stochastic processes are as ubiquitous throughout the quantitative sciences as they are notorious for being difficult to simulate and predict. In this Letter, we propose a unitary quantum simulator for discrete-time stochastic processes which requires less internal memory than any classical analogue throughout the simulation. The simulator's internal memory requirements equal those of the best previous quantum models. However, in contrast to previous models, it only requires a (small) finite-dimensional Hilbert space. Moreover, since the simulator operates unitarily throughout, it avoids any unnecessary information loss. We provide a stepwise construction for simulators for a large class of stochastic processes hence directly opening the possibility for experimental implementations with current platforms for quantum computation. The results are illustrated for an example process.
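The memory being compared above is the classical statistical complexity: the entropy of the simulator's internal states. A toy classical analogue (not the Letter's quantum construction) is the "perturbed coin" process, whose minimal classical simulator must store one full bit; quantum models can run the same process with less.

```python
# Perturbed coin: a two-state Markov chain that flips state with
# probability p each step and emits its state. The minimal classical
# simulator's memory is the entropy of its stationary state distribution,
# H(1/2, 1/2) = 1 bit.
import math, random

def perturbed_coin(n, p=0.2, seed=4):
    rng = random.Random(seed)
    state, out = 0, []
    for _ in range(n):
        if rng.random() < p:
            state = 1 - state        # flip with probability p
        out.append(state)
    return out

seq = perturbed_coin(100000)
freq1 = sum(seq) / len(seq)
memory_bits = -sum(q * math.log2(q) for q in (0.5, 0.5))
print(round(memory_bits, 1))    # 1.0
print(abs(freq1 - 0.5) < 0.02)  # stationary distribution is uniform: True
```

The quantum simulators discussed in the Letter encode these internal states as non-orthogonal quantum states, which is what lets their memory dip below this classical bound.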

  16. Unconscious Cross-Modal Priming of Auditory Sound Localization by Visual Words

    ERIC Educational Resources Information Center

    Ansorge, Ulrich; Khalid, Shah; Laback, Bernhard

    2016-01-01

    Little is known about the cross-modal integration of unconscious and conscious information. In the current study, we therefore tested whether the spatial meaning of an unconscious visual word, such as "up", influences the perceived location of a subsequently presented auditory target. Although cross-modal integration of unconscious…

  17. How Children Use Emotional Prosody: Crossmodal Emotional Integration?

    ERIC Educational Resources Information Center

    Gil, Sandrine; Hattouti, Jamila; Laval, Virginie

    2016-01-01

    A crossmodal effect has been observed in the processing of facial and vocal emotion in adults and infants. For the first time, we assessed whether this effect is present in childhood by administering a crossmodal task similar to those used in seminal studies featuring emotional faces (i.e., a continuum of emotional expressions running from…

  18. Crossmodal Connections of Primary Sensory Cortices Largely Vanish During Normal Aging

    PubMed Central

    Henschke, Julia U.; Ohl, Frank W.; Budinger, Eike

    2018-01-01

    During aging, human response times (RTs) to unisensory and crossmodal stimuli increase. However, the elderly benefit more from crossmodal stimulus representations than younger people. The underlying short-latency multisensory integration process is mediated by direct crossmodal connections at the level of primary sensory cortices. We investigate the age-related changes of these connections using a rodent model (Mongolian gerbil), retrograde tracer injections into the primary auditory (A1), somatosensory (S1), and visual cortex (V1), and immunohistochemistry for markers of apoptosis (Caspase-3), axonal plasticity (Growth associated protein 43, GAP 43), and a calcium-binding protein (Parvalbumin, PV). In adult animals, primary sensory cortices receive a substantial number of direct thalamic inputs from nuclei of their matched, but also from nuclei of non-matched sensory modalities. There are also direct intracortical connections among primary sensory cortices and connections with secondary sensory cortices of other modalities. In very old animals, the crossmodal connections strongly decrease in number or vanish entirely. This is likely due to a retraction of the projection neuron axonal branches rather than ongoing programmed cell death. The loss of crossmodal connections is also accompanied by changes in anatomical correlates of inhibition and excitation in the sensory thalamus and cortex. Together, the loss and restructuring of crossmodal connections during aging suggest a shift of multisensory processing from primary cortices towards other sensory brain areas in elderly individuals. PMID:29551970

  19. Crossmodal Connections of Primary Sensory Cortices Largely Vanish During Normal Aging.

    PubMed

    Henschke, Julia U; Ohl, Frank W; Budinger, Eike

    2018-01-01

    During aging, human response times (RTs) to unisensory and crossmodal stimuli increase. However, the elderly benefit more from crossmodal stimulus representations than younger people. The underlying short-latency multisensory integration process is mediated by direct crossmodal connections at the level of primary sensory cortices. We investigate the age-related changes of these connections using a rodent model (Mongolian gerbil), retrograde tracer injections into the primary auditory (A1), somatosensory (S1), and visual cortex (V1), and immunohistochemistry for markers of apoptosis (Caspase-3), axonal plasticity (Growth associated protein 43, GAP 43), and a calcium-binding protein (Parvalbumin, PV). In adult animals, primary sensory cortices receive a substantial number of direct thalamic inputs from nuclei of their matched, but also from nuclei of non-matched sensory modalities. There are also direct intracortical connections among primary sensory cortices and connections with secondary sensory cortices of other modalities. In very old animals, the crossmodal connections strongly decrease in number or vanish entirely. This is likely due to a retraction of the projection neuron axonal branches rather than ongoing programmed cell death. The loss of crossmodal connections is also accompanied by changes in anatomical correlates of inhibition and excitation in the sensory thalamus and cortex. Together, the loss and restructuring of crossmodal connections during aging suggest a shift of multisensory processing from primary cortices towards other sensory brain areas in elderly individuals.

  20. Do early sensory cortices integrate cross-modal information?

    PubMed

    Kayser, Christoph; Logothetis, Nikos K

    2007-09-01

    Our different senses provide complementary evidence about the environment and their interaction often aids behavioral performance or alters the quality of the sensory percept. A traditional view defers the merging of sensory information to higher association cortices, and posits that a large part of the brain can be reduced into a collection of unisensory systems that can be studied in isolation. Recent studies, however, challenge this view and suggest that cross-modal interactions can already occur in areas hitherto regarded as unisensory. We review results from functional imaging and electrophysiology exemplifying cross-modal interactions that occur early during the evoked response, and at the earliest stages of sensory cortical processing. Although anatomical studies revealed several potential origins of these cross-modal influences, there is yet no clear relation between particular functional observations and specific anatomical connections. In addition, our view on sensory integration at the neuronal level is shaped by many studies on subcortical model systems of sensory integration; yet, the patterns of cross-modal interaction in cortex deviate from these model systems in several ways. Consequently, future studies on cortical sensory integration need to move beyond the descriptive level and incorporate cross-modal influences into models of the organization of sensory processing. Only then will we be able to determine whether early cross-modal interactions truly merit the label sensory integration, and how they increase a sensory system's ability to scrutinize its environment and finally aid behavior.

  1. Predicting evolutionary rescue via evolving plasticity in stochastic environments

    PubMed Central

    Baskett, Marissa L.

    2016-01-01

    Phenotypic plasticity and its evolution may help evolutionary rescue in a novel and stressful environment, especially if environmental novelty reveals cryptic genetic variation that enables the evolution of increased plasticity. However, the environmental stochasticity ubiquitous in natural systems may alter these predictions, because high plasticity may amplify phenotype–environment mismatches. Although previous studies have highlighted this potential detrimental effect of plasticity in stochastic environments, they have not investigated how it affects extinction risk in the context of evolutionary rescue and with evolving plasticity. We investigate this question here by integrating stochastic demography with quantitative genetic theory in a model with simultaneous change in the mean and predictability (temporal autocorrelation) of the environment. We develop an approximate prediction of long-term persistence under the new pattern of environmental fluctuations, and compare it with numerical simulations for short- and long-term extinction risk. We find that reduced predictability increases extinction risk and reduces persistence because it increases stochastic load during rescue. This understanding of how stochastic demography, phenotypic plasticity, and evolution interact when evolution acts on cryptic genetic variation revealed in a novel environment can inform expectations for invasions, extinctions, or the emergence of chemical resistance in pests. PMID:27655762
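The core effect described above, that low environmental predictability inflates mismatch load and extinction risk when a plastic phenotype tracks a lagged cue, can be sketched with a toy simulation; the AR(1) environment, Gaussian fitness function, and all parameter values are illustrative, not the paper's model.

```python
# Toy sketch: a plastic phenotype is set from the previous step's
# environment (the cue), the environment is AR(1) with autocorrelation rho,
# and fitness falls off with phenotype-environment mismatch. Lower rho
# (less predictability) means larger mismatch load during rescue.
import math, random

def extinction_fraction(rho, reps=500, T=100, N0=50, R=1.1, width=1.0, seed=2):
    rng = random.Random(seed)
    extinct = 0
    for _ in range(reps):
        e, N = 0.0, float(N0)
        for _ in range(T):
            cue = e                                    # last step's environment
            e = rho * e + math.sqrt(1 - rho**2) * rng.gauss(0.0, 1.0)
            z = rho * cue                              # plastic phenotype
            w = math.exp(-(z - e) ** 2 / (2 * width**2))
            N *= R * w                                 # growth x mismatch load
            if N < 1.0:
                extinct += 1
                break
    return extinct / reps

low_pred = extinction_fraction(rho=0.0)   # unpredictable environment
high_pred = extinction_fraction(rho=0.9)  # predictable environment
print(low_pred > high_pred)               # less predictability, more extinction: True
```

The reaction-norm slope is fixed at its optimum (`rho`) here; in the paper it evolves, which only strengthens the contrast between predictability regimes.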

  2. Auditory Sensory Substitution is Intuitive and Automatic with Texture Stimuli

    PubMed Central

    Stiles, Noelle R. B.; Shimojo, Shinsuke

    2015-01-01

    Millions of people are blind worldwide. Sensory substitution (SS) devices (e.g., vOICe) can assist the blind by encoding a video stream into a sound pattern, recruiting visual brain areas for auditory analysis via crossmodal interactions and plasticity. SS devices often require extensive training to attain limited functionality. In contrast to conventional attention-intensive SS training that starts with visual primitives (e.g., geometrical shapes), we argue that sensory substitution can be engaged efficiently by using stimuli (such as textures) associated with intrinsic crossmodal mappings. Crossmodal mappings link images with sounds and tactile patterns. We show that intuitive SS sounds can be matched to the correct images by naive sighted participants just as well as by intensively-trained participants. This result indicates that existing crossmodal interactions and amodal sensory cortical processing may be as important in the interpretation of patterns by SS as crossmodal plasticity (e.g., the strengthening of existing connections or the formation of new ones), especially at the earlier stages of SS usage. An SS training procedure based on crossmodal mappings could both considerably improve participant performance and shorten training times, thereby enabling SS devices to significantly expand the capabilities of blind users. PMID:26490260
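The image-to-sound encoding mentioned above can be illustrated with a simplified vOICe-style scheme (the device's actual parameters and waveform details differ): the image is scanned left to right, and within each column row position sets pitch while pixel brightness sets loudness.

```python
# Illustrative vOICe-style encoder: each image column becomes a short chord
# of summed sines; top rows map to high frequencies, brightness to amplitude.
# Frequency range, scan rate, and sample rate are assumptions for the sketch.
import math

def column_to_samples(column, t0, dur=0.1, rate=8000, f_lo=500.0, f_hi=5000.0):
    """Render one image column (top row = highest pitch) as summed sines."""
    rows = len(column)
    freqs = [f_hi - (f_hi - f_lo) * r / (rows - 1) for r in range(rows)]
    n = int(dur * rate)
    return [sum(b * math.sin(2 * math.pi * f * (t0 + k / rate))
                for b, f in zip(column, freqs)) / rows
            for k in range(n)]

# Tiny 4x4 test image stored column-wise: a bright diagonal (values in [0, 1]).
image = [[1.0 if r == c else 0.0 for r in range(4)] for c in range(4)]
samples = []
for i, col in enumerate(image):
    samples += column_to_samples(col, t0=i * 0.1)

print(len(samples))   # 3200 samples = 4 columns x 0.1 s x 8 kHz
```

A descending diagonal thus becomes a falling pitch sweep, which is the kind of intrinsic image-sound mapping the study argues naive listeners can exploit.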

  3. Simultaneous estimation of deterministic and fractal stochastic components in non-stationary time series

    NASA Astrophysics Data System (ADS)

    García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.

    2018-07-01

    In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics, but they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of the fractal processes in the wavelet domain. This method has been validated on simulated signals and on real signals of economic and biological origin. Real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems, and uncovering interesting patterns present in time series.
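The self-similarity the wavelet method exploits is the scaling law Var[B(t+m) - B(t)] proportional to m^(2H) for fractional Brownian motion with Hurst exponent H. A simple (non-Bayesian) illustration of recovering H via aggregated variances, using ordinary Brownian motion (H = 1/2) as the special case that is easy to simulate exactly:

```python
# Aggregated-variance estimate of the Hurst exponent: the slope of
# log(Var of lag-m increments) against log(m) equals 2H.
import math, random

random.seed(3)
n = 20000
walk = [0.0]
for _ in range(n):
    walk.append(walk[-1] + random.gauss(0.0, 1.0))   # H = 0.5 sample path

lags, logs = [1, 2, 4, 8, 16, 32, 64], []
for m in lags:
    inc = [walk[i + m] - walk[i] for i in range(0, n - m, m)]
    mean = sum(inc) / len(inc)
    var = sum((x - mean) ** 2 for x in inc) / len(inc)
    logs.append((math.log(m), math.log(var)))

# Least-squares slope of log(var) vs log(m).
mx = sum(x for x, _ in logs) / len(logs)
my = sum(y for _, y in logs) / len(logs)
slope = (sum((x - mx) * (y - my) for x, y in logs)
         / sum((x - mx) ** 2 for x, _ in logs))
H = slope / 2
print(0.4 < H < 0.6)   # recovers H close to 0.5: True
```

The paper's contribution is doing this kind of characterization in the wavelet domain while simultaneously shrinking away the coefficients that belong to the deterministic, band-limited component.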

  4. Cortical GABAergic Interneurons in Cross-Modal Plasticity following Early Blindness

    PubMed Central

    Desgent, Sébastien; Ptito, Maurice

    2012-01-01

    Early loss of a given sensory input in mammals causes anatomical and functional modifications in the brain via a process called cross-modal plasticity. In the past four decades, several animal models have illuminated our understanding of the biological substrates involved in cross-modal plasticity. Progressively, studies are now starting to emphasise cell-specific mechanisms that may be responsible for this intermodal sensory plasticity. Inhibitory interneurons expressing γ-aminobutyric acid (GABA) play an important role in maintaining the appropriate dynamic range of cortical excitation, in critical periods of developmental plasticity, in receptive field refinement, and in the processing of sensory information reaching the cerebral cortex. The diverse interneuron population is very sensitive to sensory experience during development. GABAergic neurons are therefore well suited to act as a gate for mediating cross-modal plasticity. This paper attempts to highlight the links between early sensory deprivation, cortical GABAergic interneuron alterations, and cross-modal plasticity, discuss its implications, and further provide insights for future research in the field. PMID:22720175

  5. Cross-Modal Retrieval With CNN Visual Features: A New Baseline.

    PubMed

    Wei, Yunchao; Zhao, Yao; Lu, Canyi; Wei, Shikui; Liu, Luoqi; Zhu, Zhenfeng; Yan, Shuicheng

    2017-02-01

    Recently, convolutional neural network (CNN) visual features have demonstrated their powerful ability as a universal representation for various recognition tasks. In this paper, cross-modal retrieval with CNN visual features is implemented with several classic methods. Specifically, off-the-shelf CNN visual features, extracted from a CNN model pretrained on ImageNet with more than one million images from 1,000 object categories, are used as a generic image representation to tackle cross-modal retrieval. To further enhance the representational ability of the CNN visual features, a fine-tuning step is performed on each target data set using the open-source Caffe CNN library, starting from the ImageNet-pretrained model. In addition, we propose a deep semantic matching method to address the cross-modal retrieval problem for samples annotated with one or multiple labels. Extensive experiments on five popular publicly available data sets demonstrate the superiority of CNN visual features for cross-modal retrieval.
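
    The shared-space matching idea can be sketched with linear maps standing in for both the CNN features and the deep semantic matching network: project each modality into a common semantic space and rank gallery items by cosine similarity. The synthetic data, dimensions, and ridge solver below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_img, d_txt, d_sem = 200, 64, 32, 10

# Toy data: each item has a latent semantic vector; each modality observes
# a different linear view of it plus noise (all of this is hypothetical).
Z = rng.standard_normal((n, d_sem))
X_img = Z @ rng.standard_normal((d_sem, d_img)) + 0.1 * rng.standard_normal((n, d_img))
X_txt = Z @ rng.standard_normal((d_sem, d_txt)) + 0.1 * rng.standard_normal((n, d_txt))

def ridge_map(X, Y, lam=1e-2):
    """Least-squares projection from modality features X to the semantic space Y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

W_img, W_txt = ridge_map(X_img, Z), ridge_map(X_txt, Z)

def cosine_retrieve(queries, gallery):
    """Index of the most similar gallery row for every query row."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return np.argmax(q @ g.T, axis=1)

# Image-to-text retrieval: both modalities meet in the shared semantic space.
pred = cosine_retrieve(X_img @ W_img, X_txt @ W_txt)
accuracy = np.mean(pred == np.arange(n))
```

    With clean linear structure the toy retrieval is nearly perfect; the paper's contribution is learning such projections from real CNN features and multi-label annotations.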

  6. How children use emotional prosody: Crossmodal emotional integration?

    PubMed

    Gil, Sandrine; Hattouti, Jamila; Laval, Virginie

    2016-07-01

    A crossmodal effect has been observed in the processing of facial and vocal emotion in adults and infants. For the first time, we assessed whether this effect is present in childhood by administering a crossmodal task similar to those used in seminal studies featuring emotional faces (i.e., a continuum of emotional expressions running from happiness to sadness: 90% happy, 60% happy, 30% happy, neutral, 30% sad, 60% sad, 90% sad) and emotional prosody (i.e., sad vs. happy). Participants were 5-, 7-, and 9-year-old children and a control group of adult students. The children had a different pattern of results from the adults, with only the 9-year-olds exhibiting the crossmodal effect regardless of the emotional condition. These results advance our understanding of emotional prosody processing and the efficiency of crossmodal integration in children and are discussed in terms of a developmental trajectory and factors that may modulate the efficiency of this effect in children. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. Cross-modal versus within-modal recall: differences in behavioral and brain responses.

    PubMed

    Butler, Andrew J; James, Karin H

    2011-10-31

    Although human experience is multisensory in nature, previous research has focused predominantly on memory for unisensory as opposed to multisensory information. In this work, we sought to investigate behavioral and neural differences between the cued recall of cross-modal audiovisual associations versus within-modal visual or auditory associations. Participants were presented with cue-target associations consisting of pairs of nonsense objects, pairs of nonsense sounds, objects paired with sounds, and sounds paired with objects. Subsequently, they were required to recall the modality of the target given the cue while behavioral accuracy, reaction time, and blood oxygenation level dependent (BOLD) activation were measured. Successful within-modal recall was associated with modality-specific reactivation in primary perceptual regions, and was more accurate than cross-modal retrieval. When auditory targets were correctly or incorrectly recalled using a cross-modal visual cue, there was re-activation in auditory association cortex, and recall of information from cross-modal associations activated the hippocampus to a greater degree than within-modal associations. Findings support theories that propose an overlap between regions active during perception and memory, and show that behavioral and neural differences exist between within- and cross-modal associations. Overall, the current study highlights the importance of the role of multisensory information in memory. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Cross-cultural differences in crossmodal correspondences between basic tastes and visual features

    PubMed Central

    Wan, Xiaoang; Woods, Andy T.; van den Bosch, Jasper J. F.; McKenzie, Kirsten J.; Velasco, Carlos; Spence, Charles

    2014-01-01

    We report a cross-cultural study designed to investigate crossmodal correspondences between a variety of visual features (11 colors, 15 shapes, and 2 textures) and the five basic taste terms (bitter, salty, sour, sweet, and umami). A total of 452 participants from China, India, Malaysia, and the USA viewed color patches, shapes, and textures online and had to choose the taste term that best matched the image and then rate their confidence in their choice. Across the four groups of participants, the results revealed a number of crossmodal correspondences between certain colors/shapes and bitter, sour, and sweet tastes. Crossmodal correspondences were also documented between the color white and smooth/rough textures on the one hand and the salt taste on the other. Cross-cultural differences were observed in the correspondences between certain colors, shapes, and one of the textures and the taste terms. The taste-patterns shown by the participants from the four countries tested in the present study are quite different from one another, and these differences cannot easily be attributed merely to whether a country is Eastern or Western. These findings therefore highlight the impact of cultural background on crossmodal correspondences. As such, they raise a number of interesting questions regarding the neural mechanisms underlying crossmodal correspondences. PMID:25538643

  9. Crossmodal representation of a functional robotic hand arises after extensive training in healthy participants.

    PubMed

    Marini, Francesco; Tagliabue, Chiara F; Sposito, Ambra V; Hernandez-Arieta, Alejandro; Brugger, Peter; Estévez, Natalia; Maravita, Angelo

    2014-01-01

    The way in which humans represent their own bodies is critical in guiding their interactions with the environment. To achieve successful body-space interactions, the body representation is strictly connected with that of the space immediately surrounding it through efficient visuo-tactile crossmodal integration. Such a body-space integrated representation is not fixed, but can be dynamically modulated by the use of external tools. Our study aims to explore the effect of using a complex tool, namely a functional prosthesis, on crossmodal visuo-tactile spatial interactions in healthy participants. By using the crossmodal visuo-tactile congruency paradigm, we found that prolonged training with a mechanical hand capable of distal hand movements and providing sensory feedback induces a pattern of interference, which is not observed after a brief training, between visual stimuli close to the prosthesis and touches on the body. These results suggest that after extensive, but not short, training the functional prosthesis acquires a visuo-tactile crossmodal representation akin to real limbs. This finding adds to previous evidence for the embodiment of functional prostheses in amputees, and shows that their use may also improve the crossmodal combination of somatosensory feedback delivered by the prosthesis with visual stimuli in the space around it, thus effectively augmenting the patients' visuomotor abilities. © 2013 Published by Elsevier Ltd.

  10. The taste-visual cross-modal Stroop effect: An event-related brain potential study.

    PubMed

    Xiao, X; Dupuis-Roy, N; Yang, X L; Qiu, J F; Zhang, Q L

    2014-03-28

    Event-related potentials (ERPs) were recorded to explore, for the first time, the electrophysiological correlates of the taste-visual cross-modal Stroop effect. Eighteen healthy participants were presented with a taste stimulus and a food image, and asked to categorize the image as "sweet" or "sour" by pressing the relevant button as quickly as possible. Accurate categorization of the image was faster when it was presented with a congruent taste stimulus (e.g., sour taste/image of lemon) than with an incongruent one (e.g., sour taste/image of ice cream). ERP analyses revealed a negative difference component (ND430-620) between 430 and 620 ms in the taste-visual cross-modal Stroop interference. Dipole source analysis of the difference wave (incongruent minus congruent) indicated that two generators localized in the prefrontal cortex and the parahippocampal gyrus contributed to this taste-visual cross-modal Stroop effect. This result suggests that the prefrontal cortex is associated with the process of conflict control in the taste-visual cross-modal Stroop effect. Also, we speculate that the parahippocampal gyrus is associated with the processing of discordant information in the taste-visual cross-modal Stroop effect. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Experimental and clinical usefulness of crossmodal paradigms in psychiatry: an illustration from emotional processing in alcohol-dependence

    PubMed Central

    Maurage, Pierre; Campanella, Salvatore

    2013-01-01

    Crossmodal processing (i.e., the construction of a unified representation from inputs in distinct sensory modalities) constitutes a crucial ability in humans' everyday life. It has been extensively explored at cognitive and cerebral levels during the last decade among healthy controls. Paradoxically however, and while difficulties to perform this integrative process have been suggested in a large range of psychopathological states (e.g., schizophrenia and autism), these crossmodal paradigms have been very rarely used in the exploration of psychiatric populations. The main aim of the present paper is thus to underline the experimental and clinical usefulness of exploring crossmodal processes in psychiatry. We will illustrate this proposal by means of recent data obtained in the crossmodal exploration of emotional alterations in alcohol-dependence. Indeed, emotional decoding impairments might have a role in the development and maintenance of alcohol-dependence, and have been extensively investigated by means of experiments using separated visual or auditory stimulations. Besides these unimodal explorations, we have recently conducted several studies using audio-visual crossmodal paradigms, which have allowed us to improve the ecological validity of the unimodal experimental designs and to offer new insights on the emotional alterations among alcohol-dependent individuals. We will show how these preliminary results can be extended to develop a coherent and ambitious research program using crossmodal designs in various psychiatric populations and sensory modalities. We will finally end the paper by underlining the various potential clinical applications and the fundamental implications that can be raised by this emerging project. PMID:23898250

  13. Cross-modal illusory conjunctions between vision and touch.

    PubMed

    Cinel, Caterina; Humphreys, Glyn W; Poli, Riccardo

    2002-10-01

    Cross-modal illusory conjunctions (ICs) happen when, under conditions of divided attention, felt textures are reported as being seen or vice versa. Experiments provided evidence for these errors, demonstrated that ICs are more frequent if tactile and visual stimuli are in the same hemispace, and showed that ICs still occur under forced-choice conditions but do not occur when attention to the felt texture is increased. Cross-modal ICs were also found in a patient with parietal damage even with relatively long presentations of visual stimuli. The data are consistent with there being cross-modal integration of sensory information, with the modality of origin sometimes being misattributed when attention is constrained. The empirical conclusions from the experiments are supported by formal models.

  14. On Klatzky and Creswell (2014): saving social priming effects but losing science as we know it?

    PubMed

    Schwartz, Barry

    2015-05-01

    Klatzky and Creswell (2014) offer an interpretation of the unreliability of social priming effects by analogizing them to what is known about the complexity of cross-modal transfer effects in perception. The complexity of these transfer effects arises because they are both multiply determined and stochastic. In this commentary, I argue that Klatzky and Creswell's thoughtful contribution raises the possibility that there might be deep and substantive limits to both the replicability and the generalizability of many of the phenomena that most interest psychologists, including social priming effects. Psychological phenomena largely governed by what Fodor (1983) called the "central system" may resist both replication and generalization by their very nature and not because of weak and underpowered experimental methods. With such phenomena, science might give us very good tools for explanation, but not for prediction (replication). © The Author(s) 2015.

  15. A Role of Phase-Resetting in Coordinating Large Scale Neural Networks During Attention and Goal-Directed Behavior

    PubMed Central

    Voloh, Benjamin; Womelsdorf, Thilo

    2016-01-01

    Short periods of oscillatory activation are ubiquitous signatures of neural circuits. A broad range of studies documents not only their circuit origins, but also a fundamental role for oscillatory activity in coordinating information transfer during goal directed behavior. Recent studies suggest that resetting the phase of ongoing oscillatory activity to endogenous or exogenous cues facilitates coordinated information transfer within circuits and between distributed brain areas. Here, we review evidence that pinpoints phase resetting as a critical marker of dynamic state changes of functional networks. Phase resets: (1) set a “neural context” in terms of narrow band frequencies that uniquely characterizes the activated circuits; (2) impose coherent low frequency phases to which high frequency activations can synchronize, identifiable as cross-frequency correlations across large anatomical distances; (3) are critical for neural coding models that depend on phase, increasing the informational content of neural representations; and (4) likely originate from the dynamics of canonical E-I circuits that are anatomically ubiquitous. These multiple signatures of phase resets are directly linked to enhanced information transfer and behavioral success. We survey how phase resets re-organize oscillations in diverse task contexts, including sensory perception, attentional stimulus selection, cross-modal integration, Pavlovian conditioning, and spatial navigation. The evidence we consider suggests that phase-resets can drive changes in neural excitability, ensemble organization, functional networks, and ultimately, overt behavior. PMID:27013986

  16. Dynamic Facial Expressions Prime the Processing of Emotional Prosody.

    PubMed

    Garrido-Vásquez, Patricia; Pell, Marc D; Paulmann, Silke; Kotz, Sonja A

    2018-01-01

    Evidence suggests that emotion is represented supramodally in the human brain. Emotional facial expressions, which often precede vocally expressed emotion in real life, can modulate event-related potentials (N100 and P200) during emotional prosody processing. To investigate these cross-modal emotional interactions, two lines of research have been put forward: cross-modal integration and cross-modal priming. In cross-modal integration studies, visual and auditory channels are temporally aligned, while in priming studies they are presented consecutively. Here we used cross-modal emotional priming to study the interaction of dynamic visual and auditory emotional information. Specifically, we presented dynamic facial expressions (angry, happy, neutral) as primes and emotionally-intoned pseudo-speech sentences (angry, happy) as targets. We were interested in how prime-target congruency would affect early auditory event-related potentials, i.e., N100 and P200, in order to shed more light on how dynamic facial information is used in cross-modal emotional prediction. Results showed enhanced N100 amplitudes for incongruently primed compared to congruently and neutrally primed emotional prosody, while the latter two conditions did not significantly differ. However, N100 peak latency was significantly delayed in the neutral condition compared to the other two conditions. Source reconstruction revealed that the right parahippocampal gyrus was activated in incongruent compared to congruent trials in the N100 time window. No significant ERP effects were observed in the P200 range. Our results indicate that dynamic facial expressions influence vocal emotion processing at an early point in time, and that an emotional mismatch between a facial expression and its ensuing vocal emotional signal induces additional processing costs in the brain, potentially because the cross-modal emotional prediction mechanism is violated in case of emotional prime-target incongruency.

  17. What colour does that feel? Tactile-visual mapping and the development of cross-modality.

    PubMed

    Ludwig, Vera U; Simner, Julia

    2013-04-01

    Humans share implicit preferences for cross-modal mappings (e.g., low pitch sounds are preferentially paired with darker colours). Individuals with synaesthesia experience cross-modal mappings to a conscious degree (e.g., they may see colours when they hear sounds). The neonatal synaesthesia hypothesis claims that all humans may be born with this explicit cross-modal perception, which dies out in most people through childhood, leaving only implicit associations in the average adult. Although there is evidence for decreasing cross-modality throughout early infancy, it is unclear whether this decline continues to take place throughout childhood and adolescence. This large-scale study had two goals. First, we aimed to establish whether human non-synaesthetes systematically map tactile and visual dimensions - a combination that has rarely been studied. Second, we asked whether tactile-visual associations may be more pronounced in younger compared to older participants. 210 participants aged 5 to 74 years assigned colours to tactile stimuli. Smoothness, softness and roundness of stimuli positively correlated with luminance of the chosen colour; and smoothness and softness also positively correlated with chroma. Moreover, tactile sensations were associated with specific colours (e.g., softness with pink). There were no age differences for luminance effects. Chroma effects, however, were found exclusively in children and adolescents. Our findings are consistent with the neonatal synaesthesia hypothesis which suggests that all humans are born with strong cross-modal perception which is pruned away or inhibited throughout development. Moreover, the findings suggest that a decline of some forms of cross-modality may take place over a much longer time span than previously assumed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Neuronal Correlates of Cross-Modal Transfer in the Cerebellum and Pontine Nuclei

    PubMed Central

    Campolattaro, Matthew M.; Kashef, Alireza; Lee, Inah; Freeman, John H.

    2011-01-01

    Cross-modal transfer occurs when learning established with a stimulus from one sensory modality facilitates subsequent learning with a new stimulus from a different sensory modality. The current study examined neuronal correlates of cross-modal transfer of Pavlovian eyeblink conditioning in rats. Neuronal activity was recorded from tetrodes within the anterior interpositus nucleus (IPN) of the cerebellum and basilar pontine nucleus (PN) during different phases of training. After stimulus pre-exposure and unpaired training sessions with a tone conditioned stimulus (CS), light CS, and periorbital stimulation unconditioned stimulus (US), rats received associative training with one of the CSs and the US (CS1-US). Training then continued on the same day with the other CS to assess cross-modal transfer (CS2-US). The final training session included associative training with both CSs on separate trials to establish stronger cross-modal transfer (CS1/CS2). Neurons in the IPN and PN showed primarily unimodal responses during pre-training sessions. Learning-related facilitation of activity correlated with the conditioned response (CR) developed in the IPN and PN during CS1-US training. Subsequent CS2-US training resulted in acquisition of CRs and learning-related neuronal activity in the IPN but substantially less learning-related activity in the PN. Additional CS1/CS2 training increased CRs and learning-related activity in the IPN and PN during CS2-US trials. The findings suggest that cross-modal neuronal plasticity in the PN is driven by excitatory feedback from the IPN to the PN. Interacting plasticity mechanisms in the IPN and PN may underlie behavioral cross-modal transfer in eyeblink conditioning. PMID:21411647

  19. Learning Discriminative Binary Codes for Large-scale Cross-modal Retrieval.

    PubMed

    Xu, Xing; Shen, Fumin; Yang, Yang; Shen, Heng Tao; Li, Xuelong

    2017-05-01

    Hashing-based methods have attracted considerable attention for efficient cross-modal retrieval on large-scale multimedia data. The core problem of cross-modal hashing is how to learn compact binary codes that capture the underlying correlations between heterogeneous features from different modalities. A majority of recent approaches aim at learning hash functions to preserve the pairwise similarities defined by given class labels. However, these methods fail to explicitly explore the discriminative property of class labels during hash function learning. In addition, they usually discard the discrete constraints imposed on the to-be-learned binary codes, and compromise to solve a relaxed problem with quantization to obtain the approximate binary solution. Therefore, the binary codes generated by these methods are suboptimal and less discriminative to different classes. To overcome these drawbacks, we propose a novel cross-modal hashing method, termed discrete cross-modal hashing (DCH), which directly learns discriminative binary codes while retaining the discrete constraints. Specifically, DCH learns modality-specific hash functions for generating unified binary codes, and these binary codes are viewed as representative features for discriminative classification with class labels. An effective discrete optimization algorithm is developed for DCH to jointly learn the modality-specific hash function and the unified binary codes. Extensive experiments on three benchmark data sets highlight the superiority of DCH under various cross-modal scenarios and show its state-of-the-art performance.
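
    A stripped-down sketch of the cross-modal hashing idea: give each modality a linear projection into a shared space, take the sign pattern as the binary code, and retrieve by Hamming distance. The least-squares maps below stand in for DCH's learned hash functions and discrete optimization; the synthetic data and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d_a, d_b, bits = 128, 48, 40, 32

# Hypothetical shared latent structure observed through two modalities.
Z = rng.standard_normal((n, bits))
Xa = Z @ rng.standard_normal((bits, d_a)) + 0.05 * rng.standard_normal((n, d_a))
Xb = Z @ rng.standard_normal((bits, d_b)) + 0.05 * rng.standard_normal((n, d_b))

def hash_codes(X, Z, lam=1e-2):
    """Modality-specific linear 'hash function': sign of a least-squares map to Z."""
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Z)
    return (X @ W > 0).astype(np.uint8)

Ha, Hb = hash_codes(Xa, Z), hash_codes(Xb, Z)

def hamming_retrieve(queries, gallery):
    """Rank gallery codes by Hamming distance (XOR bit count)."""
    dist = (queries[:, None, :] ^ gallery[None, :, :]).sum(axis=2)
    return np.argmin(dist, axis=1)

pred = hamming_retrieve(Ha, Hb)
accuracy = np.mean(pred == np.arange(n))
```

    Binarization is what makes large-scale retrieval cheap: comparing 32-bit codes needs only an XOR and a popcount, whereas DCH's contribution is learning codes that stay discriminative under the discrete constraint.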

  20. Asynchronous Incremental Stochastic Dual Descent Algorithm for Network Resource Allocation

    NASA Astrophysics Data System (ADS)

    Bedi, Amrit Singh; Rajawat, Ketan

    2018-05-01

    Stochastic network optimization problems entail finding resource allocation policies that are optimum on an average but must be designed in an online fashion. Such problems are ubiquitous in communication networks, where resources such as energy and bandwidth are divided among nodes to satisfy certain long-term objectives. This paper proposes an asynchronous incremental dual descent resource allocation algorithm that utilizes delayed stochastic gradients for carrying out its updates. The proposed algorithm is well-suited to heterogeneous networks as it allows the computationally-challenged or energy-starved nodes to, at times, postpone the updates. The asymptotic analysis of the proposed algorithm is carried out, establishing dual convergence under both constant and diminishing step sizes. It is also shown that with constant step size, the proposed resource allocation policy is asymptotically near-optimal. An application involving multi-cell coordinated beamforming is detailed, demonstrating the usefulness of the proposed algorithm.
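
    The flavor of such online dual methods can be sketched on a toy problem: maximize the expected log-rate of a fading channel subject to a long-term average power budget, using a waterfilling primal step and a delayed stochastic dual gradient update to mimic asynchrony. The channel model, step size, and delay below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(3)
P_budget = 1.0        # long-term average power constraint E[p] <= P_budget
alpha = 0.01          # constant dual step size
delay = 5             # gradient staleness, mimicking asynchronous updates
T = 20000

lam = 1.0                   # dual variable (price of power)
grads = [0.0] * delay       # buffer of delayed stochastic gradients
powers = []

for t in range(T):
    h = rng.exponential(1.0)                 # random channel gain realization
    # Primal step: p maximizes log(1 + h*p) - lam*p (waterfilling form).
    p = max(0.0, 1.0 / lam - 1.0 / h)
    powers.append(p)
    grads.append(p - P_budget)               # stochastic dual gradient
    g = grads.pop(0)                         # consume a stale gradient
    lam = max(1e-3, lam + alpha * g)         # projected dual ascent

avg_power = np.mean(powers[T // 2 :])        # steady-state average power
```

    The dual price rises whenever the instantaneous allocation overshoots the budget, so the time-averaged power converges to the constraint, which is the sense in which constant-step policies are asymptotically near-optimal.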

  1. Fuzzy Adaptive Compensation Control of Uncertain Stochastic Nonlinear Systems With Actuator Failures and Input Hysteresis.

    PubMed

    Wang, Jianhui; Liu, Zhi; Chen, C L Philip; Zhang, Yun

    2017-10-12

    Hysteresis is ubiquitous in physical actuators, and actuator failures/faults may also occur in practice. Both effects would deteriorate the transient tracking performance, and even trigger instability. In this paper, we consider the problem of compensating for actuator failures and input hysteresis by proposing a fuzzy control scheme for stochastic nonlinear systems. Compared with the existing research on stochastic nonlinear uncertain systems, the question of how to guarantee a prescribed transient tracking performance when taking into account actuator failures and hysteresis simultaneously remains to be answered. Our proposed control scheme is designed on the basis of the fuzzy logic system and backstepping techniques for this purpose. It is proven that all the signals remain bounded and the tracking error is ensured to be within a preestablished bound despite failures of the hysteretic actuators. Finally, simulations are provided to illustrate the effectiveness of the obtained theoretical results.
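
    The input hysteresis being compensated can be illustrated with the classical play (backlash) operator, one standard model of actuator hysteresis (the paper's hysteresis model and controller are not reproduced here; the width and test signal are arbitrary):

```python
import numpy as np

def backlash(u_seq, width=0.2, y0=0.0):
    """Play (backlash) operator, a classical actuator hysteresis model."""
    y, out = y0, []
    for u in u_seq:
        if u - y > width:        # driving against the upper edge
            y = u - width
        elif y - u > width:      # driving against the lower edge
            y = u + width
        # inside the dead band the output holds its previous value
        out.append(y)
    return out

# Ramp the input up and back down: the output follows with a lag of `width`,
# and holds flat while the input reverses direction through the dead band.
u = np.concatenate([np.linspace(0, 1, 50), np.linspace(1, 0, 50)])
y = backlash(u)
```

    The input-output loop this traces is multi-valued (the output depends on the input's history), which is why naive inversion fails and adaptive compensation schemes such as the one above are needed.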

  2. Cross-modal working memory binding and word recognition skills: how specific is the link?

    PubMed

    Wang, Shinmin; Allen, Richard J

    2018-04-01

    Recent research has suggested that the creation of temporary bound representations of information from different sources within working memory uniquely relates to word recognition abilities in school-age children. However, it is unclear to what extent this link is attributable specifically to the binding ability for cross-modal information. This study examined the performance of Grade 3 (8-9 years old) children on binding tasks requiring either temporary association formation of two visual items (i.e., within-modal binding) or pairs of visually presented abstract shapes and auditorily presented nonwords (i.e., cross-modal binding). Children's word recognition skills were related to performance on the cross-modal binding task but not on the within-modal binding task. Further regression models showed that cross-modal binding memory was a significant predictor of word recognition when memory for its constituent elements, general abilities, and crucially, within-modal binding memory were taken into account. These findings may suggest a specific link between the ability to bind information across modalities within working memory and word recognition skills.

  3. Cross-modal individual recognition in wild African lions.

    PubMed

    Gilfillan, Geoffrey; Vitale, Jessica; McNutt, John Weldon; McComb, Karen

    2016-08-01

    Individual recognition is considered to have been fundamental in the evolution of complex social systems and is thought to be a widespread ability throughout the animal kingdom. Although robust evidence for individual recognition remains limited, recent experimental paradigms that examine cross-modal processing have demonstrated individual recognition in a range of captive non-human animals. It is now highly relevant to test whether cross-modal individual recognition exists within wild populations and thus examine how it is employed during natural social interactions. We address this question by testing audio-visual cross-modal individual recognition in wild African lions (Panthera leo) using an expectancy-violation paradigm. When presented with a scenario where the playback of a loud-call (roaring) broadcast from behind a visual block is incongruent with the conspecific previously seen there, subjects responded more strongly than during the congruent scenario where the call and individual matched. These findings suggest that lions are capable of audio-visual cross-modal individual recognition and provide a useful method for studying this ability in wild populations. © 2016 The Author(s).

  4. Infants are superior in implicit crossmodal learning and use other learning mechanisms than adults

    PubMed Central

    von Frieling, Marco; Röder, Brigitte

    2017-01-01

    During development, internal models of the sensory world must be acquired and then continuously adapted. We used event-related potentials (ERPs) to test the hypothesis that infants extract crossmodal statistics implicitly while adults learn them when task relevant. Participants were passively exposed to frequent standard audio-visual combinations (A1V1, A2V2, p=0.35 each), rare recombinations of these standard stimuli (A1V2, A2V1, p=0.10 each), and a rare audio-visual deviant with infrequent auditory and visual elements (A3V3, p=0.10). While both six-month-old infants and adults differentiated between rare deviants and standards at early neural processing stages, only infants were sensitive to crossmodal statistics, as indicated by a late ERP difference between standard and recombined stimuli. A second experiment revealed that adults differentiated recombined and standard combinations when crossmodal combinations were task relevant. These results demonstrate a heightened sensitivity for crossmodal statistics in infants and a change in learning mode from infancy to adulthood. PMID:28949291

  5. Influence of auditory spatial attention on cross-modal semantic priming effect: evidence from N400 effect.

    PubMed

    Wang, Hongyan; Zhang, Gaoyan; Liu, Baolin

    2017-01-01

    Semantic priming is an important research topic in the field of cognitive neuroscience. Previous studies have shown that the uni-modal semantic priming effect can be modulated by attention. However, the influence of attention on cross-modal semantic priming is unclear. To investigate this issue, the present study combined a cross-modal semantic priming paradigm with an auditory spatial attention paradigm, presenting visual pictures as the prime stimuli and semantically related or unrelated sounds as the target stimuli. Event-related potential results showed that when the target sound was attended to, the N400 effect was evoked. The N400 effect was also observed when the target sound was not attended to, demonstrating that the cross-modal semantic priming effect persists even when the target stimulus is not attended. Further analyses revealed that the N400 effect evoked by the unattended sound was significantly smaller than that evoked by the attended sound. This contrast provides new evidence that the cross-modal semantic priming effect can be modulated by attention.

  6. Neonatal Restriction of Tactile Inputs Leads to Long-Lasting Impairments of Cross-Modal Processing

    PubMed Central

    Röder, Brigitte; Hanganu-Opatz, Ileana L.

    2015-01-01

    Optimal behavior relies on the combination of inputs from multiple senses through complex interactions within neocortical networks. The ontogeny of this multisensory interplay is still unknown. Here, we identify critical factors that control the development of visual-tactile processing by combining in vivo electrophysiology with anatomical/functional assessment of cortico-cortical communication and behavioral investigation of pigmented rats. We demonstrate that the transient reduction of unimodal (tactile) inputs during a short period of neonatal development prior to the first cross-modal experience affects feed-forward subcortico-cortical interactions by attenuating the cross-modal enhancement of evoked responses in the adult primary somatosensory cortex. Moreover, the neonatal manipulation alters cortico-cortical interactions by decreasing the cross-modal synchrony and directionality in line with the sparsification of direct projections between primary somatosensory and visual cortices. At the behavioral level, these functional and structural deficits resulted in lower cross-modal matching abilities. Thus, neonatal unimodal experience during defined developmental stages is necessary for setting up the neuronal networks of multisensory processing. PMID:26600123

  7. A developmental basis for stochasticity in floral organ numbers

    PubMed Central

    Kitazawa, Miho S.; Fujimoto, Koichi

    2014-01-01

    Stochasticity ubiquitously and inevitably appears at all levels, from molecular traits to multicellular, morphological traits. Intrinsic stochasticity in biochemical reactions underlies the typical intercellular distributions of chemical concentrations, e.g., morphogen gradients, which can give rise to stochastic morphogenesis. While the universal statistics and mechanisms underlying stochasticity at the biochemical level have been widely analyzed, those at the morphological level have not. Such morphological stochasticity is found in floral organ numbers. Although the floral organ number is a hallmark of floral species, it can vary stochastically even within an individual plant. The probability distribution of the floral organ number within a population is usually asymmetric, i.e., it is more likely to increase than to decrease from the modal value, or vice versa. We combined field observations, statistical analysis, and mathematical modeling to study the developmental basis of the variation in floral organ numbers among 50 species, mainly from Ranunculaceae and several other families of core eudicots. We compared six hypothetical mechanisms and found that a modified error function reproduced much of the asymmetric variation found in eudicot floral organ numbers. The error function is derived from mathematical modeling of floral organ positioning, and its parameters represent measurable distances in floral bud morphologies. The model predicts two developmental sources of the organ-number distributions: stochastic shifts in the expression boundaries of homeotic genes and a semi-concentric (whorled-type) organ arrangement. Other models reproduced, in a species- or organ-specific manner, different types of distributions that reflect different developmental processes. The organ-number variation could thus be an indicator of stochasticity in organ fate determination and organ positioning. PMID:25404932
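    The boundary-shift mechanism described above can be illustrated with a toy simulation (an illustrative sketch with invented parameters, not the authors' fitted model): primordia at fixed radial positions count as organs when they fall inside a noisy expression boundary, and non-uniform primordium spacing skews the resulting count distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Primordia at fixed radial positions; positions crowd together further out,
# a crude stand-in for the semi-concentric (whorled-type) arrangement.
positions = np.cumsum([1.0, 1.0, 1.0, 1.0, 1.0, 0.5, 0.3, 0.2])

# The organ-identity boundary fluctuates around a mean position (a stochastic
# shift of a homeotic-gene expression boundary); the organ number is the
# count of primordia inside the boundary.
mean_boundary, sigma = positions[4] + 0.25, 0.4
boundaries = rng.normal(mean_boundary, sigma, size=100_000)
counts = (positions[None, :] < boundaries[:, None]).sum(axis=1)

values, freq = np.unique(counts, return_counts=True)
pmf = dict(zip(values.tolist(), (freq / freq.sum()).round(3).tolist()))
print(pmf)  # asymmetric around the modal count of 5
```

    With these invented positions the mode is five organs; a one-organ decrease concentrates on n = 4, while increases spread into a longer upper tail.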

  8. Analyzing long-term correlated stochastic processes by means of recurrence networks: Potentials and pitfalls

    NASA Astrophysics Data System (ADS)

    Zou, Yong; Donner, Reik V.; Kurths, Jürgen

    2015-02-01

    Long-range correlated processes are ubiquitous, ranging from climate variables to financial time series. One paradigmatic example for such processes is fractional Brownian motion (fBm). In this work, we highlight the potentials and conceptual as well as practical limitations when applying the recently proposed recurrence network (RN) approach to fBm and related stochastic processes. In particular, we demonstrate that the results of a previous application of RN analysis to fBm [Liu et al. Phys. Rev. E 89, 032814 (2014), 10.1103/PhysRevE.89.032814] are mainly due to an inappropriate treatment disregarding the intrinsic nonstationarity of such processes. Complementarily, we analyze some RN properties of the closely related stationary fractional Gaussian noise (fGn) processes and find that the resulting network properties are well-defined and behave as one would expect from basic conceptual considerations. Our results demonstrate that RN analysis can indeed provide meaningful results for stationary stochastic processes, given a proper selection of its intrinsic methodological parameters, whereas it is prone to fail to uniquely retrieve RN properties for nonstationary stochastic processes like fBm.
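    The contrast between a stationary and a nonstationary series can be sketched with a minimal epsilon-recurrence network built from scalar time series (taking H = 0.5 for simplicity, so the fGn is plain white noise and its running sum is ordinary Brownian motion; fixing the threshold by a target recurrence rate is one way to set the intrinsic methodological parameter mentioned above):

```python
import numpy as np

rng = np.random.default_rng(1)

def recurrence_adjacency(series, recurrence_rate=0.05):
    """epsilon-recurrence network: nodes are time points, edges connect pairs
    of states closer than eps, with eps fixed by a target recurrence rate."""
    d = np.abs(series[:, None] - series[None, :])       # scalar state space
    eps = np.quantile(d[np.triu_indices(len(series), 1)], recurrence_rate)
    A = (d < eps).astype(int)
    np.fill_diagonal(A, 0)
    return A

n = 500
fgn = rng.normal(size=n)   # fGn with H = 0.5, i.e. white noise (stationary)
fbm = np.cumsum(fgn)       # its running sum: a nonstationary random walk

stats = {}
for name, series in [("fGn", fgn), ("fBm", fbm)]:
    A = recurrence_adjacency(series)
    half = n // 2
    # within-half edge densities: well-defined and stable for the stationary
    # series, but window-dependent for the nonstationary one
    stats[name] = (A[:half, :half].mean(), A[half:, half:].mean())
print(stats)
```

    For the stationary noise both halves show edge densities close to the target recurrence rate, whereas for the random walk the within-window densities depend on the window, in line with the nonstationarity issue discussed above.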

  9. Enhancing emotional experiences to dance through music: the role of valence and arousal in the cross-modal bias.

    PubMed

    Christensen, Julia F; Gaigg, Sebastian B; Gomila, Antoni; Oke, Peter; Calvo-Merino, Beatriz

    2014-01-01

    It is well established that emotional responses to stimuli presented to one perceptual modality (e.g., visual) are modulated by the concurrent presentation of affective information to another modality (e.g., auditory), an effect known as the cross-modal bias. However, the affective mechanisms mediating this effect are still not fully understood. It remains unclear what role different dimensions of stimulus valence and arousal play in mediating the effect, and to what extent cross-modal influences impact not only our perception and conscious affective experiences, but also our psychophysiological emotional response. We addressed these issues by measuring participants' subjective emotion ratings and their Galvanic Skin Responses (GSR) in a cross-modal affect perception paradigm employing videos of ballet dance movements and instrumental classical music as the stimuli. We chose these stimuli to explore the cross-modal bias in a context of stimuli (ballet dance movements) with which most participants would have relatively little prior experience. Results showed (i) that the cross-modal bias was more pronounced for sad than for happy movements, whereas it was equivalent when contrasting high vs. low arousal movements; and (ii) that movement valence did not modulate participants' GSR, while movement arousal did, such that GSR was potentiated when low arousal movements were paired with sad music and when high arousal movements were paired with happy music. Results are discussed in the context of the affective dimension of neuroentrainment and with regard to implications for the art community.

  10. Sequential roles of primary somatosensory cortex and posterior parietal cortex in tactile-visual cross-modal working memory: a single-pulse transcranial magnetic stimulation (spTMS) study.

    PubMed

    Ku, Yixuan; Zhao, Di; Hao, Ning; Hu, Yi; Bodner, Mark; Zhou, Yong-Di

    2015-01-01

    Both monkey neurophysiological and human EEG studies have shown that association cortices, as well as primary sensory cortical areas, play an essential role in sequential neural processes underlying cross-modal working memory. The present study aims to further examine causal and sequential roles of the primary sensory cortex and association cortex in cross-modal working memory. Individual MRI-based single-pulse transcranial magnetic stimulation (spTMS) was applied to bilateral primary somatosensory cortices (SI) and the contralateral posterior parietal cortex (PPC), while participants were performing a tactile-visual cross-modal delayed matching-to-sample task. Time points of spTMS were 300 ms, 600 ms, 900 ms after the onset of the tactile sample stimulus in the task. The accuracy of task performance and reaction time were significantly impaired when spTMS was applied to the contralateral SI at 300 ms. Significant impairment on performance accuracy was also observed when the contralateral PPC was stimulated at 600 ms. SI and PPC play sequential and distinct roles in neural processes of cross-modal associations and working memory. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Analytic descriptions of stochastic bistable systems under force ramp

    DOE PAGES

    Friddle, Raymond W.

    2016-05-13

    Solving the two-state master equation with time-dependent rates, the ubiquitous driven bistable system, is a long-standing problem that does not permit a complete solution for all driving rates. We show an accurate approximation to this problem by considering the system in the control parameter regime. Moreover, the results are immediately applicable to a diverse range of bistable systems including single-molecule mechanics.
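    In the irreversible limit (escape only, no rebinding), the driven two-state master equation reduces to dS/dt = -k(t)S(t), which for a Bell-type rate under a linear force ramp integrates in closed form. A minimal numerical check of that limit, with invented rate parameters (a sketch, not the paper's approximation):

```python
import numpy as np

# Bell-type rate under a linear force ramp F(t) = r * t; beta_x lumps the
# transition distance over kT.  All parameter values here are invented.
k0, beta_x, r = 1.0e-2, 0.5, 10.0   # 1/s, 1/pN, pN/s

def survival_exact(t):
    # closed-form survival for dS/dt = -k0 * exp(beta_x * r * t) * S
    a = beta_x * r
    return np.exp(-(k0 / a) * (np.exp(a * t) - 1.0))

# explicit Euler integration of the same master equation
t, dt, s = 0.0, 1.0e-4, 1.0
while t < 1.0:
    s -= k0 * np.exp(beta_x * r * t) * s * dt
    t += dt

print(s, survival_exact(1.0))   # the two agree closely
```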

  12. Cross-modal plasticity in developmental and age-related hearing loss: Clinical implications.

    PubMed

    Glick, Hannah; Sharma, Anu

    2017-01-01

    This review explores cross-modal cortical plasticity as a result of auditory deprivation in populations with hearing loss across the age spectrum, from development to adulthood. Cross-modal plasticity refers to the phenomenon whereby deprivation in one sensory modality (e.g. the auditory modality, as in deafness or hearing loss) results in the recruitment of cortical resources of the deprived modality by intact sensory modalities (e.g. the visual or somatosensory systems). We discuss recruitment of auditory cortical resources for visual and somatosensory processing in deafness and in lesser degrees of hearing loss. We describe developmental cross-modal re-organization in the context of congenital or pre-lingual deafness in childhood and in the context of adult-onset, age-related hearing loss, with a focus on how cross-modal plasticity relates to clinical outcomes. We provide both single-subject and group-level evidence of cross-modal re-organization by the visual and somatosensory systems in bilateral, congenital deafness, single-sided deafness, adults with early-stage, mild-moderate hearing loss, and individual adult and pediatric patients exhibiting excellent or average speech perception with hearing aids and cochlear implants. We discuss a framework in which changes in cortical resource allocation secondary to hearing loss result in decreased intra-modal plasticity in auditory cortex, accompanied by increased cross-modal recruitment of auditory cortices by the other sensory systems and simultaneous compensatory activation of frontal cortices. The frontal cortices, as we will discuss, play an important role in mediating cognitive compensation in hearing loss. Given the wide range of variability in behavioral performance following audiological intervention, changes in cortical plasticity may play a valuable role in the prediction of clinical outcomes following intervention.
    Further, the development of new technologies and rehabilitation strategies that incorporate brain-based biomarkers may help better serve hearing impaired populations across the lifespan. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Oscillatory signatures of crossmodal congruence effects: An EEG investigation employing a visuotactile pattern matching paradigm.

    PubMed

    Göschl, Florian; Friese, Uwe; Daume, Jonathan; König, Peter; Engel, Andreas K

    2015-08-01

    Coherent percepts emerge from the accurate combination of inputs from the different sensory systems. There is an ongoing debate about the neurophysiological mechanisms of crossmodal interactions in the brain, and it has been proposed that transient synchronization of neurons might be of central importance. Oscillatory activity in lower frequency ranges (<30 Hz) has been implicated in mediating long-range communication as typically studied in multisensory research. In the current study, we recorded high-density electroencephalograms while human participants were engaged in a visuotactile pattern matching paradigm and analyzed oscillatory power in the theta- (4-7 Hz), alpha- (8-13 Hz) and beta-bands (13-30 Hz). Employing the same physical stimuli, separate tasks of the experiment either required the detection of predefined targets in visual and tactile modalities or the explicit evaluation of crossmodal stimulus congruence. Analysis of the behavioral data showed benefits for congruent visuotactile stimulus combinations. Differences in oscillatory dynamics related to crossmodal congruence within the two tasks were observed in the beta-band for crossmodal target detection, as well as in the theta-band for congruence evaluation. Contrasting ongoing activity preceding visuotactile stimulation between the two tasks revealed differences in the alpha- and beta-bands. Source reconstruction of between-task differences showed prominent involvement of premotor cortex, supplementary motor area, somatosensory association cortex and the supramarginal gyrus. These areas not only exhibited more involvement in the pre-stimulus interval for target detection compared to congruence evaluation, but were also crucially involved in post-stimulus differences related to crossmodal stimulus congruence within the detection task. 
These results add to the increasing evidence that low frequency oscillations are functionally relevant for integration in distributed brain networks, as demonstrated for crossmodal interactions in visuotactile pattern matching in the current study. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Cross-modal project prioritization : a TPCB peer exchange.

    DOT National Transportation Integrated Search

    2015-05-01

    This report highlights key recommendations and best practices identified at the peer exchange on Cross-Modal Project Prioritization, held on December 16 and 17, 2014, in Raleigh, North Carolina. This event was sponsored by the Transportation Planning...

  15. Evidence of a visual-to-auditory cross-modal sensory gating phenomenon as reflected by the human P50 event-related brain potential modulation.

    PubMed

    Lebib, Riadh; Papo, David; de Bode, Stella; Baudonnière, Pierre Marie

    2003-05-08

    We investigated the existence of cross-modal sensory gating as reflected by the modulation of an early electrophysiological index, the P50 component. We analyzed event-related brain potentials elicited by audiovisual speech stimuli manipulated along two dimensions: congruency and discriminability. The results showed that the P50 was attenuated when visual and auditory speech information were redundant (i.e., congruent), compared with the same component elicited by discrepant audiovisual dubbing. When hard to discriminate, however, bimodal incongruent speech stimuli elicited a similar pattern of P50 attenuation. We conclude that a visual-to-auditory cross-modal sensory gating phenomenon exists. These results corroborate previous findings revealing a very early audiovisual interaction during speech perception. Finally, we postulate that the sensory gating system includes a cross-modal dimension.

  16. Oxytocin mediates early experience-dependent cross-modal plasticity in the sensory cortices.

    PubMed

    Zheng, Jing-Jing; Li, Shu-Jing; Zhang, Xiao-Di; Miao, Wan-Ying; Zhang, Dinghong; Yao, Haishan; Yu, Xiang

    2014-03-01

    Sensory experience is critical to development and plasticity of neural circuits. Here we report a new form of plasticity in neonatal mice, where early sensory experience cross-modally regulates development of all sensory cortices via oxytocin signaling. Unimodal sensory deprivation from birth, through whisker deprivation or dark rearing, reduced excitatory synaptic transmission in the corresponding sensory cortex and cross-modally in other sensory cortices. Sensory experience regulated synthesis and secretion of the neuropeptide oxytocin as well as its level in the cortex. Both in vivo oxytocin injection and increased sensory experience elevated excitatory synaptic transmission in multiple sensory cortices and significantly rescued the effects of sensory deprivation. Together, these results identify a new function for oxytocin in promoting cross-modal, experience-dependent cortical development. This link between sensory experience and oxytocin is particularly relevant to autism, where hypersensitivity or hyposensitivity to sensory inputs is prevalent and oxytocin is a hotly debated potential therapy.

  17. Congenital Anophthalmia and Binocular Neonatal Enucleation Differently Affect the Proteome of Primary and Secondary Visual Cortices in Mice.

    PubMed

    Laramée, Marie-Eve; Smolders, Katrien; Hu, Tjing-Tjing; Bronchti, Gilles; Boire, Denis; Arckens, Lutgarde

    2016-01-01

    In blind individuals, visually deprived occipital areas are activated by non-visual stimuli. The extent of this cross-modal activation depends on the age at onset of blindness. Cross-modal inputs have access to several anatomical pathways to reactivate deprived visual areas. Ectopic cross-modal subcortical connections have been shown in anophthalmic animals but not in animals deprived of sight at a later age. Direct and indirect cross-modal cortical connections toward visual areas could also be involved, yet the number of neurons implicated is similar between blind mice and sighted controls. Changes at the axon terminal, dendritic spine or synaptic level are therefore expected upon loss of visual inputs. Here, the proteome of V1, V2M and V2L from P0-enucleated, anophthalmic and sighted mice, sharing a common genetic background (C57BL/6J x ZRDCT/An), was investigated by 2-D DIGE and Western analyses to identify molecular adaptations to enucleation and/or anophthalmia. Few proteins were differentially expressed in enucleated or anophthalmic mice in comparison to sighted mice. The loss of sight affected three pathways: metabolism, synaptic transmission and morphogenesis. Most changes were detected in V1, followed by V2M. Overall, cross-modal adaptations could be promoted in both models of early blindness but not through the exact same molecular strategy. A lower metabolic activity observed in visual areas of blind mice suggests that even if cross-modal inputs reactivate visual areas, they could remain suboptimally processed.

  19. On the relative contributions of multisensory integration and crossmodal exogenous spatial attention to multisensory response enhancement.

    PubMed

    Van der Stoep, N; Spence, C; Nijboer, T C W; Van der Stigchel, S

    2015-11-01

    Two processes that can give rise to multisensory response enhancement (MRE) are multisensory integration (MSI) and crossmodal exogenous spatial attention. It is, however, currently unclear what the relative contribution of each of these is to MRE. We investigated this issue using two tasks that are generally assumed to measure MSI (a redundant target effect task) and crossmodal exogenous spatial attention (a spatial cueing task). One block of trials consisted of unimodal auditory and visual targets designed to provide a unimodal baseline. In two other blocks of trials, the participants were presented with spatially and temporally aligned and misaligned audiovisual (AV) targets (0, 50, 100, and 200 ms SOA). In the integration block, the participants were instructed to respond to the onset of the first target stimulus that they detected (A or V). The instruction for the cueing block was to respond only to the onset of the visual targets. The targets could appear at one of three locations: left, center, and right. The participants were instructed to respond only to lateral targets. The results indicated that MRE was caused by MSI at 0 ms SOA. At 50 ms SOA, both crossmodal exogenous spatial attention and MSI contributed to the observed MRE, whereas the MRE observed at the 100 and 200 ms SOAs was attributable to crossmodal exogenous spatial attention, alerting, and temporal preparation. These results therefore suggest that there may be a temporal window in which both MSI and exogenous crossmodal spatial attention can contribute to multisensory response enhancement. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Linear Subspace Ranking Hashing for Cross-Modal Retrieval.

    PubMed

    Li, Kai; Qi, Guo-Jun; Ye, Jun; Hua, Kien A

    2017-09-01

    Hashing has attracted a great deal of research in recent years due to its effectiveness for the retrieval and indexing of large-scale high-dimensional multimedia data. In this paper, we propose a novel ranking-based hashing framework that maps data from different modalities into a common Hamming space where the cross-modal similarity can be measured using Hamming distance. Unlike existing cross-modal hashing algorithms where the learned hash functions are binary space partitioning functions, such as the sign and threshold function, the proposed hashing scheme takes advantage of a new class of hash functions closely related to rank correlation measures, which are known to be scale-invariant, numerically stable, and highly nonlinear. Specifically, we jointly learn two groups of linear subspaces, one for each modality, so that features' ranking orders in different linear subspaces maximally preserve the cross-modal similarities. We show that the ranking-based hash function has a natural probabilistic approximation which transforms the original highly discontinuous optimization problem into one that can be efficiently solved using simple gradient descent algorithms. The proposed hashing framework is also flexible in the sense that the optimization procedures are not tied to any specific form of loss function, as is typical for existing cross-modal hashing methods; rather, we can flexibly accommodate different loss functions with minimal changes to the learning steps. We demonstrate through extensive experiments on four widely-used real-world multimodal datasets that the proposed cross-modal hashing method can achieve competitive performance against several state-of-the-art methods with only moderate training and testing time.
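    The scale invariance of rank-based hash functions noted above is easy to demonstrate with a stripped-down, winner-take-all variant (random subspaces standing in for the jointly learned ones; this illustrates the function class, not the paper's learning algorithm):

```python
import numpy as np

rng = np.random.default_rng(7)

def ranking_hash(x, subspaces):
    """Winner-take-all rank hash: each hash symbol is the index of the
    largest projection of x within one linear subspace (here random,
    standing in for the jointly learned subspaces)."""
    return np.array([int(np.argmax(W @ x)) for W in subspaces])

dim, n_subspaces, subspace_rank = 16, 8, 4
subspaces = [rng.normal(size=(subspace_rank, dim)) for _ in range(n_subspaces)]

x = rng.normal(size=dim)
code = ranking_hash(x, subspaces)
print(code)
# scale invariance: positive rescaling cannot change any argmax,
# so the code (and hence Hamming distances between codes) is unchanged
print(np.array_equal(code, ranking_hash(3.0 * x, subspaces)))
```

    Hamming distance between two such codes is then simply the count of positions where the symbols differ, e.g. `(code_a != code_b).sum()`.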

  1. Neural substrate of initiation of cross-modal working memory retrieval.

    PubMed

    Zhang, Yangyang; Hu, Yang; Guan, Shuchen; Hong, Xiaolong; Wang, Zhaoxin; Li, Xianchun

    2014-01-01

    Cross-modal working memory requires integrating stimuli from different modalities and is associated with co-activation of distributed networks in the brain. However, how the brain initiates cross-modal working memory retrieval remains unclear. In the present study, we developed a cued matching task in which the necessity for cross-modal/unimodal memory retrieval, and its initiation time, were controlled by a task cue presented during the delay period. Using functional magnetic resonance imaging (fMRI), significantly larger brain activations were observed in the left lateral prefrontal cortex (l-LPFC), left superior parietal lobe (l-SPL), and thalamus in the cued cross-modal matching trials (CCMT) than in the cued unimodal matching trials (CUMT). However, no significant differences between conditions were observed in the l-LPFC and l-SPL for sensory stimulation prior to the task cue. Although the thalamus displayed differential responses to the sensory stimulation between the two conditions, these differential responses did not match its responses to the task cues. These results reveal that a frontoparietal-thalamus network participates in the initiation of cross-modal working memory retrieval. Second, the l-SPL and thalamus showed differential activations between maintenance and working memory retrieval, which might be associated with an enhanced demand for cognitive resources.

  2. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    PubMed Central

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717
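    The "explaining away" computation that such networks must handle can be sketched abstractly with plain Monte Carlo sampling in the textbook rain/sprinkler/wet-grass network with converging arrows (a stand-in for the paper's spiking-network sampler; all probabilities invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Converging-arrow network: rain -> wet <- sprinkler (probabilities invented)
n = 200_000
rain = rng.random(n) < 0.2
sprinkler = rng.random(n) < 0.3
wet = rng.random(n) < np.where(rain | sprinkler, 0.95, 0.01)

p_rain_given_wet = rain[wet].mean()
p_rain_given_wet_and_sprinkler = rain[wet & sprinkler].mean()
# additionally observing the sprinkler "explains away" the wet grass,
# lowering the sampled posterior probability of rain
print(p_rain_given_wet, p_rain_given_wet_and_sprinkler)
```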

  3. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    PubMed Central

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994
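    A toy version of the setting (not the paper's analysis) is diffusive coupling on a nearest-neighbour ring with multiplicative noise on every coupling term; for weak uncertainty the states still pull together, illustrating a positive synchronization margin. All parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

n, gain, sigma, dt, steps = 20, 0.4, 0.2, 0.1, 300
x = rng.normal(size=n)      # initial states of the scalar systems
spread0 = x.std()

for _ in range(steps):
    neighbours = np.roll(x, 1) + np.roll(x, -1)      # ring topology
    noise = 1.0 + sigma * rng.normal(size=n)         # stochastic link gain
    x = x + dt * gain * noise * (neighbours - 2.0 * x)

print(spread0, x.std())   # the spread shrinks: the network synchronizes
```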

  4. Stochastic Methods for Aircraft Design

    NASA Technical Reports Server (NTRS)

    Pelz, Richard B.; Ogot, Madara

    1998-01-01

    The global stochastic optimization method simulated annealing (SA) was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and the roughness ubiquitous to numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations needed to reach an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems, including: low sonic-boom bodies; minimum drag on supersonic fore-bodies; minimum drag on supersonic aeroelastic fore-bodies; minimum drag on HSCT aeroelastic wings; the FLOPS preliminary design code; another preliminary aircraft design study with vortex-lattice aerodynamics; and complete HSR aircraft aerodynamics. In every case, SA provided a simple, robust, and reliable optimization method that found optimal designs in on the order of 100 objective function evaluations. Perhaps most importantly, technology from this academic/industrial project has been successfully transferred; this method is the method of choice for optimization problems at Northrop Grumman.
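    A generic simulated-annealing loop of the kind described, applied to a one-dimensional double-well objective with distinct local and global minima (a sketch of the method with invented parameters, not the NASA/Northrop code):

```python
import math
import random

random.seed(42)

def objective(x):
    # double well: local minimum near x ~ 1.35, global minimum near x ~ -1.47
    return x**4 - 4.0 * x**2 + x

x = 2.0                      # start in the basin of the *local* minimum
fx = objective(x)
best_x, best_f = x, fx
T, cooling = 5.0, 0.999      # initial temperature, geometric schedule

for _ in range(5000):
    x_new = x + random.gauss(0.0, 0.6)     # random perturbation
    f_new = objective(x_new)
    # Metropolis criterion: always accept downhill, sometimes accept uphill
    if f_new < fx or random.random() < math.exp((fx - f_new) / T):
        x, fx = x_new, f_new
        if fx < best_f:
            best_x, best_f = x, fx
    T *= cooling

print(best_x, best_f)   # the global basin near x ~ -1.47 is found
```

    The early high-temperature phase is what lets the search escape the local well, which a pure descent method started at x = 2 would never leave.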

  5. Random diffusivity from stochastic equations: comparison of two models for Brownian yet non-Gaussian diffusion

    NASA Astrophysics Data System (ADS)

    Sposini, Vittoria; Chechkin, Aleksei V.; Seno, Flavio; Pagnini, Gianni; Metzler, Ralf

    2018-04-01

    A considerable number of systems have recently been reported in which Brownian yet non-Gaussian dynamics was observed. These are processes characterised by a linear growth in time of the mean squared displacement, yet the probability density function of the particle displacement is distinctly non-Gaussian, and often of exponential (Laplace) shape. This apparently ubiquitous behaviour observed in very different physical systems has been interpreted as resulting from diffusion in inhomogeneous environments and mathematically represented through a variable, stochastic diffusion coefficient. Indeed different models describing a fluctuating diffusivity have been studied. Here we present a new view of the stochastic basis describing time-dependent random diffusivities within a broad spectrum of distributions. Concretely, our study is based on the very generic class of the generalised Gamma distribution. Two models for the particle spreading in such random diffusivity settings are studied. The first belongs to the class of generalised grey Brownian motion while the second follows from the idea of diffusing diffusivities. The two processes exhibit significant characteristics which reproduce experimental results from different biological and physical systems. We promote these two physical models for the description of stochastic particle motion in complex environments.
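    The superstatistical picture can be sketched by giving each walker its own diffusivity drawn from an exponential distribution, the simplest special case of the generalised Gamma family: the ensemble MSD stays linear in time while the displacement PDF becomes Laplace-shaped (a minimal illustration, not either of the paper's two full models):

```python
import numpy as np

rng = np.random.default_rng(5)

n = 400_000
D = rng.exponential(scale=1.0, size=n)   # one random diffusivity per walker

def displacements(t):
    # Gaussian given D, with variance 2*D*t (superstatistical mixture)
    return np.sqrt(2.0 * D * t) * rng.normal(size=n)

x1, x2 = displacements(1.0), displacements(2.0)
msd_ratio = (x2**2).mean() / (x1**2).mean()           # ~2: MSD linear in t
excess_kurtosis = ((x1 / x1.std())**4).mean() - 3.0   # ~3: Laplace, not Gaussian
print(msd_ratio, excess_kurtosis)
```

    An exponential mixture of Gaussian variances yields exactly a Laplace displacement PDF, whose excess kurtosis of 3 is the non-Gaussian signature, while the linear MSD keeps the process "Brownian" in the mean-square sense.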

  6. Musicians are more consistent: Gestural cross-modal mappings of pitch, loudness and tempo in real-time

    PubMed Central

    Küssner, Mats B.; Tidhar, Dan; Prior, Helen M.; Leech-Wilkinson, Daniel

    2014-01-01

    Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded classification and forced-choice matching tasks, investigations of representations of concurrently varied sound features (e.g., pitch, loudness and tempo) with overt gestures—accounting for the intrinsic link between movement and sound—are scant. To explore the role of bodily gestures in cross-modal mappings of auditory stimuli we asked 64 musically trained and untrained participants to represent pure tones—continually sounding and concurrently varied in pitch, loudness and tempo—with gestures while the sound stimuli were played. We hypothesized musical training to lead to more consistent mappings between pitch and height, loudness and distance/height, and tempo and speed of hand movement and muscular energy. Our results corroborate previously reported pitch vs. height (higher pitch leading to higher elevation in space) and tempo vs. speed (increasing tempo leading to increasing speed of hand movement) associations, but also reveal novel findings pertaining to musical training which influenced consistency of pitch mappings, annulling a commonly observed bias for convex (i.e., rising–falling) pitch contours. Moreover, we reveal effects of interactions between musical parameters on cross-modal mappings (e.g., pitch and loudness on speed of hand movement), highlighting the importance of studying auditory stimuli concurrently varied in different musical parameters. Results are discussed in light of cross-modal cognition, with particular emphasis on studies within (embodied) music cognition. Implications for theoretical refinements and potential clinical applications are provided. PMID:25120506

  7. Cross-modal enhancement of speech detection in young and older adults: does signal content matter?

    PubMed

    Tye-Murray, Nancy; Spehar, Brent; Myerson, Joel; Sommers, Mitchell S; Hale, Sandra

    2011-01-01

    The purpose of the present study was to examine the effects of age and visual content on cross-modal enhancement of auditory speech detection. Visual content consisted of three clearly distinct types of visual information: an unaltered video clip of a talker's face, a low-contrast version of the same clip, and a mouth-like Lissajous figure. It was hypothesized that both young and older adults would exhibit reduced enhancement as visual content diverged from the original clip of the talker's face, but that the decrease would be greater for older participants. Nineteen young adults and 19 older adults were asked to detect a single spoken syllable (/ba/) in speech-shaped noise, and the level of the signal was adaptively varied to establish the signal-to-noise ratio (SNR) at threshold. There was an auditory-only baseline condition and three audiovisual conditions in which the syllable was accompanied by one of the three visual signals (the unaltered clip of the talker's face, the low-contrast version of that clip, or the Lissajous figure). For each audiovisual condition, the SNR at threshold was compared with the SNR at threshold for the auditory-only condition to measure the amount of cross-modal enhancement. Young adults exhibited significant cross-modal enhancement with all three types of visual stimuli, with the greatest amount of enhancement observed for the unaltered clip of the talker's face. Older adults, in contrast, exhibited significant cross-modal enhancement only with the unaltered face. Results of this study suggest that visual signal content affects cross-modal enhancement of speech detection in both young and older adults. They also support a hypothesized age-related deficit in processing low-contrast visual speech stimuli, even in older adults with normal contrast sensitivity.

  8. Cross-Modal Binding in Developmental Dyslexia

    ERIC Educational Resources Information Center

    Jones, Manon W.; Branigan, Holly P.; Parra, Mario A.; Logie, Robert H.

    2013-01-01

    The ability to learn visual-phonological associations is a unique predictor of word reading, and individuals with developmental dyslexia show impaired ability in learning these associations. In this study, we compared developmentally dyslexic and nondyslexic adults on their ability to form cross-modal associations (or "bindings") based…

  9. A Cross-Modal Assessment of Reading Achievement in Children.

    ERIC Educational Resources Information Center

    Webb, Kathryn; And Others

    1982-01-01

    This study examined the ability of the Listen and Look (LL) test of cross-modal perception and the Metropolitan Readiness Test (MRT) to predict reading achievement. Data from 79 first-grade pupils were analyzed. Both the LL and MRT demonstrated predictive validity. (Author/BW)

  10. The time course of episodic associative retrieval: electrophysiological correlates of cued recall of unimodal and crossmodal pair-associate learning.

    PubMed

    Tibon, Roni; Levy, Daniel A

    2014-03-01

    Little is known about the time course of processes supporting episodic cued recall. To examine these processes, we recorded event-related scalp electrical potentials during episodic cued recall following pair-associate learning of unimodal object-picture pairs and crossmodal object-picture and sound pairs. Successful cued recall of unimodal associates was characterized by markedly early scalp potential differences over frontal areas, while cued recall of both unimodal and crossmodal associates was reflected by subsequent differences recorded over frontal and parietal areas. Notably, unimodal cued recall success divergences over frontal areas were apparent in a time window generally assumed to reflect the operation of familiarity but not recollection processes, raising the possibility that retrieval success effects in that temporal window may reflect additional mnemonic processes beyond familiarity. Furthermore, parietal scalp potential recall success differences, which did not distinguish between crossmodal and unimodal tasks, seemingly support attentional or buffer accounts of posterior parietal mnemonic function but appear to constrain signal accumulation, expectation, or representational accounts.

  11. Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection

    PubMed Central

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067

  12. Cross-mode bioelectrical impedance analysis in a standing position for estimating fat-free mass validated against dual-energy x-ray absorptiometry.

    PubMed

    Huang, Ai-Chun; Chen, Yu-Yawn; Chuang, Chih-Lin; Chiang, Li-Ming; Lu, Hsueh-Kuan; Lin, Hung-Chi; Chen, Kuen-Tsann; Hsiao, An-Chi; Hsieh, Kuen-Chang

    2015-11-01

    Bioelectrical impedance analysis (BIA) is commonly used to assess body composition. Cross-mode (left hand to right foot, Z(CR)) BIA presumably uses the longest current path in the human body, which may generate better results when estimating fat-free mass (FFM). We compared the cross-mode with the hand-to-foot mode (right hand to right foot, Z(HF)) using dual-energy x-ray absorptiometry (DXA) as the reference. We hypothesized that when comparing anthropometric parameters using stepwise regression analysis, the impedance value from the cross-mode analysis would have better prediction accuracy than that from the hand-to-foot mode analysis. We studied 264 men and 232 women (mean ages, 32.19 ± 14.95 and 34.51 ± 14.96 years, respectively; mean body mass indexes, 24.54 ± 3.74 and 23.44 ± 4.61 kg/m2, respectively). The DXA-measured FFMs in men and women were 58.85 ± 8.15 and 40.48 ± 5.64 kg, respectively. Multiple stepwise linear regression analyses were performed to construct sex-specific FFM equations. The correlations of FFM measured by DXA vs. FFM from hand-to-foot mode and estimated FFM by cross-mode were 0.85 and 0.86 in women, with standard errors of estimate of 2.96 and 2.92 kg, respectively. In men, they were 0.91 and 0.91, with standard errors of the estimates of 3.34 and 3.48 kg, respectively. Bland-Altman plots showed limits of agreement of -6.78 to 6.78 kg for FFM from hand-to-foot mode and -7.06 to 7.06 kg for estimated FFM by cross-mode for men, and -5.91 to 5.91 and -5.84 to 5.84 kg, respectively, for women. Paired t tests showed no significant differences between the 2 modes (P > .05). Hence, cross-mode BIA appears to represent a reasonable and practical application for assessing FFM in Chinese populations.
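    The Bland-Altman limits of agreement cited above are computed as the mean difference between methods plus or minus 1.96 standard deviations. A minimal sketch on synthetic FFM values (the data here are illustrative draws matching the abstract's rough magnitudes, not the study's measurements):

    ```python
    import numpy as np

    def bland_altman_limits(a, b):
        """Bland-Altman 95% limits of agreement between two measurement methods.

        Returns (bias, lower, upper): the mean difference and bias +/- 1.96 SD.
        """
        d = np.asarray(a, float) - np.asarray(b, float)
        bias = d.mean()
        sd = d.std(ddof=1)
        return bias, bias - 1.96 * sd, bias + 1.96 * sd

    # Illustrative synthetic FFM values (kg); not the study's data.
    rng = np.random.default_rng(0)
    ffm_dxa = rng.normal(58.9, 8.2, size=264)                 # reference method
    ffm_bia = ffm_dxa + rng.normal(0.0, 3.4, size=264)        # unbiased BIA estimate

    bias, low, high = bland_altman_limits(ffm_dxa, ffm_bia)
    ```

    A near-zero bias with symmetric limits is consistent with the paired t tests in the abstract showing no significant difference between the two modes.
    
    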

  13. Large-scale Cross-modality Search via Collective Matrix Factorization Hashing.

    PubMed

    Ding, Guiguang; Guo, Yuchen; Zhou, Jile; Gao, Yue

    2016-09-08

    By transforming data into binary representations, i.e., Hashing, we can perform high-speed search with low storage cost, and thus Hashing has attracted increasing research interest in recent years. How to generate hash codes for multimodal data (e.g., images with textual tags, documents with photos) to support large-scale cross-modality search (e.g., searching a database for images semantically related to a document query) is an important research issue because of the fast growth of multimodal data on the Web. To address this issue, a novel framework for multimodal Hashing is proposed, termed Collective Matrix Factorization Hashing (CMFH). The key idea of CMFH is to learn unified hash codes for the different modalities of one multimodal instance in a shared latent semantic space in which the modalities can be effectively connected, so that accurate cross-modality search is supported. We extend this general framework to the unsupervised scenario, where it preserves the Euclidean structure of the data, and to the supervised scenario, where it fully exploits the label information. The corresponding theoretical analysis and optimization algorithms are given. We conducted comprehensive experiments on three benchmark datasets for cross-modality search. The experimental results demonstrate that CMFH significantly outperforms several state-of-the-art cross-modality Hashing methods, validating its effectiveness.
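    The core idea, a shared latent factor across modalities whose signs become the hash codes, can be sketched as a toy alternating least squares. This is a ridge-regularized simplification for illustration, not the paper's full CMFH objective; all dimensions and data are made up.

    ```python
    import numpy as np

    def cmf_hash(X1, X2, k=16, n_iter=30, lam=1e-2, seed=0):
        """Toy collective matrix factorization hashing.

        Factorizes X1 ~ U1 @ V and X2 ~ U2 @ V with a shared latent matrix V
        (one column per instance), then takes sign(V) as the unified codes.
        Alternating ridge-regularized least squares; a simplification of CMFH.
        """
        rng = np.random.default_rng(seed)
        n = X1.shape[1]
        V = rng.standard_normal((k, n))
        I = lam * np.eye(k)
        for _ in range(n_iter):
            # Update per-modality factors with V fixed.
            U1 = X1 @ V.T @ np.linalg.inv(V @ V.T + I)
            U2 = X2 @ V.T @ np.linalg.inv(V @ V.T + I)
            # Update the shared latent matrix with U1, U2 fixed.
            A = U1.T @ U1 + U2.T @ U2 + I
            V = np.linalg.solve(A, U1.T @ X1 + U2.T @ X2)
        return np.sign(V)   # k-bit code per instance, shared across modalities

    # Two toy "modalities" describing the same 50 instances.
    rng = np.random.default_rng(1)
    Z = rng.standard_normal((8, 50))              # shared latent structure
    X_img = rng.standard_normal((32, 8)) @ Z      # image-like features
    X_txt = rng.standard_normal((20, 8)) @ Z      # text-like features
    codes = cmf_hash(X_img, X_txt)
    ```

    Because both modalities of an instance map to the same column of V, they receive identical codes, which is what makes cross-modality lookup a Hamming-distance search.
    
    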

  14. Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration.

    PubMed

    Ikumi, Nara; Soto-Faraco, Salvador

    2016-01-01

    Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or, segregation. We found larger temporal adjustments when actions promoted grouping than segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands.

  15. Multimodal lexical processing in auditory cortex is literacy skill dependent.

    PubMed

    McNorgan, Chris; Awati, Neha; Desroches, Amy S; Booth, James R

    2014-09-01

    Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others demonstrate recruitment of primary and associative auditory cortex during cross-modal processing. We sought to determine whether brain regions supporting phonological processing of larger lexical units (monosyllabic words) over larger time windows is sensitive to cross-modal information, and whether such effects are literacy dependent. Twenty-two children (age 8-14 years) made rhyming judgments for sequentially presented word and pseudoword pairs presented either unimodally (auditory- or visual-only) or cross-modally (audiovisual). Regression analyses examined the relationship between literacy and congruency effects (overlapping orthography and phonology vs. overlapping phonology-only). We extend previous findings by showing that higher literacy is correlated with greater congruency effects in auditory cortex (i.e., planum temporale) only for cross-modal processing. These skill effects were specific to known words and occurred over a large time window, suggesting that multimodal integration in posterior auditory cortex is critical for fluent reading.

  17. Compensating for age limits through emotional crossmodal integration

    PubMed Central

    Chaby, Laurence; Boullay, Viviane Luherne-du; Chetouani, Mohamed; Plaza, Monique

    2015-01-01

    Social interactions in daily life necessitate the integration of social signals from different sensory modalities. In the aging literature, it is well established that the recognition of emotion in facial expressions declines with advancing age, and this also occurs with vocal expressions. By contrast, crossmodal integration processing in healthy aging individuals is less documented. Here, we investigated the age-related effects on emotion recognition when faces and voices were presented alone or simultaneously, allowing for crossmodal integration. In this study, 31 young adults (M = 25.8 years) and 31 older adults (M = 67.2 years) were instructed to identify several basic emotions (happiness, sadness, anger, fear, disgust) and a neutral expression, which were displayed as visual (facial expressions), auditory (non-verbal affective vocalizations) or crossmodal (simultaneous, congruent facial and vocal affective expressions) stimuli. The results showed that older adults performed more slowly and less accurately than younger adults at recognizing negative emotions from isolated faces and voices. In the crossmodal condition, although slower, older adults were as accurate as younger adults, except for anger. Importantly, additional analyses using the “race model” demonstrate that older adults benefited to the same extent as younger adults from the combination of facial and vocal emotional stimuli. These results help explain some conflicting results in the literature and may clarify emotional abilities related to daily life that are partially spared among older adults. PMID:26074845
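    Race-model analyses of this kind are commonly carried out with Miller's race-model inequality, which bounds the redundant-condition response-time CDF by the sum of the unimodal CDFs; exceeding the bound suggests genuine crossmodal integration rather than statistical facilitation. A sketch on illustrative reaction times (not the study's data):

    ```python
    import numpy as np

    def race_model_violation(rt_av, rt_a, rt_v, ts):
        """Miller's race-model inequality evaluated at times ts.

        Returns the amount by which the audiovisual CDF exceeds the summed
        unimodal CDFs (positive values suggest coactivation-like integration).
        """
        cdf = lambda rt, t: np.mean(np.asarray(rt)[:, None] <= t, axis=0)
        bound = np.minimum(cdf(rt_a, ts) + cdf(rt_v, ts), 1.0)
        return cdf(rt_av, ts) - bound

    # Illustrative reaction times in ms; not data from the study.
    rng = np.random.default_rng(2)
    rt_a = rng.normal(520, 60, 500)          # auditory-only trials
    rt_v = rng.normal(540, 60, 500)          # visual-only trials
    rt_av = rng.normal(430, 50, 500)         # strongly facilitated crossmodal trials
    ts = np.arange(300, 700, 25)
    violation = race_model_violation(rt_av, rt_a, rt_v, ts)
    ```

    With the synthetic crossmodal distribution shifted well below both unimodal ones, the inequality is violated at the fast end of the RT range, the typical signature of multisensory coactivation.
    
    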

  18. Human brain detects short-time nonlinear predictability in the temporal fine structure of deterministic chaotic sounds

    NASA Astrophysics Data System (ADS)

    Itoh, Kosuke; Nakada, Tsutomu

    2013-04-01

    Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds after about 150 ms in latency after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.
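    The contrast drawn above, deterministic short-time predictability versus a spectrally matched stochastic signal, can be demonstrated with a nearest-neighbour forecasting test against a phase-randomized surrogate. The logistic-map "sound" and surrogate here are standard stand-ins for illustration, not the study's stimuli.

    ```python
    import numpy as np

    def nn_prediction_error(x, m=3, horizon=1):
        """One-step nonlinear forecastability via nearest-neighbour prediction.

        Embeds x in m dimensions, predicts each point from its nearest
        neighbour's successor, and returns the normalized RMS error.
        Deterministic chaos yields a low error; stochastic signals do not.
        """
        n = len(x) - m - horizon
        emb = np.stack([x[i:i + n] for i in range(m)], axis=1)
        err = []
        for i in range(n):
            d = np.linalg.norm(emb - emb[i], axis=1)
            d[i] = np.inf                     # exclude the point itself
            j = int(np.argmin(d))
            err.append(x[i + m - 1 + horizon] - x[j + m - 1 + horizon])
        return np.sqrt(np.mean(np.square(err))) / np.std(x)

    # Chaotic signal: logistic map at r = 3.9.
    x = np.empty(600)
    x[0] = 0.4
    for i in range(1, 600):
        x[i] = 3.9 * x[i - 1] * (1.0 - x[i - 1])

    # Surrogate: phase-randomized, spectrum-matched control signal.
    rng = np.random.default_rng(3)
    spec = np.fft.rfft(x - x.mean())
    spec[1:-1] *= np.exp(1j * rng.uniform(0, 2 * np.pi, 600 // 2 - 1))
    surrogate = np.fft.irfft(spec, 600)

    e_chaos = nn_prediction_error(x)
    e_surr = nn_prediction_error(surrogate)
    ```

    Phase randomization preserves the power spectrum while destroying the deterministic temporal fine structure, so the chaotic signal's prediction error comes out markedly lower than the surrogate's.
    
    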

  19. Spatial Attention and Audiovisual Interactions in Apparent Motion

    ERIC Educational Resources Information Center

    Sanabria, Daniel; Soto-Faraco, Salvador; Spence, Charles

    2007-01-01

    In this study, the authors combined the cross-modal dynamic capture task (involving the horizontal apparent movement of visual and auditory stimuli) with spatial cuing in the vertical dimension to investigate the role of spatial attention in cross-modal interactions during motion perception. Spatial attention was manipulated endogenously, either…

  20. Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans.

    PubMed

    Putzar, Lisa; Gondan, Matthias; Röder, Brigitte

    2012-01-01

    People treated for bilateral congenital cataracts offer a model to study the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating an integrated processing of modality-specific information. This finding is in contrast with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple short stimuli do not depend on visual and/or crossmodal input since birth.

  1. Stochastic Processes in Physics: Deterministic Origins and Control

    NASA Astrophysics Data System (ADS)

    Demers, Jeffery

    Stochastic processes are ubiquitous in the physical sciences and engineering. While often used to model imperfections and experimental uncertainties in the macroscopic world, stochastic processes can attain deeper physical significance when used to model the seemingly random and chaotic nature of the underlying microscopic world. Nowhere more prevalent is this notion than in the field of stochastic thermodynamics - a modern systematic framework used to describe mesoscale systems in strongly fluctuating thermal environments which has revolutionized our understanding of, for example, molecular motors, DNA replication, far-from-equilibrium systems, and the laws of macroscopic thermodynamics as they apply to the mesoscopic world. With progress, however, come further challenges and deeper questions, most notably in the thermodynamics of information processing and feedback control. Here it is becoming increasingly apparent that, due to divergences and subtleties of interpretation, the deterministic foundations of the stochastic processes themselves must be explored and understood. This thesis presents a survey of stochastic processes in physical systems, the deterministic origins of their emergence, and the subtleties associated with controlling them. First, we study time-dependent billiards in the quivering limit - a limit where a billiard system is indistinguishable from a stochastic system, and where the simplified stochastic system allows us to view issues associated with deterministic time-dependent billiards in a new light and address some long-standing problems. Then, we embark on an exploration of the deterministic microscopic Hamiltonian foundations of non-equilibrium thermodynamics, and we find that important results from mesoscopic stochastic thermodynamics have simple microscopic origins which would not be apparent without the benefit of both the micro and meso perspectives.
Finally, we study the problem of stabilizing a stochastic Brownian particle with feedback control, and we find that in order to avoid paradoxes involving the first law of thermodynamics, we need a model for the fine details of the thermal driving noise. The underlying theme of this thesis is the argument that the deterministic microscopic perspective and stochastic mesoscopic perspective are both important and useful, and when used together, we can more deeply and satisfyingly understand the physics occurring over either scale.

  2. Cross-Modal Interactions in the Experience of Musical Performances: Physiological Correlates

    ERIC Educational Resources Information Center

    Chapados, Catherine; Levitin, Daniel J.

    2008-01-01

    This experiment was conducted to investigate cross-modal interactions in the emotional experience of music listeners. Previous research showed that visual information present in a musical performance is rich in expressive content, and moderates the subjective emotional experience of a participant listening and/or observing musical stimuli [Vines,…

  3. The Function of Consciousness in Multisensory Integration

    ERIC Educational Resources Information Center

    Palmer, Terry D.; Ramsey, Ashley K.

    2012-01-01

    The function of consciousness was explored in two contexts of audio-visual speech, cross-modal visual attention guidance and McGurk cross-modal integration. Experiments 1, 2, and 3 utilized a novel cueing paradigm in which two different flash-suppressed lip-streams co-occurred with speech sounds matching one of these streams. A visual target was…

  4. Effect of Perceptual Load on Semantic Access by Speech in Children

    ERIC Educational Resources Information Center

    Jerger, Susan; Damian, Markus F.; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Herve

    2013-01-01

    Purpose: To examine whether semantic access by speech requires attention in children. Method: Children ("N" = 200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multimodal (distractors: auditory-static face and audiovisual- dynamic face) picture word task. The cross-modal task had a low load,…

  5. Plasticity of Ability to Form Cross-Modal Representations in Infant Japanese Macaques

    ERIC Educational Resources Information Center

    Adachi, Ikuma; Kuwahata, Hiroko; Fujita, Kazuo; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2009-01-01

    In a previous study, Adachi, Kuwahata, Fujita, Tomonaga & Matsuzawa demonstrated that infant Japanese macaques (Macaca fuscata) form cross-modal representations of conspecifics but not of humans. However, because the subjects in the experiment were raised in a large social group and had considerably less exposure to humans than to…

  6. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    PubMed

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load.

  7. Cross-modality Sharpening of Visual Cortical Processing through Layer 1-Mediated Inhibition and Disinhibition

    PubMed Central

    Ibrahim, Leena A.; Mesik, Lukas; Ji, Xu-ying; Fang, Qi; Li, Hai-fu; Li, Ya-tang; Zingg, Brian; Zhang, Li I.; Tao, Huizhong Whit

    2016-01-01

    Cross-modality interaction in sensory perception is advantageous for animals’ survival. How cortical sensory processing is cross-modally modulated, and what the underlying neural circuits are, remain poorly understood. In mouse primary visual cortex (V1), we discovered that orientation selectivity of layer (L)2/3 but not L4 excitatory neurons was sharpened in the presence of sound or optogenetic activation of projections from primary auditory cortex (A1) to V1. The effect was manifested by decreased average visual responses yet increased responses at the preferred orientation. It was more pronounced at lower visual contrast, and was diminished by suppressing L1 activity. L1 neurons were strongly innervated by A1-V1 axons and excited by sound, while visual responses of L2/3 vasoactive intestinal peptide (VIP) neurons were suppressed by sound, both preferentially at the cell's preferred orientation. These results suggest that the cross-modality modulation is achieved primarily through L1 neuron and L2/3 VIP-cell mediated inhibitory and disinhibitory circuits. PMID:26898778

  8. The informativity of sound modulates crossmodal facilitation of visual discrimination: an fMRI study.

    PubMed

    Li, Qi; Yu, Hongtao; Li, Xiujun; Sun, Hongzan; Yang, Jingjing; Li, Chunlin

    2017-01-18

    Many studies have investigated behavioral crossmodal facilitation when a visual stimulus is accompanied by a concurrent task-irrelevant sound. Lippert and colleagues reported that a concurrent task-irrelevant sound reduced the uncertainty of the timing of the visual display and improved perceptional responses (informative sound). However, the neural mechanism by which the informativity of sound affected crossmodal facilitation of visual discrimination remained unclear. In this study, we used event-related functional MRI to investigate the neural mechanisms underlying the role of informativity of sound in crossmodal facilitation of visual discrimination. Significantly faster reaction times were observed when there was an informative relationship between auditory and visual stimuli. The functional MRI results showed sound informativity-induced activation enhancement including the left fusiform gyrus and the right lateral occipital complex. Further correlation analysis showed that the right lateral occipital complex was significantly correlated with the behavioral benefit in reaction times. This suggests that this region was modulated by the informative relationship within audiovisual stimuli that was learnt during the experiment, resulting in late-stage multisensory integration and enhanced behavioral responses.

  9. Cross-modal interaction between visual and olfactory learning in Apis cerana.

    PubMed

    Zhang, Li-Zhen; Zhang, Shao-Wu; Wang, Zi-Long; Yan, Wei-Yu; Zeng, Zhi-Jiang

    2014-10-01

    The power of the small honeybee brain carrying out behavioral and cognitive tasks has been shown repeatedly to be highly impressive. The present study investigates, for the first time, the cross-modal interaction between visual and olfactory learning in Apis cerana. To explore the role and molecular mechanisms of cross-modal learning in A. cerana, the honeybees were trained and tested in a modified Y-maze with seven visual and five olfactory stimuli, where a robust visual threshold for black/white gratings (period of 2.8°-3.8°) and a relative olfactory threshold (concentration of 50-25%) were obtained. Meanwhile, the expression levels of five genes (AcCREB, Acdop1, Acdop2, Acdop3, Actyr1) related to learning and memory were analyzed under different training conditions by real-time RT-PCR. The experimental results indicate that A. cerana could exhibit cross-modal interactions between visual and olfactory learning by reducing the threshold level of the conditioning stimuli, and that these genes may play important roles in the learning process of honeybees.

  10. Thermal-to-visible face recognition using partial least squares.

    PubMed

    Hu, Shuowen; Choi, Jonghyun; Chan, Alex L; Schwartz, William Robson

    2015-03-01

    Although visible face recognition has been an active area of research for several decades, cross-modal face recognition has only been explored by the biometrics community relatively recently. Thermal-to-visible face recognition is one of the most difficult cross-modal face recognition challenges, because of the difference in phenomenology between the thermal and visible imaging modalities. We address the cross-modal recognition problem using a partial least squares (PLS) regression-based approach consisting of preprocessing, feature extraction, and PLS model building. The preprocessing and feature extraction stages are designed to reduce the modality gap between the thermal and visible facial signatures, and facilitate the subsequent one-vs-all PLS-based model building. We incorporate multi-modal information into the PLS model building stage to enhance cross-modal recognition. The performance of the proposed recognition algorithm is evaluated on three challenging datasets containing visible and thermal imagery acquired under different experimental scenarios: time-lapse, physical tasks, mental tasks, and subject-to-camera range. These scenarios represent difficult challenges relevant to real-world applications. We demonstrate that the proposed method performs robustly for the examined scenarios.

  11. What is the link between synaesthesia and sound symbolism?

    PubMed Central

    Bankieris, Kaitlyn; Simner, Julia

    2015-01-01

    Sound symbolism is a property of certain words which have a direct link between their phonological form and their semantic meaning. In certain instances, sound symbolism can allow non-native speakers to understand the meanings of etymologically unfamiliar foreign words, although the mechanisms driving this are not well understood. We examined whether sound symbolism might be mediated by the same types of cross-modal processes that typify synaesthetic experiences. Synaesthesia is an inherited condition in which sensory or cognitive stimuli (e.g., sounds, words) cause additional, unusual cross-modal percepts (e.g., sounds trigger colours, words trigger tastes). Synaesthesia may be an exaggeration of normal cross-modal processing, and if so, there may be a link between synaesthesia and the type of cross-modality inherent in sound symbolism. To test this we predicted that synaesthetes would have superior understanding of unfamiliar (sound symbolic) foreign words. In our study, 19 grapheme-colour synaesthetes and 57 non-synaesthete controls were presented with 400 adjectives from 10 unfamiliar languages and were asked to guess the meaning of each word in a two-alternative forced-choice task. Both groups performed above chance, but synaesthetes significantly outperformed controls. This heightened ability suggests that sound symbolism may rely on the types of cross-modal integration that drive synaesthetes’ unusual experiences. It also suggests that synaesthesia endows or co-occurs with heightened multi-modal skills, and that this can arise in domains unrelated to the specific form of synaesthesia. PMID:25498744

  12. Reduced frontal theta oscillations indicate altered crossmodal prediction error processing in schizophrenia

    PubMed Central

    Keil, Julian; Balz, Johanna; Gallinat, Jürgen; Senkowski, Daniel

    2016-01-01

    Our brain generates predictions about forthcoming stimuli and compares predicted with incoming input. Failures in predicting events might contribute to hallucinations and delusions in schizophrenia (SZ). When a stimulus violates prediction, neural activity that reflects prediction error (PE) processing is found. While PE processing deficits have been reported in unisensory paradigms, it is unknown whether SZ patients (SZP) show altered crossmodal PE processing. We measured high-density electroencephalography and applied source estimation approaches to investigate crossmodal PE processing generated by audiovisual speech. In SZP and healthy control participants (HC), we used an established paradigm in which high- and low-predictive visual syllables were paired with congruent or incongruent auditory syllables. We examined crossmodal PE processing in SZP and HC by comparing differences in event-related potentials and neural oscillations between incongruent and congruent high- and low-predictive audiovisual syllables. In both groups, event-related potentials between 206 and 250 ms were larger for high- compared with low-predictive syllables, suggesting intact audiovisual incongruence detection in the auditory cortex of SZP. The analysis of oscillatory responses revealed theta-band (4–7 Hz) power enhancement for high- compared with low-predictive syllables between 230 and 370 ms in the frontal cortex of HC but not SZP. Thus, aberrant frontal theta-band oscillations reflect crossmodal PE processing deficits in SZ. The present study suggests a top-down multisensory processing deficit and highlights the role of dysfunctional frontal oscillations in SZ psychopathology. PMID:27358314

  13. Priming within and across modalities: exploring the nature of rCBF increases and decreases.

    PubMed

    Badgaiyan, R D; Schacter, D L; Alpert, N M

    2001-02-01

    Neuroimaging studies suggest that within-modality priming is associated with reduced regional cerebral blood flow (rCBF) in the extrastriate area, whereas cross-modality priming is associated with increased rCBF in prefrontal cortex. To characterize the nature of rCBF changes in within- and cross-modality priming, we conducted two neuroimaging experiments using positron emission tomography (PET). In experiment 1, rCBF changes in within-modality auditory priming on a word stem completion task were observed under same- and different-voice conditions. Both conditions were associated with decreased rCBF in extrastriate cortex. In the different-voice condition there were additional rCBF changes in the middle temporal gyrus and prefrontal cortex. Results suggest that the extrastriate involvement in within-modality priming is sensitive to a change in sensory modality of target stimuli between study and test, but not to a change in the feature of a stimulus within the same modality. In experiment 2, we studied cross-modality priming on a visual stem completion test after encoding under full- and divided-attention conditions. Increased rCBF in the anterior prefrontal cortex was observed in the full- but not in the divided-attention condition. Because explicit retrieval is compromised after encoding under the divided-attention condition, prefrontal involvement in cross-modality priming indicates recruitment of an aspect of explicit retrieval mechanism. The aspect of explicit retrieval that is most likely to be involved in cross-modality priming is the familiarity effect. Copyright 2001 Academic Press.

  14. Interaction of Perceptual Grouping and Crossmodal Temporal Capture in Tactile Apparent-Motion

    PubMed Central

    Chen, Lihan; Shi, Zhuanghua; Müller, Hermann J.

    2011-01-01

    Previous studies have shown that in tasks requiring participants to report the direction of apparent motion, task-irrelevant mono-beeps can “capture” visual motion perception when the beeps occur temporally close to the visual stimuli. However, the contributions of the relative timing of multimodal events and the event structure, modulating uni- and/or crossmodal perceptual grouping, remain unclear. To examine this question and extend the investigation to the tactile modality, the current experiments presented tactile two-tap apparent-motion streams, with an SOA of 400 ms between successive, left-/right-hand middle-finger taps, accompanied by task-irrelevant, non-spatial auditory stimuli. The streams were shown for 90 seconds, and participants' task was to continuously report the perceived (left- or rightward) direction of tactile motion. In Experiment 1, each tactile stimulus was paired with an auditory beep, though odd-numbered taps were paired with an asynchronous beep, with audiotactile SOAs ranging from −75 ms to 75 ms. Perceived direction of tactile motion varied systematically with audiotactile SOA, indicative of a temporal-capture effect. In Experiment 2, two audiotactile SOAs—one short (75 ms), one long (325 ms)—were compared. The long-SOA condition preserved the crossmodal event structure (so the temporal-capture dynamics should have been similar to that in Experiment 1), but both beeps now occurred temporally close to the taps on one side (even-numbered taps). The two SOAs were found to produce opposite modulations of apparent motion, indicative of an influence of crossmodal grouping. In Experiment 3, only odd-numbered, but not even-numbered, taps were paired with auditory beeps. This abolished the temporal-capture effect and, instead, a dominant percept of apparent motion from the audiotactile side to the tactile-only side was observed independently of the SOA variation. 
These findings suggest that asymmetric crossmodal grouping leads to an attentional modulation of apparent motion, which inhibits crossmodal temporal-capture effects. PMID:21383834

  15. An extended research of crossmodal correspondence between color and sound in psychology and cognitive ergonomics.

    PubMed

    Sun, Xiuwen; Li, Xiaoling; Ji, Lingyu; Han, Feng; Wang, Huifen; Liu, Yang; Chen, Yao; Lou, Zhiyuan; Li, Zhuoyun

    2018-01-01

    Based on the existing research on sound symbolism and crossmodal correspondence, this study proposed extended research on cross-modal correspondence between various sound attributes and color properties in a group of non-synesthetes. In Experiment 1, we assessed the associations between each property of sounds and colors. Twenty sounds with five auditory properties (pitch, roughness, sharpness, tempo and discontinuity), each varied over four levels, were used as the sound stimuli. Forty-nine colors with different hues, saturation and brightness were used to match to those sounds. Results revealed that besides pitch and tempo, roughness and sharpness also played roles in sound-color correspondence. Reaction times for sound-hue matches were slightly longer than those for sound-lightness matches. In Experiment 2, a speeded target discrimination task was used to assess whether the associations between sound attributes and color properties could invoke natural cross-modal correspondence and improve participants' cognitive efficiency in cognitive tasks. Several typical sound-color pairings were selected according to the results of Experiment 1. Participants were divided into two groups (congruent and incongruent). In each trial participants had to judge whether the presented color could appropriately be associated with the sound stimuli. Results revealed that participants responded more quickly and accurately in the congruent group than in the incongruent group. There was also no significant difference in reaction times or error rates between sound-hue and sound-lightness. The results of Experiments 1 and 2 indicate the existence of a robust crossmodal correspondence between multiple attributes of sound and color, which also strongly influences performance in cognitive tasks. The inconsistency of the reaction times between sound-hue and sound-lightness across Experiments 1 and 2 is probably owing to the difference in experimental protocol, suggesting that the complexity of the experimental design may be an important factor in crossmodal correspondence phenomena.

  16. An extended research of crossmodal correspondence between color and sound in psychology and cognitive ergonomics

    PubMed Central

    Sun, Xiuwen; Ji, Lingyu; Han, Feng; Wang, Huifen; Liu, Yang; Chen, Yao; Lou, Zhiyuan; Li, Zhuoyun

    2018-01-01

    Based on the existing research on sound symbolism and crossmodal correspondence, this study proposed extended research on cross-modal correspondence between various sound attributes and color properties in a group of non-synesthetes. In Experiment 1, we assessed the associations between each property of sounds and colors. Twenty sounds with five auditory properties (pitch, roughness, sharpness, tempo and discontinuity), each varied over four levels, were used as the sound stimuli. Forty-nine colors with different hues, saturation and brightness were used to match to those sounds. Results revealed that besides pitch and tempo, roughness and sharpness also played roles in sound-color correspondence. Reaction times for sound-hue matches were slightly longer than those for sound-lightness matches. In Experiment 2, a speeded target discrimination task was used to assess whether the associations between sound attributes and color properties could invoke natural cross-modal correspondence and improve participants’ cognitive efficiency in cognitive tasks. Several typical sound-color pairings were selected according to the results of Experiment 1. Participants were divided into two groups (congruent and incongruent). In each trial participants had to judge whether the presented color could appropriately be associated with the sound stimuli. Results revealed that participants responded more quickly and accurately in the congruent group than in the incongruent group. There was also no significant difference in reaction times or error rates between sound-hue and sound-lightness. The results of Experiments 1 and 2 indicate the existence of a robust crossmodal correspondence between multiple attributes of sound and color, which also strongly influences performance in cognitive tasks. The inconsistency of the reaction times between sound-hue and sound-lightness across Experiments 1 and 2 is probably owing to the difference in experimental protocol, suggesting that the complexity of the experimental design may be an important factor in crossmodal correspondence phenomena. PMID:29507834

  17. Chemical Memory Reactions Induced Bursting Dynamics in Gene Expression

    PubMed Central

    Tian, Tianhai

    2013-01-01

    Memory is a ubiquitous phenomenon in biological systems in which the present system state is not entirely determined by the current conditions but also depends on the time evolutionary path of the system. Specifically, many memorial phenomena are characterized by chemical memory reactions that may fire under particular system conditions. These conditional chemical reactions contradict the extant stochastic approaches for modeling chemical kinetics and have increasingly posed significant challenges to mathematical modeling and computer simulation. To tackle the challenge, I proposed a novel theory consisting of the memory chemical master equations and a memory stochastic simulation algorithm. A stochastic model for single-gene expression was proposed to illustrate the key function of memory reactions in inducing the bursting dynamics of gene expression that have been observed in recent experiments. The importance of memory reactions has been further validated by the stochastic model of the p53-MDM2 core module. Simulations showed that memory reactions are a major mechanism for realizing both sustained oscillations of p53 protein numbers in single cells and damped oscillations over a population of cells. These successful applications of the memory modeling framework suggest that this innovative theory is an effective and powerful tool to study memory processes and conditional chemical reactions in a wide range of complex biological systems. PMID:23349679

  18. Chemical memory reactions induced bursting dynamics in gene expression.

    PubMed

    Tian, Tianhai

    2013-01-01

    Memory is a ubiquitous phenomenon in biological systems in which the present system state is not entirely determined by the current conditions but also depends on the time evolutionary path of the system. Specifically, many memorial phenomena are characterized by chemical memory reactions that may fire under particular system conditions. These conditional chemical reactions contradict the extant stochastic approaches for modeling chemical kinetics and have increasingly posed significant challenges to mathematical modeling and computer simulation. To tackle the challenge, I proposed a novel theory consisting of the memory chemical master equations and a memory stochastic simulation algorithm. A stochastic model for single-gene expression was proposed to illustrate the key function of memory reactions in inducing the bursting dynamics of gene expression that have been observed in recent experiments. The importance of memory reactions has been further validated by the stochastic model of the p53-MDM2 core module. Simulations showed that memory reactions are a major mechanism for realizing both sustained oscillations of p53 protein numbers in single cells and damped oscillations over a population of cells. These successful applications of the memory modeling framework suggest that this innovative theory is an effective and powerful tool to study memory processes and conditional chemical reactions in a wide range of complex biological systems.
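
    The memory-reaction algorithm itself is not reproduced in the abstract, but the transcriptional bursting it targets is conventionally illustrated with the memoryless two-state (telegraph) gene model. The following sketch simulates that baseline model with the standard Gillespie SSA; it is an illustration of bursting dynamics, not the author's framework, and all rate constants are arbitrary values chosen for the example.

    ```python
    import random

    def telegraph_gene(t_end, k_on, k_off, k_tx, k_deg, rng):
        """Two-state (telegraph) gene model via Gillespie's SSA: the promoter
        toggles on/off, transcribes only while on, and mRNA decays linearly.
        Slow switching with fast transcription produces bursty mRNA dynamics."""
        t, gene_on, m = 0.0, False, 0
        while True:
            rates = [
                k_off if gene_on else k_on,   # promoter toggle
                k_tx if gene_on else 0.0,     # transcription (only while on)
                k_deg * m,                    # mRNA degradation
            ]
            total = sum(rates)
            t += rng.expovariate(total)
            if t >= t_end:
                return m
            u = rng.random() * total
            if u < rates[0]:
                gene_on = not gene_on
            elif u < rates[0] + rates[1]:
                m += 1
            else:
                m -= 1

    # Stationary mean mRNA = (k_tx / k_deg) * k_on / (k_on + k_off) = 10 here.
    rng = random.Random(5)
    samples = [telegraph_gene(30.0, 0.1, 0.1, 20.0, 1.0, rng) for _ in range(100)]
    mean_m = sum(samples) / len(samples)
    ```

    With slow promoter switching (k_on = k_off = 0.1) the mRNA count distribution is strongly over-dispersed relative to a Poisson, which is the usual statistical signature of bursting.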

  19. Codifference as a practical tool to measure interdependence

    NASA Astrophysics Data System (ADS)

    Wyłomańska, Agnieszka; Chechkin, Aleksei; Gajda, Janusz; Sokolov, Igor M.

    2015-03-01

    Correlation and spectral analysis represent the standard tools to study interdependence in statistical data. However, for stochastic processes with heavy-tailed distributions whose variance diverges, these tools are inadequate. Heavy-tailed processes are ubiquitous in nature and finance. We here discuss codifference as a convenient measure to study statistical interdependence, and we aim to give a short introductory review of its properties. Taking different known stochastic processes as generic examples, we present explicit formulas for their codifferences. We show that for Gaussian processes codifference is equivalent to covariance. For processes with finite variance these two measures behave similarly with time. For processes with infinite variance the covariance does not exist, but the codifference remains well defined. We demonstrate the practical importance of the codifference by extracting this function from simulated data as well as from real data taken from the turbulent plasma of a fusion device and from a financial market. We conclude that the codifference serves as a convenient practical tool to study interdependence for stochastic processes with both finite and infinite variance.
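
    The codifference can be estimated directly from data by replacing the expectations in its definition with sample means of the empirical characteristic function. A minimal stdlib-Python sketch (the function name and parameter values are illustrative, not taken from the paper), checked against the Gaussian case where codifference reduces to covariance:

    ```python
    import cmath
    import math
    import random

    def codifference(x, y):
        """Empirical codifference
            CD = ln E[e^{i(X-Y)}] - ln E[e^{iX}] - ln E[e^{-iY}],
        with expectations replaced by sample means. For jointly Gaussian
        X and Y this quantity equals Cov(X, Y)."""
        n = len(x)
        e_diff = sum(cmath.exp(1j * (a - b)) for a, b in zip(x, y)) / n
        e_x = sum(cmath.exp(1j * a) for a in x) / n
        e_y = sum(cmath.exp(-1j * b) for b in y) / n
        return (cmath.log(e_diff) - cmath.log(e_x) - cmath.log(e_y)).real

    # Sanity check: correlated standard normals with Cov(X, Y) = rho.
    rng = random.Random(1)
    rho = 0.6
    x, y = [], []
    for _ in range(100_000):
        g1, g2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x.append(g1)
        y.append(rho * g1 + math.sqrt(1 - rho ** 2) * g2)
    cd_est = codifference(x, y)   # should land near rho = 0.6
    ```

    Unlike the sample covariance, the same estimator stays finite and informative when the data come from an infinite-variance (e.g. stable) distribution, which is the regime the abstract targets.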

  20. Modeling delay in genetic networks: From delay birth-death processes to delay stochastic differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, Chinmaya; López, José Manuel; Azencott, Robert

    Delay is an important and ubiquitous aspect of many biochemical processes. For example, delay plays a central role in the dynamics of genetic regulatory networks as it stems from the sequential assembly of first mRNA and then protein. Genetic regulatory networks are therefore frequently modeled as stochastic birth-death processes with delay. Here, we examine the relationship between delay birth-death processes and their appropriate approximating delay chemical Langevin equations. We prove a quantitative bound on the error between the pathwise realizations of these two processes. Our results hold for both fixed delay and distributed delay. Simulations demonstrate that the delay chemical Langevin approximation is accurate even at moderate system sizes. It captures dynamical features such as the oscillatory behavior in negative feedback circuits, cross-correlations between nodes in a network, and spatial and temporal information in two commonly studied motifs of metastability in biochemical systems. Overall, these results provide a foundation for using delay stochastic differential equations to approximate the dynamics of birth-death processes with delay.
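
    As a concrete picture of the delay birth-death processes discussed above, here is a minimal rejection-style delayed Gillespie simulation in Python. This is a generic sketch, not the authors' code: births complete a fixed `delay` after the reaction fires, deaths are instantaneous, and the rate values in the usage line are arbitrary.

    ```python
    import heapq
    import random

    def delayed_birth_death(t_end, k_birth, k_death, delay, seed=0):
        """Birth-death network 0 -> X -> 0 where each birth completes only
        `delay` time units after its reaction fires (fixed delay), simulated
        with a rejection-style delayed Gillespie algorithm."""
        rng = random.Random(seed)
        t, n = 0.0, 0
        pending = []                        # completion times of in-flight births
        while True:
            total_rate = k_birth + k_death * n
            dt = rng.expovariate(total_rate)
            if pending and pending[0] <= t + dt:
                # a scheduled birth completes before the next reaction:
                # jump to it, update the state, and resample the waiting time
                t = heapq.heappop(pending)
                if t >= t_end:
                    return n
                n += 1
                continue
            t += dt
            if t >= t_end:
                return n
            if rng.random() < k_birth / total_rate:
                heapq.heappush(pending, t + delay)   # schedule a delayed birth
            else:
                n -= 1                               # death fires immediately
    ```

    Because the delayed arrivals are still a Poisson stream at rate k_birth, the stationary mean stays k_birth/k_death regardless of the delay, which gives a quick correctness check on the simulation.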

  1. Zonostrophic instability driven by discrete particle noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    St-Onge, D. A.; Krommes, J. A.

    The consequences of discrete particle noise for a system possessing a possibly unstable collective mode are discussed. It is argued that a zonostrophic instability (of homogeneous turbulence to the formation of zonal flows) occurs just below the threshold for linear instability. The scenario provides a new interpretation of the random forcing that is ubiquitously invoked in stochastic models such as the second-order cumulant expansion or stochastic structural instability theory; neither intrinsic turbulence nor coupling to extrinsic turbulence is required. A representative calculation of the zonostrophic neutral curve is made for a simple two-field model of toroidal ion-temperature-gradient-driven modes. To the extent that the damping of zonal flows is controlled by the ion-ion collision rate, the point of zonostrophic instability is independent of that rate. Published by AIP Publishing.

  2. Zonostrophic instability driven by discrete particle noise

    DOE PAGES

    St-Onge, D. A.; Krommes, J. A.

    2017-04-01

    The consequences of discrete particle noise for a system possessing a possibly unstable collective mode are discussed. It is argued that a zonostrophic instability (of homogeneous turbulence to the formation of zonal flows) occurs just below the threshold for linear instability. The scenario provides a new interpretation of the random forcing that is ubiquitously invoked in stochastic models such as the second-order cumulant expansion or stochastic structural instability theory; neither intrinsic turbulence nor coupling to extrinsic turbulence is required. A representative calculation of the zonostrophic neutral curve is made for a simple two-field model of toroidal ion-temperature-gradient-driven modes. To the extent that the damping of zonal flows is controlled by the ion-ion collision rate, the point of zonostrophic instability is independent of that rate. Published by AIP Publishing.

  3. Suppression and Working Memory in Auditory Comprehension of L2 Narratives: Evidence from Cross-Modal Priming

    ERIC Educational Resources Information Center

    Wu, Shiyu; Ma, Zheng

    2016-01-01

    Using a cross-modal priming task, the present study explores whether Chinese-English bilinguals process goal related information during auditory comprehension of English narratives like native speakers. Results indicate that English native speakers adopted both mechanisms of suppression and enhancement to modulate the activation of goals and keep…

  4. Sound Symbolism in Infancy: Evidence for Sound-Shape Cross-Modal Correspondences in 4-Month-Olds

    ERIC Educational Resources Information Center

    Ozturk, Ozge; Krehm, Madelaine; Vouloumanos, Athena

    2013-01-01

    Perceptual experiences in one modality are often dependent on activity from other sensory modalities. These cross-modal correspondences are also evident in language. Adults and toddlers spontaneously and consistently map particular words (e.g., "kiki") to particular shapes (e.g., angular shapes). However, the origins of these systematic mappings…

  5. Behold the voice of wrath: cross-modal modulation of visual attention by anger prosody.

    PubMed

    Brosch, Tobias; Grandjean, Didier; Sander, David; Scherer, Klaus R

    2008-03-01

    Emotionally relevant stimuli are prioritized in human information processing. It has repeatedly been shown that selective spatial attention is modulated by the emotional content of a stimulus. Until now, studies investigating this phenomenon have only examined within-modality effects, most frequently using pictures of emotional stimuli to modulate visual attention. In this study, we used simultaneously presented utterances with emotional and neutral prosody as cues for a visually presented target in a cross-modal dot probe task. Response times towards targets were faster when they appeared at the location of the source of the emotional prosody. Our results show for the first time a cross-modal attentional modulation of visual attention by auditory affective prosody.

  6. Approximation and inference methods for stochastic biochemical kinetics—a tutorial review

    NASA Astrophysics Data System (ADS)

    Schnoerr, David; Sanguinetti, Guido; Grima, Ramon

    2017-03-01

    Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the chemical master equation. Despite its simple structure, no analytic solutions to the chemical master equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state-of-the-art methods. First, we motivate and introduce deterministic and stochastic methods for modelling chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight some of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed in the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics.
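
    One of the approximation methods covered in the review, the chemical Langevin equation, can be made concrete for the simplest reaction network 0 → X → 0 (birth propensity k, death propensity γX). The Euler-Maruyama sketch below is an illustrative implementation with arbitrarily chosen parameters, not code from the review; its stationary mean can be checked against the exact master-equation value k/γ.

    ```python
    import math
    import random

    def cle_birth_death(x0, k, gamma, t_end, dt, rng):
        """Euler-Maruyama integration of the chemical Langevin equation
            dX = (k - gamma*X) dt + sqrt(k) dW1 - sqrt(gamma*X) dW2
        for the birth-death network 0 -> X -> 0: one independent noise
        channel per reaction, each scaled by the root of its propensity."""
        x = float(x0)
        for _ in range(int(t_end / dt)):
            a_birth, a_death = k, gamma * x       # reaction propensities
            x += (a_birth - a_death) * dt
            x += math.sqrt(a_birth * dt) * rng.gauss(0, 1)
            x -= math.sqrt(a_death * dt) * rng.gauss(0, 1)
            x = max(x, 0.0)   # the CLE can dip below zero; clamp for the sqrt
        return x

    # The exact chemical master equation gives a Poisson stationary law with
    # mean k/gamma = 50 for these parameters; the CLE should reproduce it.
    rng = random.Random(42)
    endpoints = [cle_birth_death(0, 50.0, 1.0, 20.0, 0.01, rng) for _ in range(200)]
    mean_x = sum(endpoints) / len(endpoints)
    ```

    The clamping line reflects a known weakness of the chemical Langevin equation at low copy numbers, which is one reason the review also discusses hybrid and moment-closure alternatives.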

  7. Empirical tests of Zipf's law mechanism in open source Linux distribution.

    PubMed

    Maillart, T; Sornette, D; Spaeth, S; von Krogh, G

    2008-11-21

    Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.
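
    The proportional-growth ingredient the authors test empirically can be demonstrated with a toy Simon/Yule simulation, a generic sketch rather than the paper's analysis (the birth probability and step count are arbitrary): new "projects" appear at a fixed rate, while existing ones grow in proportion to their size, producing a heavy-tailed size distribution.

    ```python
    import random

    def proportional_growth(steps, p_new, rng):
        """Simon-style proportional growth: each step either creates a new
        unit-size project (probability p_new) or adds one unit to an existing
        project chosen with probability proportional to its current size.
        Size-proportional choice is done in O(1) by sampling a uniform entry
        from `units`, which holds one project index per unit of size."""
        sizes = [1]
        units = [0]                       # one entry per unit; value = project index
        for _ in range(steps):
            if rng.random() < p_new:
                sizes.append(1)
                units.append(len(sizes) - 1)
            else:
                i = rng.choice(units)     # proportional (preferential) selection
                sizes[i] += 1
                units.append(i)
        return sizes

    rng = random.Random(7)
    sizes = sorted(proportional_growth(20_000, 0.1, rng), reverse=True)
    ```

    After 20,000 steps the largest project dwarfs the median one, the qualitative hallmark of the Zipf-like rank-size distributions the paper documents in Linux package data.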

  8. Self-organization, collective decision making and resource exploitation strategies in social insects

    NASA Astrophysics Data System (ADS)

    Nicolis, S. C.; Dussutour, A.

    2008-10-01

    Amplifying communications are a ubiquitous characteristic of group-living animals. This work is concerned with their role in the processes of food recruitment and resource exploitation by social insects. The collective choices made by ants faced with different food sources are analyzed using both a mean field description and a stochastic approach. Emphasis is placed on the possibility of optimizing the recruitment and exploitation strategies through an appropriate balance between individual variability, cooperative interactions and environmental constraints.
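
    Amplifying recruitment of the kind described here is often formalized with a Deneubourg-type choice function. The stochastic toy version below (a generic sketch with arbitrary parameters, not the authors' model) shows the core phenomenon: even with two identical food sources, nonlinear amplification lets the colony collectively settle on one of them.

    ```python
    import random

    def ant_choice(n_ants, k, h, rng):
        """Sequential binary choice with amplification: each ant picks
        branch 1 with probability (k + x1)^h / ((k + x1)^h + (k + x2)^h),
        where x1 and x2 count the previous ants on each branch. With h > 1
        the symmetric state is unstable and one branch comes to dominate."""
        x1 = x2 = 0
        for _ in range(n_ants):
            w1 = (k + x1) ** h
            w2 = (k + x2) ** h
            if rng.random() < w1 / (w1 + w2):
                x1 += 1
            else:
                x2 += 1
        return x1, x2

    # Fraction of the colony on the majority branch, over several colonies.
    fracs = []
    for seed in range(5):
        x1, x2 = ant_choice(2000, 20, 2, random.Random(seed))
        fracs.append(max(x1, x2) / (x1 + x2))
    ```

    Which branch wins is decided by early random fluctuations, so a mean-field description predicts the bifurcation while only the stochastic model captures the symmetry breaking itself, mirroring the two levels of description used in the study.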

  9. Nuclear magnetic relaxation by the dipolar EMOR mechanism: Multi-spin systems

    NASA Astrophysics Data System (ADS)

    Chang, Zhiwei; Halle, Bertil

    2017-08-01

    In aqueous systems with immobilized macromolecules, including biological tissues, the longitudinal spin relaxation of water protons is primarily induced by exchange-mediated orientational randomization (EMOR) of intra- and intermolecular magnetic dipole-dipole couplings. Starting from the stochastic Liouville equation, we have previously developed a rigorous EMOR relaxation theory for dipole-coupled two-spin and three-spin systems. Here, we extend the stochastic Liouville theory to four-spin systems and use these exact results as a guide for constructing an approximate multi-spin theory, valid for spin systems of arbitrary size. This so-called generalized stochastic Redfield equation (GSRE) theory includes the effects of longitudinal-transverse cross-mode relaxation, which gives rise to an inverted step in the relaxation dispersion profile, and coherent spin mode transfer among solid-like spins, which may be regarded as generalized spin diffusion. The GSRE theory is compared to an existing theory, based on the extended Solomon equations, which does not incorporate these phenomena. Relaxation dispersion profiles are computed from the GSRE theory for systems of up to 16 protons, taken from protein crystal structures. These profiles span the range from the motional narrowing limit, where the coherent mode transfer plays a major role, to the ultra-slow motion limit, where the zero-field rate is closely related to the strong-collision limit of the dipolar relaxation rate. Although a quantitative analysis of experimental data is beyond the scope of this work, it is clear from the magnitude of the predicted relaxation rate and the shape of the relaxation dispersion profile that the dipolar EMOR mechanism is the principal cause of water-1H low-field longitudinal relaxation in aqueous systems of immobilized macromolecules, including soft biological tissues. 
The relaxation theory developed here therefore provides a basis for molecular-level interpretation of endogenous soft-tissue image contrast obtained by the emerging low-field magnetic resonance imaging techniques.

  10. Coupling between Theta Oscillations and Cognitive Control Network during Cross-Modal Visual and Auditory Attention: Supramodal vs Modality-Specific Mechanisms.

    PubMed

    Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T

    2016-01-01

    Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of different cognitive control operations in visual attention paradigms. However, the synchronization sources of these signals, as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Negative correlation was localized to cortical regions associated with the default mode network, and positive correlation to ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in fronto-parietal areas that was not observed in the auditory condition. A positive correlation of theta and BOLD activity was observed in auditory cortex, while a negative correlation of theta and BOLD activity was observed in visual cortex during auditory attention. The data support a supramodal interaction of theta activity with DMN function, and modality-associated processes within fronto-parietal networks related to top-down theta-related cognitive control in cross-modal visual attention. In sensory cortices, on the other hand, there are opposing effects of theta activity during cross-modal auditory attention.

  11. Evolution of crossmodal reorganization of the voice area in cochlear-implanted deaf patients.

    PubMed

    Rouger, Julien; Lagleyre, Sébastien; Démonet, Jean-François; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal

    2012-08-01

    Psychophysical and neuroimaging studies in both animal and human subjects have clearly demonstrated that cortical plasticity following sensory deprivation leads to a brain functional reorganization that favors the spared modalities. In postlingually deaf patients, the use of a cochlear implant (CI) allows a recovery of the auditory function, which will probably counteract the cortical crossmodal reorganization induced by hearing loss. To study the dynamics of such reversed crossmodal plasticity, we designed a longitudinal neuroimaging study involving the follow-up of 10 postlingually deaf adult CI users engaged in a visual speechreading task. While speechreading activates Broca's area in normally hearing subjects (NHS), the activity level elicited in this region in CI patients is abnormally low and increases progressively with post-implantation time. Furthermore, speechreading in CI patients induces abnormal crossmodal activations in right anterior regions of the superior temporal cortex normally devoted to processing human voice stimuli (temporal voice-sensitive areas-TVA). These abnormal activity levels diminish with post-implantation time and tend towards the levels observed in NHS. First, our study revealed that the neuroplasticity after cochlear implantation involves not only auditory but also visual and audiovisual speech processing networks. Second, our results suggest that during deafness, the functional links between cortical regions specialized in face and voice processing are reallocated to support speech-related visual processing through cross-modal reorganization. Such reorganization allows a more efficient audiovisual integration of speech after cochlear implantation. These compensatory sensory strategies are later completed by the progressive restoration of the visuo-audio-motor speech processing loop, including Broca's area. Copyright © 2011 Wiley Periodicals, Inc.

  12. Changes of the directional brain networks related with brain plasticity in patients with long-term unilateral sensorineural hearing loss.

    PubMed

    Zhang, G-Y; Yang, M; Liu, B; Huang, Z-C; Li, J; Chen, J-Y; Chen, H; Zhang, P-P; Liu, L-J; Wang, J; Teng, G-J

    2016-01-28

    Previous studies often report that early auditory deprivation or congenital deafness contributes to cross-modal reorganization in the auditory-deprived cortex, and this cross-modal reorganization limits the clinical benefit from cochlear prosthetics. However, there are inconsistencies among study results on cortical reorganization in subjects with long-term unilateral sensorineural hearing loss (USNHL). It is also unclear whether a cross-modal plasticity of the auditory cortex similar to that seen in early or congenital deafness exists for acquired monaural deafness. To address this issue, we constructed directional brain functional networks based on entropy connectivity from resting-state functional MRI and examined changes in these networks. Thirty-four individuals with long-term USNHL and seventeen normally hearing individuals participated in the test; all USNHL patients had acquired deafness. We found that certain brain regions of the sensorimotor and visual networks presented enhanced synchronous output entropy connectivity with the left primary auditory cortex in individuals with left long-term USNHL as compared with normally hearing individuals. In particular, the left USNHL group showed more significant changes in entropy connectivity than the right USNHL group; no significant plastic changes were observed in the right USNHL group. Our results indicate that the left primary auditory cortex (the non-auditory-deprived cortex) in patients with left USNHL has been reorganized by visual and sensorimotor modalities through cross-modal plasticity. Furthermore, this cross-modal reorganization also alters the directional brain functional networks. Auditory deprivation on the left or right side thus exerts different influences on the human brain. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  13. Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review

    PubMed Central

    Hidaka, Souta; Teramoto, Wataru; Sugita, Yoichi

    2015-01-01

    Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that multisensory information (vision, audition, tactile sensation, and so on) can perceptually interact with each other in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view regarding crossmodal interactions holds that vision is superior to audition in spatial processing, but audition is dominant over vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound could have a driving effect on visual motion perception. Moreover, studies regarding perceptual associative learning reported that, after association is established between a sound sequence without spatial information and visual motion information, the sound sequence could trigger visual motion perception. Other sensory information, such as motor action or smell, has also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns could be observed in several brain areas, including the motion processing areas, between spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information could mutually interact in spatiotemporal processing in perception of the external world and that common perceptual and neural underlying mechanisms would exist for spatiotemporal processing. PMID:26733827

  14. Re Viewing Listening: "Clip Culture" and Cross-Modal Learning in the Music Classroom

    ERIC Educational Resources Information Center

    Webb, Michael

    2010-01-01

    This article envisions a new, cross-modal approach to classroom music listening, one that takes advantage of students' rising screen literacy and the ever-expanding archive of music-related visual material available on DVD and on video sharing sites such as YouTube. It is grounded in current literature on music performance studies, embodied music…

  15. Parallel pathways for cross-modal memory retrieval in Drosophila.

    PubMed

    Zhang, Xiaonan; Ren, Qingzhong; Guo, Aike

    2013-05-15

    Memory-retrieval processing of cross-modal sensory preconditioning is vital for understanding the plasticity underlying the interactions between modalities. As part of the sensory preconditioning paradigm, it has been hypothesized that the conditioned response to an unreinforced cue depends on the memory of the reinforced cue via a sensory link between the two cues. To test this hypothesis, we studied cross-modal memory-retrieval processing in a genetically tractable model organism, Drosophila melanogaster. By expressing the dominant temperature-sensitive shibire(ts1) (shi(ts1)) transgene, which blocks synaptic vesicle recycling of specific neural subsets with the Gal4/UAS system at the restrictive temperature, we specifically blocked visual and olfactory memory retrieval, either alone or in combination; memory acquisition remained intact for these modalities. Blocking the memory retrieval of the reinforced olfactory cues did not impair the conditioned response to the unreinforced visual cues or vice versa, in contrast to the canonical memory-retrieval processing of sensory preconditioning. In addition, these conditioned responses can be abolished by blocking the memory retrieval of the two modalities simultaneously. In sum, our results indicated that a conditioned response to an unreinforced cue in cross-modal sensory preconditioning can be recalled through parallel pathways.

  16. Crossmodal interactions during non-linguistic auditory processing in cochlear-implanted deaf patients.

    PubMed

    Barone, Pascal; Chambaudie, Laure; Strelnikov, Kuzma; Fraysse, Bernard; Marx, Mathieu; Belin, Pascal; Deguine, Olivier

    2016-10-01

    Due to signal distortion, speech comprehension in cochlear-implanted (CI) patients relies strongly on visual information, a compensatory strategy supported by important cortical crossmodal reorganisations. Though crossmodal interactions are evident for speech processing, it is unclear whether a visual influence is observed in CI patients during non-linguistic visual-auditory processing, such as face-voice interactions, which are important in social communication. We analyse and compare visual-auditory interactions in CI patients and normal-hearing subjects (NHS) at equivalent auditory performance levels. Proficient CI patients and NHS performed a voice-gender categorisation in the visual-auditory modality from a morphing-generated voice continuum between male and female speakers, while ignoring the presentation of a male or female visual face. Our data show that during the face-voice interaction, CI deaf patients are strongly influenced by visual information when performing an auditory gender categorisation task, in spite of maximum recovery of auditory speech. No such effect is observed in NHS, even in situations of CI simulation. Our hypothesis is that the functional crossmodal reorganisation that occurs in deafness could influence nonverbal processing, such as face-voice interaction, which is important for the patient's internal supramodal representation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Generalization of cross-modal stimulus equivalence classes: operant processes as components in human category formation.

    PubMed Central

    Lane, S D; Clow, J K; Innis, A; Critchfield, T S

    1998-01-01

    This study employed a stimulus-class rating procedure to explore whether stimulus equivalence and stimulus generalization can combine to promote the formation of open-ended categories incorporating cross-modal stimuli. A pretest of simple auditory discrimination indicated that subjects (college students) could discriminate among a range of tones used in the main study. Before beginning the main study, 10 subjects learned to use a rating procedure for categorizing sets of stimuli as class consistent or class inconsistent. After completing conditional discrimination training with new stimuli (shapes and tones), the subjects demonstrated the formation of cross-modal equivalence classes. Subsequently, the class-inclusion rating procedure was reinstituted, this time with cross-modal sets of stimuli drawn from the equivalence classes. On some occasions, the tones of the equivalence classes were replaced by novel tones. The probability that these novel sets would be rated as class consistent was generally a function of the auditory distance between the novel tone and the tone that was explicitly included in the equivalence class. These data extend prior work on generalization of equivalence classes, and support the role of operant processes in human category formation. PMID:9821680

  18. The time-course of the cross-modal semantic modulation of visual picture processing by naturalistic sounds and spoken words.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2013-01-01

    The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.

  19. Debates—Stochastic subsurface hydrology from theory to practice: A geologic perspective

    NASA Astrophysics Data System (ADS)

    Fogg, Graham E.; Zhang, Yong

    2016-12-01

    A geologic perspective on stochastic subsurface hydrology offers insights on representativeness of prominent field experiments and their general relevance to other hydrogeologic settings. Although the gains in understanding afforded by some 30 years of research in stochastic hydrogeology have been important and even essential, adoption of the technologies and insights by practitioners has been limited, due in part to a lack of geologic context in both the field and theoretical studies. In general, unintentional, biased sampling of hydraulic conductivity (K) using mainly hydrologic, well-based methods has resulted in the tacit assumption by many in the community that the subsurface is much less heterogeneous than in reality. Origins of the bias range from perspectives that are limited by scale and the separation of disciplines (geology, soils, aquifer hydrology, groundwater hydraulics, etc.). Consequences include a misfit between stochastic hydrogeology research results and the needs of, for example, practitioners who are dealing with local plume site cleanup that is often severely hampered by very low velocities in the very aquitard facies that are commonly overlooked or missing from low-variance stochastic models or theories. We suggest that answers to many of the problems exposed by stochastic hydrogeology research can be found through greater geologic integration into the analyses, including the recognition of not only the nearly ubiquitously high variances of K but also the strong tendency for the good connectivity of the high-K facies when spatially persistent geologic unconformities are absent. We further suggest that although such integration may appear to make the contaminant transport problem more complex, expensive and intractable, it may in fact lead to greater simplification and more reliable, less expensive site characterizations and models.

  20. Non-Orthogonal Multiple Access for Ubiquitous Wireless Sensor Networks.

    PubMed

    Anwar, Asim; Seet, Boon-Chong; Ding, Zhiguo

    2018-02-08

    Ubiquitous wireless sensor networks (UWSNs) have become a critical technology for enabling smart cities and other ubiquitous monitoring applications. Their deployment, however, can be seriously hampered by the limited spectrum available for communication among the sheer number of sensors. To support the communication needs of UWSNs without requiring more spectrum resources, the power-domain non-orthogonal multiple access (NOMA) technique originally proposed for 5th Generation (5G) cellular networks is investigated for UWSNs for the first time in this paper. However, unlike 5G networks, which operate in licensed spectrum, UWSNs mostly operate in unlicensed spectrum, where sensors also experience cross-technology interference from other devices sharing the same spectrum. In this paper, we model the interference from various sources at the sensors using a stochastic geometry framework. To evaluate the performance, we derive a new closed-form expression for the outage probability of the sensors in a downlink scenario in an interference-limited environment. In addition, diversity analysis for the ordered NOMA users is performed. Based on the derived outage probability, we evaluate the average link throughput and energy-consumption efficiency of NOMA against the conventional orthogonal multiple access (OMA) technique in UWSNs. Further, the computational complexity required of the NOMA users is presented.
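    The two-user power-domain NOMA principle underlying this analysis can be illustrated with a minimal Monte Carlo sketch. The power allocation, target rates, channel statistics, and the absence of cross-technology interference below are all simplifying assumptions for illustration, not the paper's stochastic-geometry model:

    ```python
    import random

    # Hypothetical two-sensor downlink: the far sensor gets power fraction
    # A1, the near sensor A2 = 1 - A1 and performs successive interference
    # cancellation (SIC). Rayleigh fading, no external interference.
    A1, A2 = 0.8, 0.2          # power allocation (assumed)
    SNR = 100.0                # transmit SNR (20 dB, assumed)
    R1 = R2 = 1.0              # target rates (bits/s/Hz); threshold 2^R - 1

    def rayleigh_gain(mean, rng):
        return rng.expovariate(1.0 / mean)   # |h|^2 is exponential

    def outage_probs(n, rng):
        th1, th2 = 2 ** R1 - 1, 2 ** R2 - 1
        out_far = out_near = 0
        for _ in range(n):
            g_far = rayleigh_gain(0.2, rng)  # weaker average channel
            g_near = rayleigh_gain(1.0, rng)
            # Far sensor decodes its own signal, treating the near
            # sensor's signal as interference.
            sinr_far = A1 * SNR * g_far / (A2 * SNR * g_far + 1)
            out_far += sinr_far < th1
            # Near sensor (SIC): must decode the far signal first,
            # then its own, now interference-free, signal.
            sinr_sic = A1 * SNR * g_near / (A2 * SNR * g_near + 1)
            sinr_near = A2 * SNR * g_near
            out_near += (sinr_sic < th1) or (sinr_near < th2)
        return out_far / n, out_near / n

    rng = random.Random(1)
    p_far, p_near = outage_probs(100000, rng)
    print(p_far > p_near)  # the far (weaker) sensor sees higher outage
    ```

    With these assumed parameters the far sensor's outage is governed by the condition SINR < 2^R1 - 1 under its weaker channel, reproducing the qualitative ordering that motivates the paper's diversity analysis for ordered users.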

  1. Visual and cross-modal cues increase the identification of overlapping visual stimuli in Balint's syndrome.

    PubMed

    D'Imperio, Daniela; Scandola, Michele; Gobbetto, Valeria; Bulgarelli, Cristina; Salgarello, Matteo; Avesani, Renato; Moro, Valentina

    2017-10-01

    Cross-modal interactions improve the processing of external stimuli, particularly when an isolated sensory modality is impaired. When information from different modalities is integrated, object recognition is facilitated probably as a result of bottom-up and top-down processes. The aim of this study was to investigate the potential effects of cross-modal stimulation in a case of simultanagnosia. We report a detailed analysis of clinical symptoms and an 18F-fluorodeoxyglucose (FDG) brain positron emission tomography/computed tomography (PET/CT) study of a patient affected by Balint's syndrome, a rare and invasive visual-spatial disorder following bilateral parieto-occipital lesions. An experiment was conducted to investigate the effects of visual and nonvisual cues on performance in tasks involving the recognition of overlapping pictures. Four modalities of sensory cues were used: visual, tactile, olfactory, and auditory. Data from neuropsychological tests showed the presence of ocular apraxia, optic ataxia, and simultanagnosia. The results of the experiment indicate a positive effect of the cues on the recognition of overlapping pictures, not only in the identification of the congruent valid-cued stimulus (target) but also in the identification of the other, noncued stimuli. All the sensory modalities analyzed (except the auditory stimulus) were efficacious in terms of increasing visual recognition. Cross-modal integration improved the patient's ability to recognize overlapping figures. However, while in the visual unimodal modality both bottom-up (priming, familiarity effect, disengagement of attention) and top-down processes (mental representation and short-term memory, the endogenous orientation of attention) are involved, in the cross-modal integration it is semantic representations that mainly activate visual recognition processes. These results are potentially useful for the design of rehabilitation training for attentional and visual-perceptual deficits.

  2. THE REELIN RECEPTORS VLDLR AND ApoER2 REGULATE SENSORIMOTOR GATING IN MICE

    PubMed Central

    Barr, Alasdair M.; Fish, Kenneth N.; Markou, Athina

    2007-01-01

    Postmortem brain loss of reelin is noted in schizophrenia patients. Accordingly, heterozygous reeler mutant mice have been proposed as a putative model of this disorder. Little is known, however, about the involvement of the two receptors for reelin, Very-Low-Density Lipoprotein Receptor (VLDLR) and Apolipoprotein E Receptor 2 (ApoER2), in pre-cognitive processes of relevance to deficits seen in schizophrenia. Thus, we evaluated sensorimotor gating in mutant mice heterozygous or homozygous for the two reelin receptors. Mutant mice lacking one of these reelin receptors were tested for prepulse inhibition (PPI) of the acoustic startle reflex prior to and following puberty, and on a crossmodal PPI task involving the presentation of acoustic and tactile stimuli. Furthermore, because schizophrenia patients show increased sensitivity to N-methyl-D-aspartate (NMDA) receptor blockade, we assessed the sensitivity of these mice to the PPI-disruptive effects of the NMDA receptor antagonist phencyclidine. The results demonstrated that acoustic PPI did not differ between mutant and wildtype mice. However, VLDLR homozygous mice displayed significant deficits in crossmodal PPI, while ApoER2 heterozygous and homozygous mice displayed significantly increased crossmodal PPI. Both ApoER2 and VLDLR heterozygous and homozygous mice exhibited greater sensitivity to the PPI-disruptive effects of phencyclidine than wildtype mice. These results indicate that partial or complete loss of either one of the reelin receptors results in a complex pattern of alterations in PPI function that includes alterations in crossmodal, but not acoustic, PPI and increased sensitivity to NMDA receptor blockade. Thus, reelin receptor function appears to be critically involved in crossmodal PPI and the modulation of the PPI response by NMDA receptors. These findings have relevance to a range of neuropsychiatric disorders that involve sensorimotor gating deficits, including schizophrenia. PMID:17261317

  3. Stochastic modeling of central apnea events in preterm infants.

    PubMed

    Clark, Matthew T; Delos, John B; Lake, Douglas E; Lee, Hoshik; Fairchild, Karen D; Kattwinkel, John; Moorman, J Randall

    2016-04-01

    A near-ubiquitous pathology in very low birth weight infants is neonatal apnea: breathing pauses with slowing of the heart and falling blood oxygen. Events of substantial duration occasionally occur after an infant is discharged from the neonatal intensive care unit (NICU). It is not known whether apneas result from a predictable process or from a stochastic process, but the observation that they occur in seemingly random clusters justifies the use of stochastic models. We use a hidden-Markov model to analyze the distribution of durations of apneas and the distribution of times between apneas. The model suggests the presence of four breathing states, ranging from very stable (with an average lifetime of 12 h) to very unstable (with an average lifetime of 10 s). Although the states themselves are not visible, the mathematical analysis gives estimates of the transition rates among these states. We have obtained these transition rates, and shown how they change with post-menstrual age; as expected, the residence time in the more stable breathing states increases with age. We also extrapolated the model to predict the frequency of very prolonged apnea during the first year of life. This paradigm, stochastic modeling of cardiorespiratory control in neonatal infants to estimate risk for severe clinical events, may be a first step toward personalized risk assessment for life-threatening apnea events after NICU discharge.
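    The four-breathing-state picture can be sketched numerically as a continuous-time Markov chain. Only the two quoted mean lifetimes (12 h and 10 s) come from the abstract; the two intermediate lifetimes and the uniform jump structure are illustrative assumptions, not the fitted model:

    ```python
    import random

    # Four hypothetical breathing states, from very stable (0) to very
    # unstable (3). The 12 h and 10 s lifetimes are from the abstract;
    # the two middle values are assumed for the sketch.
    MEAN_LIFETIME_S = [12 * 3600, 600, 60, 10]

    def sample_residence(state, rng):
        """Exponential residence time in a continuous-time chain state."""
        return rng.expovariate(1.0 / MEAN_LIFETIME_S[state])

    def simulate(n_steps, rng, start=0):
        """Simulate a state path; on leaving a state, jump to a uniformly
        chosen other state (a simplifying assumption)."""
        state, path = start, []
        for _ in range(n_steps):
            path.append((state, sample_residence(state, rng)))
            state = rng.choice([s for s in range(4) if s != state])
        return path

    rng = random.Random(42)
    path = simulate(20000, rng)
    # Empirical mean residence time in the most unstable state (state 3)
    times = [t for s, t in path if s == 3]
    print(round(sum(times) / len(times), 1))  # close to 10 s
    ```

    Fitting such a model to data would replace the assumed jump structure with transition rates estimated from observed apnea durations and inter-apnea intervals, which is the role the hidden-Markov analysis plays in the study.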

  4. Preserved Discrimination Performance and Neural Processing during Crossmodal Attention in Aging

    PubMed Central

    Mishra, Jyoti; Gazzaley, Adam

    2013-01-01

    In a recent study in younger adults (19-29 year olds) we showed evidence that distributed audiovisual attention resulted in improved discrimination performance for audiovisual stimuli compared to focused visual attention. Here, we extend our findings to healthy older adults (60-90 year olds), showing that performance benefits of distributed audiovisual attention in this population match those of younger adults. Specifically, improved performance was revealed in faster response times for semantically congruent audiovisual stimuli during distributed relative to focused visual attention, without any differences in accuracy. For semantically incongruent stimuli, discrimination accuracy was significantly improved during distributed relative to focused attention. Furthermore, event-related neural processing showed intact crossmodal integration in higher performing older adults similar to younger adults. Thus, there was insufficient evidence to support an age-related deficit in crossmodal attention. PMID:24278464

  5. Characteristic sounds facilitate visual search.

    PubMed

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  6. Aging and the visual, haptic, and cross-modal perception of natural object shape.

    PubMed

    Norman, J Farley; Crabtree, Charles E; Norman, Hideko F; Moncrief, Brandon K; Herrmann, Molly; Kapley, Noah

    2006-01-01

    One hundred observers participated in two experiments designed to investigate aging and the perception of natural object shape. In the experiments, younger and older observers performed either a same/different shape discrimination task (experiment 1) or a cross-modal matching task (experiment 2). Quantitative effects of age were found in both experiments. The effect of age in experiment 1 was limited to cross-modal shape discrimination: there was no effect of age upon unimodal (ie within a single perceptual modality) shape discrimination. The effect of age in experiment 2 was eliminated when the older observers were either given an unlimited amount of time to perform the task or when the number of response alternatives was decreased. Overall, the results of the experiments reveal that older observers can effectively perceive 3-D shape from both vision and haptics.

  7. Effect of perceptual load on semantic access by speech in children

    PubMed Central

    Jerger, Susan; Damian, Markus F.; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Hervè

    2013-01-01

    Purpose To examine whether semantic access by speech requires attention in children. Method Children (N=200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multi-modal (distractors: auditory-static face and audiovisual-dynamic face) picture word task. The cross-modal task had a low load, and the multi-modal task had a high load (i.e., naming pictures displayed on a blank screen vs. below the talker's face on his T-shirt, respectively). Semantic content of distractors was manipulated to be related vs. unrelated to the picture (e.g., picture "dog" with distractors "bear" vs. "cheese"). Lavie's (2005) perceptual load model proposes that semantic access is independent of capacity-limited attentional resources if the irrelevant semantic-content manipulation influences naming times on both tasks despite the variation in loads, but dependent on attentional resources exhausted by the higher-load task if irrelevant content influences naming only on the cross-modal (low-load) task. Results Irrelevant semantic content affected performance on both tasks in 6- to 9-year-olds, but only on the cross-modal task in 4- to 5-year-olds. The addition of visual speech did not influence results on the multi-modal task. Conclusion Younger and older children differ in their dependence on attentional resources for semantic access by speech. PMID:22896045

  8. Crossmodal correspondences in product packaging. Assessing color-flavor correspondences for potato chips (crisps).

    PubMed

    Piqueras-Fiszman, Betina; Spence, Charles

    2011-12-01

    We report a study designed to investigate consumers' crossmodal associations between the color of packaging and flavor varieties in crisps (potato chips). This product category was chosen because of the long-established but conflicting color-flavor conventions that exist for the salt and vinegar and cheese and onion flavor varieties in the UK. The use of both implicit and explicit measures of this crossmodal association revealed that consumers responded more slowly, and made more errors, when they had to pair the color and flavor that they implicitly thought of as being "incongruent" with the same response key. Furthermore, clustering consumers by the brand that they normally purchased revealed that the main reason why this pattern of results was observed could be their differing acquaintance with one brand versus another. In addition, when participants tried the two types of crisps from "congruently" and "incongruently" colored packets, some were unable to guess the flavor correctly in the latter case. These strong crossmodal associations did not have a significant effect on participants' hedonic appraisal of the crisps, but did arouse confusion. These results are relevant in terms of R&D, since ascertaining the appropriate color of the packaging across flavor varieties ought normally to help achieve immediate product recognition and consumer satisfaction. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Effect of perceptual load on semantic access by speech in children.

    PubMed

    Jerger, Susan; Damian, Markus F; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Hervé

    2013-04-01

    To examine whether semantic access by speech requires attention in children. Children (N = 200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multimodal (distractors: auditory-static face and audiovisual-dynamic face) picture word task. The cross-modal task had a low load, and the multimodal task had a high load (i.e., respectively naming pictures displayed on a blank screen vs. below the talker's face on his T-shirt). Semantic content of distractors was manipulated to be related vs. unrelated to the picture (e.g., picture "dog" with distractors "bear" vs. "cheese"). If irrelevant semantic content manipulation influences naming times on both tasks despite variations in loads, Lavie's (2005) perceptual load model proposes that semantic access is independent of capacity-limited attentional resources; if, however, irrelevant content influences naming only on the cross-modal task (low load), the perceptual load model proposes that semantic access is dependent on attentional resources exhausted by the higher load task. Irrelevant semantic content affected performance for both tasks in 6- to 9-year-olds but only on the cross-modal task in 4- to 5-year-olds. The addition of visual speech did not influence results on the multimodal task. Younger and older children differ in dependence on attentional resources for semantic access by speech.

  10. Learning to perceive differences in solid shape through vision and touch.

    PubMed

    Norman, J Farley; Clayton, Anna Marie; Norman, Hideko F; Crabtree, Charles E

    2008-01-01

    A single experiment was designed to investigate perceptual learning and the discrimination of 3-D object shape. Ninety-six observers were presented with naturally shaped solid objects either visually, haptically, or across the modalities of vision and touch. The observers' task was to judge whether the two sequentially presented objects on any given trial possessed the same or different 3-D shapes. The results of the experiment revealed that significant perceptual learning occurred in all modality conditions, both unimodal and cross-modal. The amount of the observers' perceptual learning, as indexed by increases in hit rate and d', was similar for all of the modality conditions. The observers' hit rates were highest for the unimodal conditions and lowest in the cross-modal conditions. Lengthening the inter-stimulus interval from 3 to 15 s led to increases in hit rates and decreases in response bias. The results also revealed the existence of an asymmetry between two otherwise equivalent cross-modal conditions: in particular, the observers' perceptual sensitivity was higher for the vision-haptic condition and lower for the haptic-vision condition. In general, the results indicate that effective cross-modal shape comparisons can be made between the modalities of vision and active touch, but that complete information transfer does not occur.

  11. Cortical reorganization in postlingually deaf cochlear implant users: Intra-modal and cross-modal considerations.

    PubMed

    Stropahl, Maren; Chen, Ling-Chia; Debener, Stefan

    2017-01-01

    With the advances of cochlear implant (CI) technology, many deaf individuals can partially regain their hearing ability. However, there is a large variation in the level of recovery. Cortical changes induced by hearing deprivation and restoration with CIs have been thought to contribute to this variation. The current review aims to identify these cortical changes in postlingually deaf CI users and discusses their maladaptive or adaptive relationship to the CI outcome. Overall, intra-modal and cross-modal reorganization patterns have been identified in postlingually deaf CI users in visual and in auditory cortex. Even though cross-modal activation in auditory cortex is considered as maladaptive for speech recovery in CI users, a similar activation relates positively to lip reading skills. Furthermore, cross-modal activation of the visual cortex seems to be adaptive for speech recognition. Currently available evidence points to an involvement of further brain areas and suggests that a focus on the reversal of visual take-over of the auditory cortex may be too limited. Future investigations should consider expanded cortical as well as multi-sensory processing and capture different hierarchical processing steps. Furthermore, prospective longitudinal designs are needed to track the dynamics of cortical plasticity that takes place before and after implantation. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  12. Super Generalized Central Limit Theorem —Limit Distributions for Sums of Non-identical Random Variables with Power Laws—

    NASA Astrophysics Data System (ADS)

    Shintani, Masaru; Umeno, Ken

    2018-04-01

    The power law is present ubiquitously in nature and in our societies. Therefore, it is important to investigate the characteristics of power laws in the current era of big data. In this paper we prove that the superposition of non-identical stochastic processes with power laws converges in density to a unique stable distribution. This property helps explain the universality of stable laws: the sums of the logarithmic returns of non-identical stock price fluctuations follow stable distributions.
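
    The convergence can be illustrated numerically. Below is a minimal Python sketch, not the paper's construction (which covers non-identical processes): it sums identically distributed symmetric power-law variables with tail index alpha < 2 and normalizes by n**(1/alpha), as in the classical generalized central limit theorem. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric power-law variables: |X| ~ Pareto(alpha) with random sign,
# so the tail behaves like P(|X| > x) ~ x**(-alpha).
def sym_pareto(alpha, size, rng):
    mag = rng.pareto(alpha, size) + 1.0   # support [1, inf)
    return rng.choice([-1.0, 1.0], size) * mag

# Sums of n heavy-tailed terms, normalized by n**(1/alpha): for
# alpha < 2 the limit is an alpha-stable law rather than a Gaussian.
alpha, n, trials = 1.5, 1000, 20000
sums = sym_pareto(alpha, (trials, n), rng).sum(axis=1) / n ** (1 / alpha)

print(np.abs(sums).max())
```

    With alpha < 2 the limiting stable law has infinite variance, which is why the normalized sums keep producing extreme values no matter how many trials are drawn.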

  13. When music is salty: The crossmodal associations between sound and taste.

    PubMed

    Guetta, Rachel; Loui, Psyche

    2017-01-01

    Here we investigate associations between complex auditory and complex taste stimuli. A novel piece of music was composed and recorded in four different styles of musical articulation to reflect the four basic taste groups (sweet, sour, salty, bitter). In Experiment 1, participants performed above chance at pairing the music clips with corresponding taste words. Experiment 2 uses multidimensional scaling to interpret how participants categorize these musical stimuli, and to show that auditory categories can be organized in a similar manner as taste categories. Experiment 3 introduces four different flavors of custom-made chocolate ganache and shows that participants can match music clips with the corresponding taste stimuli with above-chance accuracy. Experiment 4 demonstrates the partial role of pleasantness in crossmodal mappings between sound and taste. The present findings confirm that individuals are able to make crossmodal associations between complex auditory and gustatory stimuli, and that valence may mediate multisensory integration in the general population.

  14. Enhanced tactile encoding and memory recognition in congenital blindness.

    PubMed

    D'Angiulli, Amedeo; Waraich, Paul

    2002-06-01

    Several behavioural studies have shown that early-blind persons possess superior tactile skills. Since neurophysiological data show that early-blind persons recruit visual as well as somatosensory cortex to carry out tactile processing (cross-modal plasticity), blind persons' sharper tactile skills may be related to cortical re-organisation resulting from loss of vision early in their life. To examine the nature of blind individuals' tactile superiority and its implications for cross-modal plasticity, we compared the tactile performance of congenitally totally blind, low-vision and sighted children on a raised-line picture identification test and re-test, assessing effects of task familiarity, exploratory strategy and memory recognition. What distinguished the blind from the other children was higher memory recognition and higher tactile encoding associated with efficient exploration. These results suggest that enhanced perceptual encoding and recognition memory may be two cognitive correlates of cross-modal plasticity in congenital blindness.

  15. Characteristic sounds facilitate visual search

    PubMed Central

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2009-01-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing “meow” did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds. PMID:18567253

  16. Cross-Modal Attention Effects in the Vestibular Cortex during Attentive Tracking of Moving Objects.

    PubMed

    Frank, Sebastian M; Sun, Liwei; Forster, Lisa; Tse, Peter U; Greenlee, Mark W

    2016-12-14

    The midposterior fundus of the Sylvian fissure in the human brain is central to the cortical processing of vestibular cues. At least two vestibular areas are located at this site: the parietoinsular vestibular cortex (PIVC) and the posterior insular cortex (PIC). It is now well established that activity in sensory systems is subject to cross-modal attention effects. Attending to a stimulus in one sensory modality enhances activity in the corresponding cortical sensory system, but simultaneously suppresses activity in other sensory systems. Here, we wanted to probe whether such cross-modal attention effects also target the vestibular system. To this end, we used a visual multiple-object tracking task. By parametrically varying the number of tracked targets, we could measure the effect of attentional load on the PIVC and the PIC while holding the perceptual load constant. Participants performed the tracking task during functional magnetic resonance imaging. Results show that, compared with passive viewing of object motion, activity during object tracking was suppressed in the PIVC and enhanced in the PIC. Greater attentional load, induced by increasing the number of tracked targets, was associated with a corresponding increase in the suppression of activity in the PIVC. Activity in the anterior part of the PIC decreased with increasing load, whereas load effects were absent in the posterior PIC. Results of a control experiment show that attention-induced suppression in the PIVC is stronger than any suppression evoked by the visual stimulus per se. Overall, our results suggest that attention has a cross-modal modulatory effect on the vestibular cortex during visual object tracking. In this study we investigate cross-modal attention effects in the human vestibular cortex. We applied the visual multiple-object tracking task because it is known to evoke attentional load effects on neural activity in visual motion-processing and attention-processing areas. 
Here we demonstrate a load-dependent effect of attention on the activation in the vestibular cortex, despite constant visual motion stimulation. We find that activity in the parietoinsular vestibular cortex is more strongly suppressed the greater the attentional load on the visual tracking task. These findings suggest cross-modal attentional modulation in the vestibular cortex.

  17. Cross-modal transfer of the conditioned eyeblink response during interstimulus interval discrimination training in young rats

    PubMed Central

    Brown, Kevin L.; Stanton, Mark E.

    2008-01-01

    Eyeblink classical conditioning (EBC) was observed across a broad developmental period with tasks utilizing two interstimulus intervals (ISIs). In ISI discrimination, two distinct conditioned stimuli (CSs; light and tone) are reinforced with a periocular shock unconditioned stimulus (US) at two different CS-US intervals. Temporal uncertainty is identical in design with the exception that the same CS is presented at both intervals. Developmental changes in conditioning have been reported in each task beyond ages when single-ISI learning is well developed. The present study sought to replicate and extend these previous findings by testing each task at four separate ages. Consistent with previous findings, younger rats (postnatal day [PD] 23 and 30) trained in ISI discrimination showed evidence of enhanced cross-modal influence of the short CS-US pairing upon long CS conditioning relative to older subjects. ISI discrimination training at PD43-47 yielded outcomes similar to those in adults (PD65-71). Cross-modal transfer effects in this task therefore appear to diminish between PD30 and PD43-47. Comparisons of ISI discrimination with temporal uncertainty indicated that cross-modal transfer in ISI discrimination at the youngest ages did not represent complete generalization across CSs. ISI discrimination undergoes a more protracted developmental emergence than single-cue EBC and may be a more sensitive indicator of developmental disorders involving cerebellar dysfunction. PMID:18726989

  18. Cross-modal Savings in the Contralateral Eyelid Conditioned Response

    PubMed Central

    Campolattaro, Matthew M.; Buss, Eric W.; Freeman, John H.

    2015-01-01

    The present experiment monitored bilateral eyelid responses during eyeblink conditioning in rats trained with a unilateral unconditioned stimulus (US). Three groups of rats were used to determine if cross-modal savings occurs when the location of the US is switched from one eye to the other. Rats in each group first received paired or unpaired eyeblink conditioning with a conditioned stimulus (tone or light; CS) and a unilateral periorbital electrical stimulation US. All rats were subsequently given paired training, but with the US location (Group 1), CS modality (Group 2), or US location and CS modality (Group 3) changed. Changing the location of the US alone resulted in an immediate transfer of responding in both eyelids (Group 1) in rats that received paired training prior to the transfer session. Rats in groups 2 and 3 that initially received paired training showed facilitated learning to the new CS modality during the transfer sessions, indicating that cross-modal savings occurs whether or not the location of the US is changed. All rats that were initially given unpaired training acquired conditioned eyeblink responses similar to de novo acquisition rate during the transfer sessions. Savings of CR incidence was more robust than savings of CR amplitude when the US switched sides, a finding that has implications for elucidating the neural mechanisms of cross-modal savings. PMID:26501170

  19. Cross-Modality Image Synthesis via Weakly Coupled and Geometry Co-Regularized Joint Dictionary Learning.

    PubMed

    Huang, Yawen; Shao, Ling; Frangi, Alejandro F

    2018-03-01

    Multi-modality medical imaging is increasingly used for comprehensive assessment of complex diseases in either diagnostic examinations or as part of medical research trials. Different imaging modalities provide complementary information about living tissues. However, multi-modal examinations are not always possible due to adverse factors, such as patient discomfort, increased cost, prolonged scanning time, and scanner unavailability. Additionally, in large imaging studies, incomplete records are not uncommon owing to image artifacts, data corruption or data loss, which compromise the potential of multi-modal acquisitions. In this paper, we propose a weakly coupled and geometry co-regularized joint dictionary learning method to address the problem of cross-modality synthesis while considering that collecting large amounts of training data is often impractical. Our learning stage requires only a few registered multi-modality image pairs as training data. To employ both paired images and a large set of unpaired data, a cross-modality image matching criterion is proposed. Then, we propose a unified model by integrating such a criterion into the joint dictionary learning and the observed common feature space for associating cross-modality data for the purpose of synthesis. Furthermore, two regularization terms are added to construct robust sparse representations. Our experimental results demonstrate superior performance of the proposed model over state-of-the-art methods.

  20. Cross-modal signatures in maternal speech and singing

    PubMed Central

    Trehub, Sandra E.; Plantinga, Judy; Brcic, Jelena; Nowicki, Magda

    2013-01-01

    We explored the possibility of a unique cross-modal signature in maternal speech and singing that enables adults and infants to link unfamiliar speaking or singing voices with subsequently viewed silent videos of the talkers or singers. In Experiment 1, adults listened to 30-s excerpts of speech followed by successively presented 7-s silent video clips, one from the previously heard speaker (different speech content) and the other from a different speaker. They successfully identified the previously heard speaker. In Experiment 2, adults heard comparable excerpts of singing followed by silent video clips from the previously heard singer (different song) and another singer. They failed to identify the previously heard singer. In Experiment 3, the videos of talkers and singers were blurred to obscure mouth movements. Adults successfully identified the talkers and they also identified the singers from videos of different portions of the song previously heard. In Experiment 4, 6- to 8-month-old infants listened to 30-s excerpts of the same maternal speech or singing followed by exposure to the silent videos on alternating trials. They looked longer at the silent videos of previously heard talkers and singers. The findings confirm the individuality of maternal speech and singing performance as well as adults' and infants' ability to discern the unique cross-modal signatures. The cues that enable cross-modal matching of talker and singer identity remain to be determined. PMID:24198805

  1. Early Cross-modal Plasticity in Adults.

    PubMed

    Lo Verde, Luca; Morrone, Maria Concetta; Lunghi, Claudia

    2017-03-01

    It is known that, after a prolonged period of visual deprivation, the adult visual cortex can be recruited for nonvisual processing, reflecting cross-modal plasticity. Here, we investigated whether cross-modal plasticity can occur at short timescales in the typical adult brain by comparing the interaction between vision and touch during binocular rivalry before and after a brief period of monocular deprivation, which strongly alters ocular balance favoring the deprived eye. While viewing dichoptically two gratings of orthogonal orientation, participants were asked to actively explore a haptic grating congruent in orientation to one of the two rivalrous stimuli. We repeated this procedure before and after 150 min of monocular deprivation. We first confirmed that haptic stimulation interacted with vision during rivalry promoting dominance of the congruent visuo-haptic stimulus and that monocular deprivation increased the deprived eye and decreased the nondeprived eye dominance. Interestingly, after deprivation, we found that the effect of touch did not change for the nondeprived eye, whereas it disappeared for the deprived eye, which was potentiated after deprivation. The absence of visuo-haptic interaction for the deprived eye lasted for over 1 hr and was not attributable to a masking induced by the stronger response of the deprived eye as confirmed by a control experiment. Taken together, our results demonstrate that the adult human visual cortex retains a high degree of cross-modal plasticity, which can occur even at very short timescales.

  2. Visual and auditory synchronization deficits among dyslexic readers as compared to non-impaired readers: a cross-correlation algorithm analysis

    PubMed Central

    Sela, Itamar

    2014-01-01

    Visual and auditory temporal processing and crossmodal integration are crucial factors in the word decoding process. The speed of processing (SOP) gap (Asynchrony) between these two modalities, which has been suggested as related to the dyslexia phenomenon, is the focus of the current study. Nineteen dyslexic and 17 non-impaired University adult readers were given stimuli in a reaction time (RT) procedure where participants were asked to identify whether the stimulus type was only visual, only auditory or crossmodally integrated. Accuracy, RT, and Event Related Potential (ERP) measures were obtained for each of the three conditions. An algorithm to measure the contribution of the temporal SOP of each modality to the crossmodal integration in each group of participants was developed. Results obtained using this model for the analysis of the current study data, indicated that in the crossmodal integration condition the presence of the auditory modality at the pre-response time frame (between 170 and 240 ms after stimulus presentation), increased processing speed in the visual modality among the non-impaired readers, but not in the dyslexic group. The differences between the temporal SOP of the modalities among the dyslexics and the non-impaired readers give additional support to the theory that an asynchrony between the visual and auditory modalities is a cause of dyslexia. PMID:24959125

  3. Cross-modal signatures in maternal speech and singing.

    PubMed

    Trehub, Sandra E; Plantinga, Judy; Brcic, Jelena; Nowicki, Magda

    2013-01-01

    We explored the possibility of a unique cross-modal signature in maternal speech and singing that enables adults and infants to link unfamiliar speaking or singing voices with subsequently viewed silent videos of the talkers or singers. In Experiment 1, adults listened to 30-s excerpts of speech followed by successively presented 7-s silent video clips, one from the previously heard speaker (different speech content) and the other from a different speaker. They successfully identified the previously heard speaker. In Experiment 2, adults heard comparable excerpts of singing followed by silent video clips from the previously heard singer (different song) and another singer. They failed to identify the previously heard singer. In Experiment 3, the videos of talkers and singers were blurred to obscure mouth movements. Adults successfully identified the talkers and they also identified the singers from videos of different portions of the song previously heard. In Experiment 4, 6- to 8-month-old infants listened to 30-s excerpts of the same maternal speech or singing followed by exposure to the silent videos on alternating trials. They looked longer at the silent videos of previously heard talkers and singers. The findings confirm the individuality of maternal speech and singing performance as well as adults' and infants' ability to discern the unique cross-modal signatures. The cues that enable cross-modal matching of talker and singer identity remain to be determined.

  4. Short-term memory stores organized by information domain.

    PubMed

    Noyce, Abigail L; Cestero, Nishmar; Shinn-Cunningham, Barbara G; Somers, David C

    2016-04-01

    Vision and audition have complementary affinities, with vision excelling in spatial resolution and audition excelling in temporal resolution. Here, we investigated the relationships among the visual and auditory modalities and spatial and temporal short-term memory (STM) using change detection tasks. We created short sequences of visual or auditory items, such that each item within a sequence arose at a unique spatial location at a unique time. On each trial, two successive sequences were presented; subjects attended to either space (the sequence of locations) or time (the sequence of inter-item intervals) and reported whether the patterns of locations or intervals were identical. Each subject completed blocks of unimodal trials (both sequences presented in the same modality) and crossmodal trials (Sequence 1 visual, Sequence 2 auditory, or vice versa) for both spatial and temporal tasks. We found a strong interaction between modality and task: Spatial performance was best on unimodal visual trials, whereas temporal performance was best on unimodal auditory trials. The order of modalities on crossmodal trials also mattered, suggesting that perceptual fidelity at encoding is critical to STM. Critically, no cost was attributable to crossmodal comparison: In both tasks, performance on crossmodal trials was as good as or better than on the weaker unimodal trials. STM representations of space and time can guide change detection in either the visual or the auditory modality, suggesting that the temporal or spatial organization of STM may supersede sensory-specific organization.

  5. Stochastic amplification and signaling in enzymatic futile cycles through noise-induced bistability with oscillations

    NASA Astrophysics Data System (ADS)

    Samoilov, Michael; Plyasunov, Sergey; Arkin, Adam P.

    2005-02-01

    Stochastic effects in biomolecular systems have now been recognized as a major physiologically and evolutionarily important factor in the development and function of many living organisms. Nevertheless, they are often thought of as providing only moderate refinements to the behaviors otherwise predicted by the classical deterministic system description. In this work we show by using both analytical and numerical investigation that at least in one ubiquitous class of (bio)chemical-reaction mechanisms, enzymatic futile cycles, the external noise may induce a bistable oscillatory (dynamic switching) behavior that is both quantitatively and qualitatively different from what is predicted or possible deterministically. We further demonstrate that the noise required to produce these distinct properties can itself be caused by a set of auxiliary chemical reactions, making it feasible for biological systems of sufficient complexity to generate such behavior internally. This new stochastic dynamics then serves to confer additional functional modalities on the enzymatic futile cycle mechanism that include stochastic amplification and signaling, the characteristics of which could be controlled by both the type and parameters of the driving noise. Hence, such noise-induced phenomena may, among other roles, potentially offer a novel type of control mechanism in pathways that contain these cycles and similar units. In particular, observations of endogenous or externally driven noise-induced dynamics in regulatory networks may thus provide additional insight into their topology, structure, and kinetics.
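
    The kind of dynamic switching described above can be sketched with a small stochastic simulation. The following Python toy model is an embedded jump chain, not the paper's full analysis (exponential waiting times are omitted and all parameters are invented): it runs a futile cycle S <-> S* with Michaelis-Menten propensities in the zero-order (ultrasensitive) regime while a telegraph-noise enzyme copy number toggles the direction of the net flux.

```python
import numpy as np

rng = np.random.default_rng(1)

N, K = 100, 5                 # conserved total S + S*, Michaelis constant
k_f, k_b, E_b = 1.0, 2.0, 1   # rate constants and fixed reverse-enzyme count

def simulate(steps=60000):
    s, e_f = N // 2, 1        # substrate count and forward-enzyme copies
    traj = np.empty(steps, dtype=int)
    for i in range(steps):
        if rng.random() < 1e-3:              # telegraph noise on the enzyme
            e_f = 3 if e_f == 1 else 1
        a_f = k_f * e_f * s / (K + s)            # S  -> S*  propensity
        a_b = k_b * E_b * (N - s) / (K + N - s)  # S* -> S   propensity
        if rng.random() * (a_f + a_b) < a_f:
            s = max(s - 1, 0)
        else:
            s = min(s + 1, N)
        traj[i] = s
    return traj

traj = simulate()
print(traj.min(), traj.max())
```

    Because the cycle operates in the ultrasensitive regime (K much smaller than N), the enzyme fluctuations push the substrate count back and forth between a low and a high quasi-stable state, i.e. noise-induced switching between two branches that a deterministic description with the mean enzyme level would not show.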

  6. Single-electron random-number generator (RNG) for highly secure ubiquitous computing applications

    NASA Astrophysics Data System (ADS)

    Uchida, Ken; Tanamoto, Tetsufumi; Fujita, Shinobu

    2007-11-01

    Since the security of all modern cryptographic techniques relies on unpredictable and irreproducible digital keys generated by random-number generators (RNGs), the realization of high-quality RNG is essential for secure communications. In this report, a new RNG, which utilizes single-electron phenomena, is proposed. A room-temperature operating silicon single-electron transistor (SET) having nearby an electron pocket is used as a high-quality, ultra-small RNG. In the proposed RNG, stochastic single-electron capture/emission processes to/from the electron pocket are detected with high sensitivity by the SET, and result in giant random telegraphic signals (GRTS) on the SET current. It is experimentally demonstrated that the single-electron RNG generates extremely high-quality random digital sequences at room temperature, in spite of its simple configuration. Because of its small-size and low-power properties, the single-electron RNG is promising as a key nanoelectronic device for future ubiquitous computing systems with highly secure mobile communication capabilities.

  7. Minding Impacting Events in a Model of Stochastic Variance

    PubMed Central

    Duarte Queirós, Sílvio M.; Curado, Evaldo M. F.; Nobre, Fernando D.

    2011-01-01

    We introduce a generalization of the well-known ARCH process, widely used for generating uncorrelated stochastic time series with long-term non-Gaussian distributions and long-lasting correlations in the (instantaneous) standard deviation exhibiting a clustering profile. Specifically, inspired by the fact that in a variety of systems impacting events are hardly forgotten, we split the process into two different regimes: a first one for regular periods, where the average volatility of the fluctuations within a certain period of time is below a certain threshold, and another one when the local standard deviation exceeds that threshold. In the former situation we use standard rules for heteroscedastic processes, whereas in the latter case the system starts recalling past values that surpassed the threshold. Our results show that for appropriate parameter values the model is able to provide fat-tailed probability density functions and strong persistence of the instantaneous variance characterized by large values of the Hurst exponent, which are ubiquitous features in complex systems. PMID:21483864
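
    One possible reading of the two-regime construction can be sketched in a few lines of Python. This is an illustration, not the paper's exact recursion: the coefficients a0, b1, the threshold theta, and the 50/50 mixing of recalled extremes are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

# Standard ARCH(1) recursion in "regular" periods, plus a memory of
# past large fluctuations that feeds back into the variance whenever
# the previous value crossed the threshold.
a0, b1, theta = 0.1, 0.5, 1.5   # ARCH coefficients and threshold (invented)
T = 50000
x = np.zeros(T)
extremes = []                    # past values that surpassed theta

for t in range(1, T):
    var = a0 + b1 * x[t - 1] ** 2
    if abs(x[t - 1]) > theta and extremes:
        # impacting events are "hardly forgotten": mix in the mean
        # square of the recalled extreme events
        var = 0.5 * var + 0.5 * np.mean(np.square(extremes))
    x[t] = np.sqrt(var) * rng.standard_normal()
    if abs(x[t]) > theta:
        extremes.append(x[t])

# Fat tails: sample excess kurtosis far above the Gaussian value of 0.
excess_kurtosis = np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0
print(round(excess_kurtosis, 2))
```

    Even this crude memory rule produces heavy-tailed marginals and clustered volatility, the two stylized features the abstract highlights.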

  8. Experiments and modelling of rate-dependent transition delay in a stochastic subcritical bifurcation

    NASA Astrophysics Data System (ADS)

    Bonciolini, Giacomo; Ebi, Dominik; Boujo, Edouard; Noiray, Nicolas

    2018-03-01

    Complex systems exhibiting critical transitions when one of their governing parameters varies are ubiquitous in nature and in engineering applications. Despite a vast literature focusing on this topic, there are few studies dealing with the effect of the rate of change of the bifurcation parameter on the tipping points. In this work, we consider a subcritical stochastic Hopf bifurcation under two scenarios: the bifurcation parameter is first changed in a quasi-steady manner and then with a finite ramping rate. In the latter case, a rate-dependent bifurcation delay is observed and exemplified experimentally using a thermoacoustic instability in a combustion chamber. This delay increases with the rate of change. This leads to a state transition of larger amplitude compared with the one that would be experienced by the system with a quasi-steady change of the parameter. We also bring experimental evidence of a dynamic hysteresis caused by the bifurcation delay when the parameter is ramped back. A surrogate model is derived in order to predict the statistics of these delays and to scrutinize the underlying stochastic dynamics. Our study highlights the dramatic influence of a finite rate of change of bifurcation parameters upon tipping points, and it pinpoints the crucial need of considering this effect when investigating critical transitions.
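
    The rate-dependent delay is easy to reproduce in a normal-form toy model. The Python sketch below is not the paper's surrogate model and all parameters are illustrative: it integrates a noisy subcritical-Hopf amplitude equation dr = (mu*r + r^3 - r^5) dt + sigma dW by Euler-Maruyama while ramping mu linearly through zero, and records the value of mu at which the system jumps to the large-amplitude branch.

```python
import numpy as np

rng = np.random.default_rng(3)

def escape_mu(rate, dt=0.01, sigma=1e-3, r_th=0.5):
    """Ramp mu from -0.5 (capped at +0.5) and return mu at first escape."""
    r, mu = 1e-3, -0.5
    while r < r_th:
        mu = min(mu + rate * dt, 0.5)
        drift = (mu * r + r ** 3 - r ** 5) * dt
        r += drift + sigma * np.sqrt(dt) * rng.standard_normal()
        r = max(r, 1e-6)       # keep the amplitude positive
    return mu

mu_slow = escape_mu(rate=0.0025)
mu_fast = escape_mu(rate=0.01)
print(mu_slow, mu_fast)
```

    The faster ramp crosses the deterministic bifurcation point at mu = 0 with less accumulated growth of the small perturbation, so the jump occurs at a larger mu: the bifurcation delay increases with the rate of change, as the experiments describe.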

  9. Discrete stochastic analogs of Erlang epidemic models.

    PubMed

    Getz, Wayne M; Dougherty, Eric R

    2018-12-01

    Erlang differential equation models of epidemic processes provide more realistic disease-class transition dynamics from susceptible (S) to exposed (E) to infectious (I) and removed (R) categories than the ubiquitous SEIR model. The latter is itself at one end of the spectrum of Erlang SE[Formula: see text]I[Formula: see text]R models with [Formula: see text] concatenated E compartments and [Formula: see text] concatenated I compartments. Discrete-time models, however, are computationally much simpler to simulate and fit to epidemic outbreak data than continuous-time differential equations, and are also much more readily extended to include demographic and other types of stochasticity. Here we formulate discrete-time deterministic analogs of the Erlang models, and their stochastic extension, based on a time-to-go distributional principle. Depending on which distributions are used (e.g. discretized Erlang, Gamma, Beta, or Uniform distributions), we demonstrate that our formulation represents both a discretization of Erlang epidemic models and generalizations thereof. We consider the challenges of fitting SE[Formula: see text]I[Formula: see text]R models and our discrete-time analog to data (the recent outbreak of Ebola in Liberia). We demonstrate that the latter performs much better than the former; confining fits to strict SEIR formulations reduces the numerical challenges but sacrifices best-fit likelihood scores by at least 7%.
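
    A discrete-time stochastic chain of this family can be sketched directly. The Python toy model below illustrates the SE^kI^mR idea with geometric stage exits, not the paper's time-to-go formulation, and all parameter values are invented: splitting E and I into sub-compartments with a fixed per-step exit probability makes stage durations sums of geometric variables, the discrete analog of Erlang dwell times.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(N=1000, k=3, m=3, beta=0.4, p_e=0.5, p_i=0.3, steps=300):
    S, R = N - 1, 0
    E, I = [0] * k, [0] * m
    I[0] = 1                                    # one initial infectious case
    for _ in range(steps):
        lam = 1.0 - np.exp(-beta * sum(I) / N)  # per-susceptible infection risk
        new_E = rng.binomial(S, lam)
        out_E = [rng.binomial(n, p_e) for n in E]   # geometric stage exits
        out_I = [rng.binomial(n, p_i) for n in I]
        S -= new_E
        for j in range(k):
            E[j] += (new_E if j == 0 else out_E[j - 1]) - out_E[j]
        for j in range(m):
            I[j] += (out_E[-1] if j == 0 else out_I[j - 1]) - out_I[j]
        R += out_I[-1]
    return S, E, I, R

S, E, I, R = simulate()
print(S, R)    # susceptibles left and total removed
```

    The binomial draws give demographic stochasticity for free, and the telescoping inflow/outflow updates conserve the total population exactly, which is a useful invariant to check in any implementation.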

  10. Experiments and modelling of rate-dependent transition delay in a stochastic subcritical bifurcation

    PubMed Central

    Noiray, Nicolas

    2018-01-01

    Complex systems exhibiting critical transitions when one of their governing parameters varies are ubiquitous in nature and in engineering applications. Despite a vast literature focusing on this topic, there are few studies dealing with the effect of the rate of change of the bifurcation parameter on the tipping points. In this work, we consider a subcritical stochastic Hopf bifurcation under two scenarios: the bifurcation parameter is first changed in a quasi-steady manner and then with a finite ramping rate. In the latter case, a rate-dependent bifurcation delay is observed and exemplified experimentally using a thermoacoustic instability in a combustion chamber. This delay increases with the rate of change. This leads to a state transition of larger amplitude compared with the one that would be experienced by the system with a quasi-steady change of the parameter. We also bring experimental evidence of a dynamic hysteresis caused by the bifurcation delay when the parameter is ramped back. A surrogate model is derived in order to predict the statistics of these delays and to scrutinize the underlying stochastic dynamics. Our study highlights the dramatic influence of a finite rate of change of bifurcation parameters upon tipping points, and it pinpoints the crucial need of considering this effect when investigating critical transitions. PMID:29657803

  11. Instability, rupture and fluctuations in thin liquid films: Theory and computations

    NASA Astrophysics Data System (ADS)

    Gvalani, Rishabh; Duran-Olivencia, Miguel; Kalliadasis, Serafim; Pavliotis, Grigorios

    2017-11-01

    Thin liquid films are ubiquitous in natural phenomena and technological applications. They are commonly studied via deterministic hydrodynamic equations, but thermal fluctuations often play a crucial role that still needs to be understood. An example of this is dewetting, which involves the rupture of a thin liquid film and the formation of droplets. Such a process is thermally activated and requires fluctuations to be taken into account self-consistently. Here we present an analytical and numerical study of a stochastic thin-film equation derived from first principles. We scrutinise the behaviour of the stochastic thin-film equation in the limit of perfectly correlated noise along the wall-normal direction. We also perform Monte Carlo simulations of the stochastic equation by adopting a numerical scheme based on a spectral collocation method. The numerical scheme allows us to explore the fluctuating dynamics of the thin film and the behaviour of the system's free energy close to rupture. Finally, we also study the effect of the noise intensity on the rupture time, finding good agreement with previous works.

  12. Learning Orthographic Structure With Sequential Generative Neural Networks.

    PubMed

    Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco

    2016-04-01

    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.
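    The non-connectionist n-gram baseline mentioned above can be sketched with a minimal letter-bigram generative model. The tiny corpus below is an illustrative stand-in for the English monosyllable training set, not the actual data used in the study.

```python
import random
from collections import defaultdict

def train_bigram(words):
    """Letter-bigram counts, with '^' and '$' as start/end markers."""
    counts = defaultdict(lambda: defaultdict(int))
    for w in words:
        seq = "^" + w + "$"
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, rng):
    """Sample a pseudoword letter by letter from the bigram model."""
    out, cur = [], "^"
    while True:
        successors = list(counts[cur])
        weights = [counts[cur][s] for s in successors]
        cur = rng.choices(successors, weights=weights)[0]
        if cur == "$":
            return "".join(out)
        out.append(cur)

corpus = ["cat", "can", "cap", "man", "map", "mat", "tan", "tap"]
model = train_bigram(corpus)
rng = random.Random(0)
pseudowords = [generate(model, rng) for _ in range(5)]
```

    Every generated string respects the graphotactics of the toy corpus (e.g., it must begin with a letter that starts some training word), which is the sense in which such models "predict the letter following a given context".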

  13. When music is salty: The crossmodal associations between sound and taste

    PubMed Central

    Guetta, Rachel; Loui, Psyche

    2017-01-01

    Here we investigate associations between complex auditory and complex taste stimuli. A novel piece of music was composed and recorded in four different styles of musical articulation to reflect the four basic taste groups (sweet, sour, salty, bitter). In Experiment 1, participants performed above chance at pairing the music clips with corresponding taste words. Experiment 2 uses multidimensional scaling to interpret how participants categorize these musical stimuli, and to show that auditory categories can be organized in a similar manner as taste categories. Experiment 3 introduces four different flavors of custom-made chocolate ganache and shows that participants can match music clips with the corresponding taste stimuli with above-chance accuracy. Experiment 4 demonstrates the partial role of pleasantness in crossmodal mappings between sound and taste. The present findings confirm that individuals are able to make crossmodal associations between complex auditory and gustatory stimuli, and that valence may mediate multisensory integration in the general population. PMID:28355227

  14. Cross-modal prediction changes the timing of conscious access during motion-induced blindness.

    PubMed

    Chang, Acer Y C; Kanai, Ryota; Seth, Anil K

    2015-01-01

    Despite accumulating evidence that perceptual predictions influence perceptual content, the relations between these predictions and conscious contents remain unclear, especially for cross-modal predictions. We examined whether predictions of visual events by auditory cues can facilitate conscious access to the visual stimuli. We trained participants to learn associations between auditory cues and colour changes. We then asked whether congruency between auditory cues and target colours would speed access to consciousness. We did this by rendering a visual target subjectively invisible using motion-induced blindness and then gradually changing its colour while presenting congruent or incongruent auditory cues. Results showed that the visual target gained access to consciousness faster in congruent than in incongruent trials; control experiments excluded potentially confounding effects of attention and motor response. The expectation effect was gradually established over blocks suggesting a role for extensive training. Overall, our findings show that predictions learned through cross-modal training can facilitate conscious access to visual stimuli. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Integrating Conceptual Knowledge Within and Across Representational Modalities

    PubMed Central

    McNorgan, Chris; Reid, Jackie; McRae, Ken

    2011-01-01

    Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, communication within- and between-modality is accomplished using either direct connectivity, or a central semantic hub. In deep models, modalities are connected via cascading integration sites with successively wider receptive fields. Four experiments provide the first direct behavioral tests of these models using speeded tasks involving feature inference and concept activation. Shallow models predict no within-modal versus cross-modal difference in either task, whereas deep models predict a within-modal advantage for feature inference, but a cross-modal advantage for concept activation. Experiments 1 and 2 used relatedness judgments to tap participants’ knowledge of relations for within- and cross-modal feature pairs. Experiments 3 and 4 used a dual feature verification task. The pattern of decision latencies across Experiments 1 to 4 is consistent with a deep integration hierarchy. PMID:21093853

  16. Different patterns of modality dominance across development.

    PubMed

    Barnhart, Wesley R; Rivera, Samuel; Robinson, Christopher W

    2018-01-01

    The present study sought to better understand how children, young adults, and older adults attend and respond to multisensory information. In Experiment 1, young adults were presented with two spoken words, two pictures, or two word-picture pairings and they had to determine if the two stimuli/pairings were exactly the same or different. Pairing the words and pictures together slowed down visual but not auditory response times and delayed the latency of first fixations, both of which are consistent with a proposed mechanism underlying auditory dominance. Experiment 2 examined the development of modality dominance in children, young adults, and older adults. Cross-modal presentation attenuated visual accuracy and slowed down visual response times in children, whereas older adults showed the opposite pattern, with cross-modal presentation attenuating auditory accuracy and slowing down auditory response times. Cross-modal presentation also delayed first fixations in children and young adults. Mechanisms underlying modality dominance and multisensory processing are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Cross-modal perception of rhythm in music and dance by cochlear implant users.

    PubMed

    Vongpaisal, Tara; Monaghan, Melanie

    2014-05-01

    Two studies examined adult cochlear implant (CI) users' ability to match auditory rhythms occurring in music to visual rhythms occurring in dance (Cha Cha, Slow Swing, Tango and Jive). In Experiment 1, adult CI users (n = 10) and hearing controls matched a music excerpt to choreographed dance sequences presented as silent videos. In Experiment 2, participants matched a silent video of a dance sequence to music excerpts. CI users were successful in detecting timing congruencies across music and dance at well above-chance levels, suggesting that they were able to process distinctive auditory and visual rhythm patterns that characterized each style. However, they were better able to detect cross-modal timing congruencies when the reference was an auditory rhythm than when the reference was a visual rhythm. Learning strategies that encourage cross-modal learning of musical rhythms may have applications in developing novel rehabilitative strategies to enhance music perception and appreciation outcomes of child implant users.

  18. Cross-modal learning to rank via latent joint representation.

    PubMed

    Wu, Fei; Jiang, Xinyang; Li, Xi; Tang, Siliang; Lu, Weiming; Zhang, Zhongfei; Zhuang, Yueting

    2015-05-01

    Cross-modal ranking is a research topic that is imperative to many applications involving multimodal data. Discovering a joint representation for multimodal data and learning a ranking function are essential in order to boost cross-media retrieval (i.e., image-query-text or text-query-image). In this paper, we propose an approach to discover the latent joint representation of pairs of multimodal data (e.g., pairs of an image query and a text document) via a conditional random field and structural learning in a listwise ranking manner. We call this approach cross-modal learning to rank via latent joint representation (CML²R). In CML²R, the correlations between multimodal data are captured in terms of their shared hidden variables (e.g., topics), and a hidden-topic-driven discriminative ranking function is learned in a listwise ranking manner. The experiments show that the proposed approach achieves good performance in cross-media retrieval and meanwhile has the capability to learn the discriminative representation of multimodal data.
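    The listwise ranking objective mentioned above can be sketched with a generic ListNet-style loss: cross-entropy between the top-one probability distributions induced by predicted scores and by relevance labels. This is a stand-in for the listwise idea only, not the CRF-based CML²R model itself.

```python
import numpy as np

def listwise_loss(scores, relevance):
    """ListNet-style listwise loss: compare the softmax ("top-one")
    distributions of predicted scores and relevance labels."""
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()
    p_true = softmax(relevance.astype(float))
    p_pred = softmax(scores)
    return float(-(p_true * np.log(p_pred)).sum())

# A correctly ordered result list incurs a lower loss than a reversed one.
relevance = np.array([2, 1, 0])          # ideal ranking of three documents
good = listwise_loss(np.array([3.0, 1.0, -1.0]), relevance)
bad = listwise_loss(np.array([-1.0, 1.0, 3.0]), relevance)
```

    Minimizing such a loss over many query lists drives the scoring function toward the labelled ordering, which is the sense in which a ranking function is "learned in a listwise ranking manner".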

  19. Learning piano melodies in visuo-motor or audio-motor training conditions and the neural correlates of their cross-modal transfer.

    PubMed

    Engel, Annerose; Bangert, Marc; Horbank, David; Hijmans, Brenda S; Wilkens, Katharina; Keller, Peter E; Keysers, Christian

    2012-11-01

    To investigate the cross-modal transfer of movement patterns necessary to perform melodies on the piano, 22 non-musicians learned to play short sequences on a piano keyboard by (1) merely listening and replaying (vision of own fingers occluded) or (2) merely observing silent finger movements and replaying (on a silent keyboard). After training, participants recognized with above chance accuracy (1) audio-motor learned sequences upon visual presentation (89±17%), and (2) visuo-motor learned sequences upon auditory presentation (77±22%). The recognition rates for visual presentation significantly exceeded those for auditory presentation (p<.05). fMRI revealed that observing finger movements corresponding to audio-motor trained melodies is associated with stronger activation in the left rolandic operculum than observing untrained sequences. This region was also involved in silent execution of sequences, suggesting that a link to motor representations may play a role in cross-modal transfer from audio-motor training condition to visual recognition. No significant differences in brain activity were found during listening to visuo-motor trained compared to untrained melodies. Cross-modal transfer was stronger from the audio-motor training condition to visual recognition and this is discussed in relation to the fact that non-musicians are familiar with how their finger movements look (motor-to-vision transformation), but not with how they sound on a piano (motor-to-sound transformation). Copyright © 2012 Elsevier Inc. All rights reserved.

  20. The neural basis of visual dominance in the context of audio-visual object processing.

    PubMed

    Schmid, Carmen; Büchel, Christian; Rose, Michael

    2011-03-01

    Visual dominance refers to the observation that in bimodal environments vision often has an advantage over other senses in humans. Therefore, a better memory performance for visual compared to, e.g., auditory material is assumed. However, the reason for this preferential processing and the relation to memory formation is largely unknown. In this fMRI experiment, we manipulated cross-modal competition and attention, two factors that both modulate bimodal stimulus processing and can affect memory formation. Pictures and sounds of objects were presented simultaneously in two levels of recognisability, thus manipulating the amount of cross-modal competition. Attention was manipulated via task instruction and directed either to the visual or the auditory modality. The factorial design allowed a direct comparison of the effects between both modalities. The resulting memory performance showed that visual dominance was limited to a distinct task setting. Visual was superior to auditory object memory only when allocating attention towards the competing modality. During encoding, cross-modal competition and attention towards the opponent domain reduced fMRI signals in both neural systems, but cross-modal competition was more pronounced in the auditory system and only in auditory cortex was this competition further modulated by attention. Furthermore, neural activity reduction in auditory cortex during encoding was closely related to the behavioural auditory memory impairment. These results indicate that visual dominance emerges from a less pronounced vulnerability of the visual system against competition from the auditory domain. Copyright © 2010 Elsevier Inc. All rights reserved.

  1. Object discrimination using optimized multi-frequency auditory cross-modal haptic feedback.

    PubMed

    Gibson, Alison; Artemiadis, Panagiotis

    2014-01-01

    As the field of brain-machine interfaces and neuro-prosthetics continues to grow, there is a high need for sensor and actuation mechanisms that can provide haptic feedback to the user. Current technologies employ expensive, invasive and often inefficient force feedback methods, resulting in an unrealistic solution for individuals who rely on these devices. This paper responds through the development, integration and analysis of a novel feedback architecture where haptic information during the neural control of a prosthetic hand is perceived through multi-frequency auditory signals. Through representing force magnitude with volume and force location with frequency, the feedback architecture can translate the haptic experiences of a robotic end effector into the alternative sensory modality of sound. Previous research with the proposed cross-modal feedback method confirmed its learnability, so the current work aimed to investigate which frequency map (i.e. frequency-specific locations on the hand) is optimal in helping users distinguish between hand-held objects and tasks associated with them. After short use with the cross-modal feedback during the electromyographic (EMG) control of a prosthetic hand, testing results show that users are able to use auditory feedback alone to discriminate between everyday objects. While users showed adaptation to three different frequency maps, the simplest map containing only two frequencies was found to be the most useful in discriminating between objects. This outcome provides support for the feasibility and practicality of the cross-modal feedback method during the neural control of prosthetics.
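    The magnitude-to-volume and location-to-frequency mapping described above can be sketched as a toy function. The location names, frequency values, and force scaling below are illustrative assumptions, not the maps evaluated in the paper.

```python
def haptic_to_audio(location, force, freq_map, max_force=10.0):
    """Map a contact location to a tone frequency (Hz) and a force
    magnitude to a normalized amplitude in [0, 1]."""
    amplitude = min(force, max_force) / max_force
    return freq_map[location], amplitude

# A hypothetical two-frequency map, in the spirit of the simplest map
# found most useful in the study.
freq_map = {"fingers": 880.0, "palm": 440.0}
tone = haptic_to_audio("palm", 5.0, freq_map)  # (440.0, 0.5)
```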

  2. Cross-modal face recognition using multi-matcher face scores

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

    The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face is a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies were implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal images) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with three cross-matched face scores from the aforementioned three algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression (BLR)) is trained and then tested with the score vectors using 10-fold cross-validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84%, FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces by using three face scores and the BLR classifier.
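    The score-level fusion step described above can be sketched as follows: three matcher scores per face pair form a score vector, a logistic-regression classifier (standing in for BLR) separates genuine from impostor pairs, and accuracy is estimated with 10-fold cross-validation. The score distributions below are synthetic, chosen only to illustrate the pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 3-dimensional score vectors: genuine cross-matches score higher
# on average than impostor pairs (toy stand-in for the three matchers).
n = 200
genuine = rng.normal(0.7, 0.15, size=(n, 3))
impostor = rng.normal(0.4, 0.15, size=(n, 3))
X = np.vstack([genuine, impostor])
y = np.r_[np.ones(n), np.zeros(n)]

def fit_logistic(X, y, lr=0.5, steps=500):
    """Plain gradient-descent logistic regression on score vectors."""
    Xb = np.c_[X, np.ones(len(X))]          # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    Xb = np.c_[X, np.ones(len(X))]
    return (Xb @ w > 0).astype(float)

# 10-fold cross-validation of the fused-score classifier.
idx = rng.permutation(len(X))
folds = np.array_split(idx, 10)
accs = []
for k in range(10):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(10) if j != k])
    w = fit_logistic(X[train], y[train])
    accs.append(np.mean(predict(w, X[test]) == y[test]))
mean_acc = float(np.mean(accs))
```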

  3. Investigating common coding of observed and executed actions in the monkey brain using cross-modal multi-variate fMRI classification.

    PubMed

    Fiave, Prosper Agbesi; Sharma, Saloni; Jastorff, Jan; Nelissen, Koen

    2018-05-19

    Mirror neurons are generally described as a neural substrate hosting shared representations of actions, by simulating or 'mirroring' the actions of others onto the observer's own motor system. Since single neuron recordings are rarely feasible in humans, it has been argued that cross-modal multi-variate pattern analysis (MVPA) of non-invasive fMRI data is a suitable technique to investigate common coding of observed and executed actions, allowing researchers to infer the presence of mirror neurons in the human brain. In an effort to close the gap between monkey electrophysiology and human fMRI data with respect to the mirror neuron system, here we tested this proposal for the first time in the monkey. Rhesus monkeys either performed reach-and-grasp or reach-and-touch motor acts with their right hand in the dark or observed videos of human actors performing similar motor acts. Unimodal decoding showed that both executed or observed motor acts could be decoded from numerous brain regions. Specific portions of rostral parietal, premotor and motor cortices, previously shown to house mirror neurons, in addition to somatosensory regions, yielded significant asymmetric action-specific cross-modal decoding. These results validate the use of cross-modal multi-variate fMRI analyses to probe the representations of own and others' actions in the primate brain and support the proposed mapping of others' actions onto the observer's own motor cortices. Copyright © 2018 Elsevier Inc. All rights reserved.
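    The cross-modal MVPA logic described above can be illustrated with a toy decoder: fit a classifier on response patterns from one modality (observation) and test it on patterns from the other (execution). The synthetic "voxel" patterns below, in which action identity shares structure across modalities, are an illustrative assumption, not the monkey fMRI data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_trials = 50, 40

# Each action has a shared code plus a modality-specific offset and noise:
# a toy stand-in for "mirror" voxels coding grasp vs touch in both modalities.
grasp_code = rng.normal(0, 1, n_vox)
touch_code = rng.normal(0, 1, n_vox)

def trials(code, modality_offset):
    return code + modality_offset + rng.normal(0, 1.0, (n_trials, n_vox))

obs_offset = rng.normal(0, 0.5, n_vox)
exe_offset = rng.normal(0, 0.5, n_vox)
X_train = np.vstack([trials(grasp_code, obs_offset),
                     trials(touch_code, obs_offset)])
X_test = np.vstack([trials(grasp_code, exe_offset),
                    trials(touch_code, exe_offset)])
y = np.r_[np.zeros(n_trials), np.ones(n_trials)]

# Nearest-centroid cross-modal decoding: train on observation, test on
# execution. Mean-centering each dataset removes the modality offset.
X_train = X_train - X_train.mean(0)
X_test = X_test - X_test.mean(0)
centroids = np.stack([X_train[y == c].mean(0) for c in (0, 1)])
pred = np.argmin(((X_test[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
cross_modal_acc = float(np.mean(pred == y))
```

    Above-chance cross-modal accuracy in such an analysis is what licenses the inference that a region carries a shared code for observed and executed actions.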

  4. Is Red Heavier Than Yellow Even for Blind?

    PubMed

    Barilari, Marco; de Heering, Adélaïde; Crollen, Virginie; Collignon, Olivier; Bottini, Roberto

    2018-01-01

    Across cultures and languages, people find similarities between the products of different senses in mysterious ways. By studying what is called cross-modal correspondences, cognitive psychologists discovered that lemons are fast rather than slow, boulders are sour, and red is heavier than yellow. Are these cross-modal correspondences established via sensory perception or can they be learned merely through language? We contribute to this debate by demonstrating that early blind people who lack the perceptual experience of color also think that red is heavier than yellow, albeit to a lesser extent than sighted people do.

  5. Observation and analysis of the Coulter effect through carbon nanotube and graphene nanopores.

    PubMed

    Agrawal, Kumar Varoon; Drahushuk, Lee W; Strano, Michael S

    2016-02-13

    Carbon nanotubes (CNTs) and graphene are the rolled and flat analogues of graphitic carbon, respectively, with hexagonal crystalline lattices, and show exceptional molecular transport properties. The empirical study of a single isolated nanopore requires, as evidence, the observation of stochastic, telegraphic noise from a blocking molecule commensurate in size with the pore. This standard is used ubiquitously in patch clamp studies of single, isolated biological ion channels and a wide range of inorganic, synthetic nanopores. In this work, we show that observation and study of stochastic fluctuations for carbon nanopores, both CNTs and graphene-based, enable precision characterization of pore properties that is otherwise unattainable. In the case of voltage clamp measurements of long (0.5-1 mm) CNTs between 0.9 and 2.2 nm in diameter, Coulter blocking of cationic species reveals the complex structuring of the fluid phase for confined water in this diameter range. In the case of graphene, we have pioneered the study and the analysis of stochastic fluctuations in gas transport from a pressurized, graphene-covered micro-well compartment that reveal switching between different values of the membrane permeance attributed to chemical rearrangements of individual graphene pores. This analysis remains the only way to study such single isolated graphene nanopores under these realistic transport conditions of pore rearrangements, in keeping with the thesis of this work. In summary, observation and analysis of Coulter blocking or stochastic fluctuations of permeating flux is an invaluable tool to understand graphene and graphitic nanopores including CNTs. © 2015 The Author(s).
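    The stochastic, telegraphic noise central to the analysis above can be sketched with a two-state Markov ("random telegraph") simulation, recovering the blocking and unblocking rates from the mean dwell times, much as a patch-clamp-style analysis would. The rates and time step below are hypothetical, not values from the CNT or graphene experiments.

```python
import numpy as np

rng = np.random.default_rng(42)

def telegraph(k_block, k_unblock, dt, n_steps):
    """Two-state (0 = open, 1 = blocked) telegraph process with per-step
    switching probabilities k * dt."""
    state = np.empty(n_steps, dtype=int)
    s = 0
    for i in range(n_steps):
        state[i] = s
        p = (k_block if s == 0 else k_unblock) * dt
        if rng.random() < p:
            s = 1 - s
    return state

dt = 1e-4  # seconds per sample (hypothetical)
sig = telegraph(k_block=50.0, k_unblock=200.0, dt=dt, n_steps=200_000)

# Recover the rates from the mean dwell time in each state.
changes = np.flatnonzero(np.diff(sig)) + 1
dwells = np.diff(np.r_[0, changes, len(sig)])
states = sig[np.r_[0, changes]]
k_block_hat = 1.0 / (dwells[states == 0].mean() * dt)
k_unblock_hat = 1.0 / (dwells[states == 1].mean() * dt)
```

    Fitting dwell-time statistics in this way is how stochastic current or permeance fluctuations are turned into quantitative pore properties.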

  6. Cross-modal representation of spoken and written word meaning in left pars triangularis.

    PubMed

    Liuzzi, Antonietta Gabriella; Bruffaerts, Rose; Peeters, Ronald; Adamczuk, Katarzyna; Keuleers, Emmanuel; De Deyne, Simon; Storms, Gerrit; Dupont, Patrick; Vandenberghe, Rik

    2017-04-15

    The correspondence in meaning extracted from written versus spoken input remains to be fully understood neurobiologically. Here, in a total of 38 subjects, the functional anatomy of cross-modal semantic similarity for concrete words was determined based on a dual criterion: First, a voxelwise univariate analysis had to show significant activation during a semantic task (property verification) performed with written and spoken concrete words compared to the perceptually matched control condition. Second, in an independent dataset, in these clusters, the similarity in fMRI response pattern to two distinct entities, one presented as a written and the other as a spoken word, had to correlate with the similarity in meaning between these entities. The left ventral occipitotemporal transition zone and ventromedial temporal cortex, retrosplenial cortex, pars orbitalis bilaterally, and the left pars triangularis were all activated in the univariate contrast. Only the left pars triangularis showed a cross-modal semantic similarity effect. There was no effect of phonological nor orthographic similarity in this region. The cross-modal semantic similarity effect was confirmed by a secondary analysis in the cytoarchitectonically defined BA45. A semantic similarity effect was also present in the ventral occipital regions but only within the visual modality, and in the anterior superior temporal cortex only within the auditory modality. This study provides direct evidence for the coding of word meaning in BA45 and positions its contribution to semantic processing at the confluence of input-modality specific pathways that code for meaning within the respective input modalities. Copyright © 2017 Elsevier Inc. All rights reserved.
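    The second criterion above (pattern similarity tracking semantic similarity across modalities) can be illustrated with a toy similarity-correlation analysis on synthetic data: for each pair of concepts, the correlation between a "written" and a "spoken" response pattern is compared with the pair's semantic similarity. The generative model below is an illustrative assumption, not the study's fMRI data.

```python
import numpy as np

rng = np.random.default_rng(7)
n_pairs, n_vox = 60, 30
semantic_sim = rng.uniform(0, 1, n_pairs)

pattern_sim = np.empty(n_pairs)
for i, s in enumerate(semantic_sim):
    base = rng.normal(0, 1, n_vox)
    # The spoken-word pattern shares variance with the written-word
    # pattern in proportion to the pair's semantic similarity.
    written = base + rng.normal(0, 1, n_vox)
    spoken = s * base + rng.normal(0, 1, n_vox)
    pattern_sim[i] = np.corrcoef(written, spoken)[0, 1]

# A positive correlation here is the cross-modal semantic similarity effect.
r = float(np.corrcoef(semantic_sim, pattern_sim)[0, 1])
```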

  7. Music-to-Color Associations of Single-Line Piano Melodies in Non-synesthetes.

    PubMed

    Palmer, Stephen E; Langlois, Thomas A; Schloss, Karen B

    2016-01-01

    Prior research has shown that non-synesthetes' color associations to classical orchestral music are strongly mediated by emotion. The present study examines similar cross-modal music-to-color associations for much better controlled musical stimuli: 64 single-line piano melodies that were generated from four basic melodies by Mozart, whose global musical parameters were manipulated in tempo (slow/fast), note-density (sparse/dense), mode (major/minor) and pitch-height (low/high). Participants first chose the three colors (from 37) that they judged to be most consistent with (and, later, the three that were most inconsistent with) the music they were hearing. They later rated each melody and each color for the strength of its association along four emotional dimensions: happy/sad, agitated/calm, angry/not-angry and strong/weak. The cross-modal choices showed that faster music in the major mode was associated with lighter, more saturated, yellower (warmer) colors than slower music in the minor mode. These results replicate and extend those of Palmer et al. (2013, Proc. Natl Acad. Sci. 110, 8836-8841) with more precisely controlled musical stimuli. Further results replicated strong evidence for emotional mediation of these cross-modal associations, in that the emotional ratings of the melodies were very highly correlated with the emotional associations of the colors chosen as going best/worst with the melodies (r = 0.92, 0.85, 0.82 and 0.70 for happy/sad, strong/weak, angry/not-angry and agitated/calm, respectively). The results are discussed in terms of common emotional associations forming a cross-modal bridge between highly disparate sensory inputs.

  8. A pain in the bud? Implications of cross-modal sensitivity for pain experience.

    PubMed

    Perkins, Monica; de Bruyne, Marien; Giummarra, Melita J

    2016-11-01

    There is growing evidence that enhanced sensitivity to painful clinical procedures and chronic pain are related to greater sensitivity to other sensory inputs, such as bitter taste. We examined cross-modal sensitivities in two studies. Study 1 assessed associations between bitter taste sensitivity, pain tolerance, and fear of pain in 48 healthy young adults. Participants were classified as non-tasters, tasters and super-tasters using a bitter taste test (6-n-propylthiouracil; PROP). The latter group had significantly higher fear of pain (Fear of Pain Questionnaire) than tasters (p=.036, effect size r = .48). There was only a trend for an association between bitter taste intensity ratings and intensity of pain at the point of pain tolerance in a cold pressor test (p=.04). In Study 2, 40 healthy young adults completed the Adolescent/Adult Sensory Profile before rating intensity and unpleasantness of innocuous (33 °C), moderate (41 °C), and high intensity (44 °C) thermal pain stimulations. The sensory-sensitivity subscale was positively correlated with both intensity and unpleasantness ratings. Canonical correlation showed that only sensitivity to audition and touch (not taste/smell) were associated with intensity of moderate and high (not innocuous) thermal stimuli. Together these findings suggest that there are cross-modal associations predominantly between sensitivity to exteroceptive inputs (i.e., taste, touch, sound) and the affective dimensions of pain, including noxious heat and intolerable cold pain, in healthy adults. These cross-modal sensitivities may arise due to greater psychological aversion to salient sensations, or from shared neural circuitry for processing disparate sensory modalities.

  9. A perception theory in mind-body medicine: guided imagery and mindful meditation as cross-modal adaptation.

    PubMed

    Bedford, Felice L

    2012-02-01

    A new theory of mind-body interaction in healing is proposed based on considerations from the field of perception. It is suggested that the combined effect of visual imagery and mindful meditation on physical healing is simply another example of cross-modal adaptation in perception, much like adaptation to prism-displaced vision. It is argued that psychological interventions produce a conflict between the perceptual modalities of the immune system and vision (or touch), which leads to change in the immune system in order to realign the modalities. It is argued that mind-body interactions do not exist because of higher-order cognitive thoughts or beliefs influencing the body, but instead result from ordinary interactions between lower-level perceptual modalities that function to detect when sensory systems have made an error. The theory helps explain why certain illnesses may be more amenable to mind-body interaction, such as autoimmune conditions in which a sensory system (the immune system) has made an error. It also renders sensible erroneous changes, such as those brought about by "faith healers," as conflicts between modalities that are resolved in favor of the wrong modality. The present view provides one of very few psychological theories of how guided imagery and mindfulness meditation bring about positive physical change. Also discussed are issues of self versus non-self, pain, cancer, body schema, attention, consciousness, and, importantly, developing the concept that the immune system is a rightful perceptual modality. Recognizing mind-body healing as perceptual cross-modal adaptation implies that a century of cross-modal perception research is applicable to the immune system.

  10. Aging and the interaction of sensory cortical function and structure.

    PubMed

    Peiffer, Ann M; Hugenschmidt, Christina E; Maldjian, Joseph A; Casanova, Ramon; Srikanth, Ryali; Hayasaka, Satoru; Burdette, Jonathan H; Kraft, Robert A; Laurienti, Paul J

    2009-01-01

    Even the healthiest older adults experience changes in cognitive and sensory function. Studies show that older adults have reduced neural responses to sensory information. However, it is well known that sensory systems do not act in isolation but function cooperatively to either enhance or suppress neural responses to individual environmental stimuli. Very little research has been dedicated to understanding how aging affects the interactions between sensory systems, especially cross-modal deactivations or the ability of one sensory system (e.g., audition) to suppress the neural responses in another sensory system cortex (e.g., vision). Such cross-modal interactions have been implicated in attentional shifts between sensory modalities and could account for increased distractibility in older adults. To assess age-related changes in cross-modal deactivations, functional MRI studies were performed in 61 adults between 18 and 80 years old during simple auditory and visual discrimination tasks. Results within visual cortex confirmed previous findings of decreased responses to visual stimuli for older adults. Age-related changes in the visual cortical response to auditory stimuli were, however, much more complex and suggested an alteration with age in the functional interactions between the senses. Ventral visual cortical regions exhibited cross-modal deactivations in younger but not older adults, whereas more dorsal aspects of visual cortex were suppressed in older but not younger adults. These differences in deactivation also remained after adjusting for age-related reductions in brain volume of sensory cortex. Thus, functional differences in cortical activity between older and younger adults cannot solely be accounted for by differences in gray matter volume. (c) 2007 Wiley-Liss, Inc.

  11. Cross-modal perceptual load: the impact of modality and individual differences.

    PubMed

    Sandhu, Rajwant; Dyson, Benjamin James

    2016-05-01

    Visual distractor processing tends to be more pronounced when the perceptual load (PL) of a task is low compared to when it is high [perceptual load theory (PLT); Lavie in J Exp Psychol Hum Percept Perform 21(3):451-468, 1995]. While PLT is well established in the visual domain, application to cross-modal processing has produced mixed results, and the current study was designed in an attempt to improve previous methodologies. First, we assessed PLT using response competition, a typical metric from the uni-modal domain. Second, we looked at the impact of auditory load on visual distractors, and of visual load on auditory distractors, within the same individual. Third, we compared individual uni- and cross-modal selective attention abilities, by correlating performance with the visual Attentional Network Test (ANT). Fourth, we obtained a measure of the relative processing efficiency between vision and audition, to investigate whether processing ease influences the extent of distractor processing. Although distractor processing was evident during both attend auditory and attend visual conditions, we found that PL did not modulate processing of either visual or auditory distractors. We also found support for a correlation between the uni-modal (visual) ANT and our cross-modal task but only when the distractors were visual. Finally, although auditory processing was more impacted by visual distractors, our measure of processing efficiency only accounted for this asymmetry in the auditory high-load condition. The results are discussed with respect to the continued debate regarding the shared or separate nature of processing resources across modalities.

  12. On the role of crossmodal prediction in audiovisual emotion perception.

    PubMed

    Jessen, Sarah; Kotz, Sonja A

    2013-01-01

    Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency by which others' emotions are recognized. But how and when exactly do the different modalities interact? One aspect of multisensory perception that has received increasing interest in recent years is the concept of cross-modal prediction. In emotion perception, as in most other settings, visual information precedes the auditory information. Because it leads, visual information can facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, so far it has not been addressed in audiovisual emotion perception. Based on the current state of the art in (a) cross-modal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow more reliable prediction of auditory information than non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 EEG response and the duration of visual emotional, but not non-emotional, information. If the assumption that emotional content allows more reliable prediction can be corroborated in future studies, cross-modal prediction is a crucial factor in our understanding of multisensory emotion perception.

  13. How does visual language affect crossmodal plasticity and cochlear implant success?

    PubMed Central

    Lyness, C.R.; Woll, B.; Campbell, R.; Cardin, V.

    2013-01-01

    Cochlear implants (CI) are the most successful intervention for ameliorating hearing loss in severely or profoundly deaf children. Despite this, educational performance in children with CI continues to lag behind that of their hearing peers. From animal models and human neuroimaging studies it has been proposed that the integrative functions of auditory cortex are compromised by crossmodal plasticity. This has been argued to result partly from the use of a visual language. Here we argue that ‘cochlear implant sensitive periods’ comprise both auditory and language sensitive periods, and thus cannot be fully described with animal models. Despite prevailing assumptions, there is no evidence linking the use of a visual language to poorer CI outcome. Crossmodal reorganisation of auditory cortex occurs regardless of the compensatory strategies, such as sign language, used by the deaf person. In contrast, language deprivation during early sensitive periods has been repeatedly linked to poor language outcomes. Language sensitive periods have largely been ignored when considering variation in CI outcome, leading to ill-founded recommendations concerning visual language in CI habilitation. PMID:23999083

  14. Adaptation to faces and voices: unimodal, cross-modal, and sex-specific effects.

    PubMed

    Little, Anthony C; Feinberg, David R; Debruine, Lisa M; Jones, Benedict C

    2013-11-01

    Exposure, or adaptation, to faces or voices biases perceptions of subsequent stimuli, for example, causing faces to appear more normal than they would be otherwise if they are similar to the previously presented stimuli. Studies also suggest that there may be cross-modal adaptation between sound and vision, although the evidence is inconsistent. We examined adaptation effects within and across voices and faces and also tested whether adaptation crosses between male and female stimuli. We exposed participants to sex-typical or sex-atypical stimuli and measured the perceived normality of subsequent stimuli. Exposure to female faces or voices altered perceptions of subsequent female stimuli, and these adaptation effects crossed modality; exposure to voices influenced judgments of faces, and vice versa. We also found that exposure to female stimuli did not influence perception of subsequent male stimuli. Our data demonstrate that recent experience of faces and voices changes subsequent perception and that mental representations of faces and voices may not be modality dependent. Both unimodal and cross-modal adaptation effects appear to be relatively sex-specific.

  15. Integrating conceptual knowledge within and across representational modalities.

    PubMed

    McNorgan, Chris; Reid, Jackie; McRae, Ken

    2011-02-01

    Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, within- and between-modality communication is accomplished using either direct connectivity or a central semantic hub. In deep models, modalities are connected via cascading integration sites with successively wider receptive fields. Four experiments provide the first direct behavioral tests of these models using speeded tasks involving feature inference and concept activation. Shallow models predict no within-modal versus cross-modal difference in either task, whereas deep models predict a within-modal advantage for feature inference but a cross-modal advantage for concept activation. Experiments 1 and 2 used relatedness judgments to tap participants' knowledge of relations for within- and cross-modal feature pairs. Experiments 3 and 4 used a dual-feature verification task. The pattern of decision latencies across Experiments 1-4 is consistent with a deep integration hierarchy. Copyright © 2010 Elsevier B.V. All rights reserved.

  16. Tracking the evolution of crossmodal plasticity and visual functions before and after sight restoration

    PubMed Central

    Dormal, Giulia; Lepore, Franco; Harissi-Dagher, Mona; Albouy, Geneviève; Bertone, Armando; Rossion, Bruno

    2014-01-01

    Visual deprivation leads to massive reorganization in both the structure and function of the occipital cortex, raising crucial challenges for sight restoration. We tracked the behavioral, structural, and neurofunctional changes occurring in an early and severely visually impaired patient before and 1.5 and 7 mo after sight restoration with magnetic resonance imaging. Robust presurgical auditory responses were found in occipital cortex despite residual preoperative vision. In primary visual cortex, crossmodal auditory responses overlapped with visual responses and remained elevated even 7 mo after surgery. However, these crossmodal responses decreased in extrastriate occipital regions after surgery, together with improved behavioral vision and with increases in both gray matter density and neural activation in low-level visual regions. Selective responses in high-level visual regions involved in motion and face processing were observable even before surgery and did not evolve after surgery. Taken together, these findings demonstrate that structural and functional reorganization of occipital regions are present in an individual with a long-standing history of severe visual impairment and that such reorganizations can be partially reversed by visual restoration in adulthood. PMID:25520432

  17. Temporal ventriloquism: crossmodal interaction on the time dimension. 1. Evidence from auditory-visual temporal order judgment.

    PubMed

    Bertelson, Paul; Aschersleben, Gisa

    2003-10-01

    In the well-known visual bias of auditory location (alias the ventriloquist effect), auditory and visual events presented in separate locations appear closer together, provided the presentations are synchronized. Here, we consider the possibility of the converse phenomenon: crossmodal attraction on the time dimension conditional on spatial proximity. Participants judged the order of occurrence of sound bursts and light flashes, respectively, separated in time by varying stimulus onset asynchronies (SOAs) and delivered either in the same or in different locations. Presentation was organized using randomly mixed psychophysical staircases, by which the SOA was reduced progressively until a point of uncertainty was reached. This point was reached at longer SOAs with the sounds in the same frontal location as the flashes than in different places, showing that apparent temporal separation is effectively longer in the first condition. Together with a similar result obtained recently in a case of tactile-visual discrepancy, this finding supports a view in which timing and spatial layout of the inputs play to some extent interchangeable roles in the pairing operation at the base of crossmodal interaction.
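The staircase procedure described in this abstract can be sketched as a simple adaptive rule: reduce the SOA after each correct order judgment, increase it after an error, and take the mean SOA at staircase reversals as the point of uncertainty. The parameter values and the toy observer below are illustrative assumptions, not taken from the paper.

```python
import random

def run_staircase(judge, start_soa=240.0, step=20.0, min_soa=10.0,
                  reversals_needed=6):
    """Adaptive staircase: shrink the SOA after each correct
    temporal-order judgment, grow it after an error, and estimate the
    point of uncertainty as the mean SOA at staircase reversals."""
    soa, last_correct, reversals = start_soa, None, []
    while len(reversals) < reversals_needed:
        correct = judge(soa)
        if last_correct is not None and correct != last_correct:
            reversals.append(soa)          # response flipped: a reversal
        soa = max(min_soa, soa - step) if correct else soa + step
        last_correct = correct
    return sum(reversals) / len(reversals)

def toy_observer(soa, threshold=80.0, rng=random.Random(1)):
    """Hypothetical observer whose order judgments degrade toward chance
    as the SOA shrinks below a threshold (in ms)."""
    p_correct = min(0.99, 0.5 + 0.5 * min(soa / threshold, 1.0))
    return rng.random() < p_correct

estimate = run_staircase(toy_observer)  # SOA near the point of uncertainty
```

In the study, staircases for the same-location and different-location conditions were randomly interleaved, so condition differences show up as different mean reversal SOAs.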

  18. The effects of perceptual priming on 4-year-olds' haptic-to-visual cross-modal transfer.

    PubMed

    Kalagher, Hilary

    2013-01-01

    Four-year-old children often have difficulty visually recognizing objects that were previously experienced only haptically. This experiment attempts to improve their performance in these haptic-to-visual transfer tasks. Sixty-two 4-year-old children participated in priming trials in which they explored eight unfamiliar objects visually, haptically, or visually and haptically together. Subsequently, all children participated in the same haptic-to-visual cross-modal transfer task. In this task, children haptically explored the objects that were presented in the priming phase and then visually identified a match from among three test objects, each matching the object on only one dimension (shape, texture, or color). Children in all priming conditions predominantly made shape-based matches; however, the most shape-based matches were made in the Visual and Haptic condition. All kinds of priming provided the necessary memory traces upon which subsequent haptic exploration could build a strong enough representation to enable subsequent visual recognition. Haptic exploration patterns during the cross-modal transfer task are discussed and the detailed analyses provide a unique contribution to our understanding of the development of haptic exploratory procedures.

  19. Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness.

    PubMed

    Ding, Hao; Qin, Wen; Liang, Meng; Ming, Dong; Wan, Baikun; Li, Qiang; Yu, Chunshui

    2015-09-01

    Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and that activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness < 2 years) and 40 age- and gender-matched hearing controls underwent functional magnetic resonance imaging during a visuo-spatial delayed recognition task that consisted of encoding, maintenance and recognition stages. The early deaf subjects exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects and that these areas are involved in visuo-spatial working memory. Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and were negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may inhibit cross-modal reorganization in early deaf subjects. Granger causality analysis revealed that, compared to the hearing controls, the deaf subjects had an enhanced net causal flow from the frontal eye field to the superior temporal gyrus. These findings indicate that a top-down mechanism may better account for the cross-modal activation of auditory regions in early deaf subjects. See MacSweeney and Cardin (doi:10.1093/awv197) for a scientific commentary on this article. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. The origin of human complex diversity: Stochastic epistatic modules and the intrinsic compatibility between distributional robustness and phenotypic changeability.

    PubMed

    Ijichi, Shinji; Ijichi, Naomi; Ijichi, Yukina; Imamura, Chikako; Sameshima, Hisami; Kawaike, Yoichi; Morioka, Hirofumi

    2018-01-01

    The continuing prevalence of a highly heritable and hypo-reproductive extreme tail of a human neurobehavioral quantitative diversity suggests the possibility that the reproductive majority retains the genetic mechanism for the extremes. From the perspective of stochastic epistasis, the effect of an epistatic modifier variant can randomly vary in both phenotypic value and effect direction among carriers, depending on the genetic individuality, and modifier carriers are ubiquitous in the population distribution. The neutrality of the mean genetic effect among carriers warrants the survival of the variant under selection pressures. Functionally or metabolically related modifier variants make up an epistatic network module, and dozens of modules may be involved in the phenotype. To assess the significance of stochastic epistasis, a simplified module-based model was employed. The individual repertoire of the modifier variants in a module also participates in the genetic individuality, which determines the genetic contribution of each modifier in the carrier. Because the entire contribution of a module to the phenotypic outcome is consequently unpredictable in the model, the module effect represents the total contribution of the related modifiers as a stochastic unit in the simulations. As a result, the intrinsic compatibility between distributional robustness and quantitative changeability could be simulated mathematically using the model. The artificial normal distribution shape in large-sized simulations was preserved in each generation even if the lowest-fitness tail was un-reproductive. The robustness of normality across generations is analogous to the real situations of human complex diversity, including neurodevelopmental conditions. The repeated regeneration of the un-reproductive extreme tail may be inevitable for the reproductive majority's competence to survive and change, suggesting implications of the extremes for others. Further model simulations to illustrate how the fitness of extreme individuals can be low through generations may be warranted to increase the credibility of this stochastic epistasis model.
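The module-based scheme can be caricatured in a few lines: draw each individual's phenotype as a sum of stochastic module effects, remove the lowest tail as un-reproductive, and check that the surviving distribution stays near-normal. The population size, module count and culling fraction below are illustrative assumptions, not values from the paper.

```python
import random
import statistics

def surviving_phenotypes(pop_size=10000, n_modules=30, cull_fraction=0.05,
                         rng=None):
    """One generation under a simplified stochastic-epistasis scheme:
    each individual's phenotype is the sum of module effects whose value
    and direction vary randomly with genetic background (modelled here
    as independent mean-zero Gaussian draws). The lowest tail is treated
    as un-reproductive and removed. Purely illustrative."""
    rng = rng or random.Random()
    phenotypes = sorted(sum(rng.gauss(0.0, 1.0) for _ in range(n_modules))
                        for _ in range(pop_size))
    return phenotypes[int(pop_size * cull_fraction):]

survivors = surviving_phenotypes(rng=random.Random(42))
# Culling the extreme tail barely shifts the bulk: the survivors'
# distribution remains near-normal, mirroring the robustness claimed above.
m, sd = statistics.mean(survivors), statistics.stdev(survivors)
```

Repeating the draw each generation shows the "regenerated tail" behaviour: the next generation's sums again fill out a full normal distribution, tail included.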

  1. Stochastic Protein Multimerization, Cooperativity and Fitness

    NASA Astrophysics Data System (ADS)

    Hagner, Kyle; Setayeshgar, Sima; Lynch, Michael

    Many proteins assemble into multimeric structures that can vary greatly among phylogenetic lineages. As protein-protein interactions (PPI) require productive encounters among subunits, these structural variations are related in part to variation in cellular protein abundance. The protein abundance in turn depends on the intrinsic rates of production and decay of mRNA and protein molecules, as well as rates of cell growth and division. We present a stochastic model for prediction of the multimeric state of a protein as a function of these processes and the free energy associated with binding interfaces. We demonstrate favorable agreement between the model and a wide class of proteins using E. coli proteome data. As such, this platform, which links protein abundance, PPI and quaternary structure in growing and dividing cells, can be extended to evolutionary models for the emergence and diversification of multimeric proteins. We investigate cooperativity - a ubiquitous functional property of multimeric proteins - as a possible selective force driving multimerization, demonstrating a reduction in the cost of protein production relative to the overall proteome energy budget that can be tied to fitness.
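The abundance/interface-energy link at the heart of such a model can be illustrated with a deterministic mass-action sketch for the simplest case, a homodimer: given the total subunit concentration and the binding free energy of the interface, the fraction of subunits in dimers follows from a quadratic. All units and values below are illustrative assumptions, and the stochastic production/decay machinery of the actual model is omitted.

```python
import math

def dimer_fraction(total_conc, delta_g_kt, c0=1.0):
    """Equilibrium fraction of subunits in dimers for 2M <-> M2, given
    the total subunit concentration and the interface binding free
    energy (in kT, relative to reference concentration c0). A weaker
    interface (larger delta_g_kt) means a larger dissociation constant."""
    kd = c0 * math.exp(delta_g_kt)
    # Mass balance: total = m + 2*m^2/kd  =>  2*m^2/kd + m - total = 0
    m = (-1.0 + math.sqrt(1.0 + 8.0 * total_conc / kd)) * kd / 4.0
    return 1.0 - m / total_conc    # fraction of subunits dimerised

# Higher abundance (at fixed interface energy) pushes subunits into dimers:
low = dimer_fraction(total_conc=0.01, delta_g_kt=-2.0)
high = dimer_fraction(total_conc=10.0, delta_g_kt=-2.0)
```

This monotone dependence on abundance is the deterministic backbone that the stochastic treatment (fluctuating copy numbers in growing, dividing cells) layers noise onto.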

  2. The role of the airline transportation network in the prediction and predictability of global epidemics.

    PubMed

    Colizza, Vittoria; Barrat, Alain; Barthélemy, Marc; Vespignani, Alessandro

    2006-02-14

    The systematic study of large-scale networks has unveiled the ubiquitous presence of connectivity patterns characterized by large-scale heterogeneities and unbounded statistical fluctuations. These features affect dramatically the behavior of the diffusion processes occurring on networks, determining the ensuing statistical properties of their evolution pattern and dynamics. In this article, we present a stochastic computational framework for the forecast of global epidemics that considers the complete worldwide air travel infrastructure complemented with census population data. We address two basic issues in global epidemic modeling: (i) we study the role of the large scale properties of the airline transportation network in determining the global diffusion pattern of emerging diseases; and (ii) we evaluate the reliability of forecasts and outbreak scenarios with respect to the intrinsic stochasticity of disease transmission and traffic flows. To address these issues we define a set of quantitative measures able to characterize the level of heterogeneity and predictability of the epidemic pattern. These measures may be used for the analysis of containment policies and epidemic risk assessment.
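A toy version of such a stochastic metapopulation framework (Reed-Frost style binomial infections within each city, plus random movement of infecteds along network links) can be sketched as follows; every rate and size here is an illustrative assumption, not a calibrated value from the study.

```python
import random

def stochastic_sir_metapop(adj, seed_city, beta=0.3, gamma=0.1, travel=0.01,
                           pop=2000, steps=150, rng=None):
    """Toy stochastic metapopulation SIR: binomial chain-of-infection
    within each city, geometric recovery, and random movement of
    infecteds along network links. Illustrative parameters only."""
    rng = rng or random.Random()
    n = len(adj)
    S, I, R = [pop] * n, [0] * n, [0] * n
    I[seed_city], S[seed_city] = 10, pop - 10
    for _ in range(steps):
        new_I, new_R = [0] * n, [0] * n
        for c in range(n):
            if S[c] and I[c]:
                p_inf = 1.0 - (1.0 - beta / pop) ** I[c]
                new_I[c] = sum(rng.random() < p_inf for _ in range(S[c]))
            new_R[c] = sum(rng.random() < gamma for _ in range(I[c]))
        for c in range(n):
            S[c] -= new_I[c]
            I[c] += new_I[c] - new_R[c]
            R[c] += new_R[c]
        for c in range(n):                     # travel seeds neighbours
            for nb in adj[c]:
                movers = min(sum(rng.random() < travel
                                 for _ in range(I[c])), I[c])
                I[c] -= movers
                I[nb] += movers
    return R  # cumulative outbreak size per city

# Star network: hub city 0 connected to three spoke cities.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
sizes = stochastic_sir_metapop(adj, seed_city=0, rng=random.Random(7))
```

Re-running with different seeds gives a distribution of invasion times and outbreak sizes per city, which is the raw material for the paper's predictability measures.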

  3. Intermittency in small-scale turbulence: a velocity gradient approach

    NASA Astrophysics Data System (ADS)

    Meneveau, Charles; Johnson, Perry

    2017-11-01

    Intermittency of small-scale motions is a ubiquitous facet of turbulent flows, and predicting this phenomenon based on reduced models derived from first principles remains an important open problem. Here, a multiple-time-scale stochastic model is introduced for the Lagrangian evolution of the full velocity gradient tensor in fluid turbulence at arbitrarily high Reynolds numbers. This low-dimensional model differs fundamentally from prior shell models and other empirically motivated models of intermittency because the nonlinear gradient self-stretching and rotation (A²) term vital to the energy cascade and intermittency development is represented exactly from the Navier-Stokes equations. With only one adjustable parameter needed to determine the model's effective Reynolds number, numerical solutions of the resulting set of stochastic differential equations show that the model predicts anomalous scaling for moments of the velocity gradient components and negative derivative skewness. It also predicts signature topological features of the velocity gradient tensor, such as vorticity alignment trends with the eigen-directions of the strain-rate tensor. This research was made possible by a graduate Fellowship from the National Science Foundation and by a Grant from The Gulf of Mexico Research Initiative.
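The link between multiplicative stochastic dynamics and intermittent, heavy-tailed statistics can be illustrated with a scalar caricature integrated by the Euler-Maruyama method. This toy SDE is not the velocity-gradient model of the abstract, just a minimal demonstration that multiplicative noise inflates kurtosis relative to additive noise; all parameters are illustrative.

```python
import random

def kurtosis_of_sde(sigma_mult, sigma_add=0.5, a=1.0, dt=0.01,
                    n_steps=200000, rng=None):
    """Euler-Maruyama integration of the toy SDE
        dX = -a X dt + sigma_mult * X dW1 + sigma_add dW2.
    With sigma_mult > 0 the stationary density develops power-law tails,
    a minimal stand-in for intermittency. Returns the sample kurtosis
    (the Gaussian value is 3)."""
    rng = rng or random.Random()
    sdt = dt ** 0.5
    x, xs = 0.0, []
    for _ in range(n_steps):
        x += (-a * x * dt
              + sigma_mult * x * sdt * rng.gauss(0.0, 1.0)
              + sigma_add * sdt * rng.gauss(0.0, 1.0))
        xs.append(x)
    m = sum(xs) / len(xs)
    m2 = sum((v - m) ** 2 for v in xs) / len(xs)
    m4 = sum((v - m) ** 4 for v in xs) / len(xs)
    return m4 / (m2 * m2)

kurt_mult = kurtosis_of_sde(sigma_mult=0.7, rng=random.Random(5))
kurt_add = kurtosis_of_sde(sigma_mult=0.0, rng=random.Random(5))
```

The additive-only case stays near-Gaussian (kurtosis about 3), while the multiplicative case is markedly heavier-tailed; the full tensor model achieves an analogous effect through the exactly represented self-stretching term.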

  4. Post-transcriptional regulation tends to attenuate the mRNA noise and to increase the mRNA gain

    NASA Astrophysics Data System (ADS)

    Shi, Changhong; Wang, Shuqiang; Zhou, Tianshou; Jiang, Yiguo

    2015-10-01

    Post-transcriptional regulation is ubiquitous in prokaryotic and eukaryotic cells, but how it impacts gene expression remains to be fully explored. Here, we analyze a simple gene model in which we assume that mRNAs are produced in a constitutive manner but are regulated post-transcriptionally by a decapping enzyme that switches between an active state and an inactive state. We derive the analytical mRNA distribution governed by a chemical master equation, which can be used to analyze how post-transcriptional regulation influences the mRNA expression level, including the mRNA noise. We demonstrate that the mean mRNA level in the stochastic case is always higher than that in the deterministic case due to the stochastic effect of the enzyme, but the magnitude of the increase depends mainly on the switching rates between the two enzyme states. More interestingly, we find that, in contrast to transcriptional regulation, post-transcriptional regulation tends to attenuate noise in mRNA. Our results provide insight into the role of post-transcriptional regulation in controlling the transcriptional noise.
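The two-state gene model described here lends itself to a Gillespie (stochastic simulation algorithm) sketch: constitutive mRNA synthesis, plus decay whose rate depends on whether the decapping enzyme is currently active. The rate constants below are illustrative assumptions, not values from the paper.

```python
import random

def gillespie_mrna(k_syn=10.0, k_on=0.5, k_off=0.5, d_active=1.0,
                   d_inactive=0.1, t_end=500.0, rng=None):
    """Gillespie simulation of the constitutive-synthesis gene model:
    mRNA is made at rate k_syn and degraded at a rate set by whether the
    decapping enzyme is active (fast decay) or inactive (slow decay)."""
    rng = rng or random.Random()
    t, m, active = 0.0, 0, True
    samples = []
    while t < t_end:
        decay = (d_active if active else d_inactive) * m
        switch = k_off if active else k_on
        total = k_syn + decay + switch
        t += rng.expovariate(total)          # time to next reaction
        r = rng.random() * total             # pick which reaction fired
        if r < k_syn:
            m += 1                           # synthesis
        elif r < k_syn + decay:
            m -= 1                           # decapping-mediated decay
        else:
            active = not active              # enzyme state switch
        samples.append(m)
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    return mean, var / mean                  # event-sampled mean and Fano factor

mean_m, fano = gillespie_mrna(rng=random.Random(3))
```

Comparing such simulated distributions against the analytical master-equation solution is the standard consistency check for this kind of model; a time-weighted average would sharpen the estimates but is omitted here for brevity.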

  5. Fall field crickets did not acclimate to simulated seasonal changes in temperature.

    PubMed

    Niehaus, Amanda C; Wilson, Robbie S; Storm, Jonathan J; Angilletta, Michael J

    2012-02-01

    In nature, many organisms alter their developmental trajectory in response to environmental variation. However, studies of thermal acclimation have historically involved stable, unrealistic thermal treatments. In our study, we incorporated ecologically relevant treatments to examine the effects of environmental stochasticity on the thermal acclimation of the fall field cricket (Gryllus pennsylvanicus). We raised crickets for 5 weeks at either a constant temperature (25°C) or at one of three thermal regimes mimicking a seasonal decline in temperature (from 25 to 12°C). The latter three treatments differed in their level of thermal stochasticity: crickets experienced either no diel cycle, a predictable diel cycle, or an unpredictable diel cycle. Following these treatments, we measured several traits considered relevant to survival or reproduction, including growth rate, jumping velocity, feeding rate, metabolic rate, and cold tolerance. Contrary to our predictions, the acclimatory responses of crickets were unrelated to the magnitude or type of thermal variation. Furthermore, acclimation of performance was not ubiquitous among traits. We recommend additional studies of acclimation in fluctuating environments to assess the generality of these findings.

  6. A stochastic vision-based model inspired by zebrafish collective behaviour in heterogeneous environments

    PubMed Central

    Collignon, Bertrand; Séguret, Axel; Halloy, José

    2016-01-01

    Collective motion is one of the most ubiquitous behaviours displayed by social organisms and has led to the development of numerous models. Recent advances in the understanding of sensory systems and information processing by animals impel one to revise the classical assumptions made in decisional algorithms. In this context, we present a model describing the three-dimensional visual sensory system of fish that adjust their trajectory according to their perception field. Furthermore, we introduce a stochastic process based on a probability distribution function to move in targeted directions, rather than on a summation of influential vectors as is classically assumed by most models. In parallel, we present experimental results of zebrafish (alone or in groups of 10) swimming in both homogeneous and heterogeneous environments. We use these experimental data to set the parameter values of our model and show that this perception-based approach can simulate the collective motion of species showing cohesive behaviour in heterogeneous environments. Finally, we discuss the advances of this multilayer model and its possible outcomes in biological, physical and robotic sciences. PMID:26909173
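The key modelling choice, drawing the next heading from a probability distribution over directions rather than summing influence vectors, can be sketched as follows; the particular weighting scheme (a uniform exploration floor plus a von-Mises-like attraction bump per perceived neighbour) is an illustrative assumption, not the paper's fitted form.

```python
import math
import random

def sample_heading(neighbor_bearings, kappa=2.0, rng=None):
    """Draw the fish's next heading from a probability distribution over
    360 candidate directions, built from attraction bumps centred on the
    bearings of perceived neighbours."""
    rng = rng or random.Random()
    candidates = [2.0 * math.pi * k / 360.0 for k in range(360)]
    def weight(theta):
        w = 1.0                                         # exploration floor
        for b in neighbor_bearings:
            w += math.exp(kappa * math.cos(theta - b))  # attraction bump
        return w
    weights = [weight(t) for t in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]

rng = random.Random(0)
# Two neighbours slightly left and right of straight ahead (bearing 0):
headings = [sample_heading([0.3, -0.3], rng=rng) for _ in range(500)]
```

Sampled headings cluster around the forward direction while keeping finite probability of exploring elsewhere, which is what distinguishes this stochastic rule from a deterministic vector sum.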

  7. Mixed analytical-stochastic simulation method for the recovery of a Brownian gradient source from probability fluxes to small windows.

    PubMed

    Dobramysl, U; Holcman, D

    2018-02-15

    Is it possible to recover the position of a source from the steady-state fluxes of Brownian particles to small absorbing windows located on the boundary of a domain? To address this question, we develop a numerical procedure to avoid tracking Brownian trajectories in the entire infinite space. Instead, we generate particles near the absorbing windows, computed from the analytical expression of the exit probability. When the Brownian particles are generated by a steady-state gradient at a single point, we compute asymptotically the fluxes to small absorbing holes distributed on the boundary of half-space and on a disk in two dimensions, which agree with stochastic simulations. We also derive an expression for the splitting probability between small windows using the matched asymptotic method. Finally, when there are more than two small absorbing windows, we show how to reconstruct the position of the source from the diffusion fluxes. The present approach provides a computational first principle for the mechanism of sensing a gradient of diffusing particles, a ubiquitous problem in cell biology.
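The forward problem can be probed with a brute-force Monte-Carlo sketch: release Brownian particles at a candidate source and count arrivals at two small boundary windows. For simplicity this sketch makes the whole boundary absorbing (so the counts approximate the harmonic measure of each window) rather than reproducing the paper's efficient window-local generation scheme; window size and step length are illustrative.

```python
import math
import random

def window_hit_counts(source, n_particles=2000, step=0.05,
                      half_width=0.15, rng=None):
    """Release Brownian particles at `source` inside the unit disk, walk
    each until it crosses the boundary, and count arrivals at two small
    windows (arcs centred at angles 0 and pi). Particles hitting the
    rest of the boundary are absorbed and discarded."""
    rng = rng or random.Random()
    counts = [0, 0]
    for _ in range(n_particles):
        x, y = source
        while x * x + y * y < 1.0:
            x += rng.gauss(0.0, step)
            y += rng.gauss(0.0, step)
        theta = math.atan2(y, x)
        if abs(theta) < half_width:
            counts[0] += 1
        elif abs(abs(theta) - math.pi) < half_width:
            counts[1] += 1
    return counts

# A source nearer the window at angle 0 sends it a visibly larger flux;
# that asymmetry is the signal used to reconstruct the source position.
counts = window_hit_counts(source=(0.5, 0.0), rng=random.Random(11))
```

Inverting the measured flux ratio back to a source position is then a one-dimensional root-finding problem in this symmetric toy geometry.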

  8. Performance of normal adults and children on central auditory diagnostic tests and their corresponding visual analogs.

    PubMed

    Bellis, Teri James; Ross, Jody

    2011-09-01

    It has been suggested that, in order to validate a diagnosis of (C)APD (central auditory processing disorder), testing using direct cross-modal analogs should be performed to demonstrate that deficits exist solely or primarily in the auditory modality (McFarland and Cacace, 1995; Cacace and McFarland, 2005). This modality-specific viewpoint is controversial and not universally accepted (American Speech-Language-Hearing Association [ASHA], 2005; Musiek et al, 2005). Further, no such analogs have been developed to date, and neither the feasibility of such testing in normally functioning individuals nor the concurrent validity of cross-modal analogs has been established. The purpose of this study was to investigate the feasibility of cross-modal testing by examining the performance of normal adults and children on four tests of central auditory function and their corresponding visual analogs. In addition, this study investigated the degree to which concurrent validity of auditory and visual versions of these tests could be demonstrated. An experimental repeated measures design was employed. Participants consisted of two groups (adults, n=10; children, n=10) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of auditory/otologic, language, learning, neurologic, or related disorders. Visual analogs of four tests in common clinical use for the diagnosis of (C)APD were developed (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; Duration Patterns [Pinheiro and Musiek, 1985]; and the Random Gap Detection Test [RGDT; Keith, 2000]). Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. 
ANOVAs (analyses of variance) were used to examine effects of group, modality, and laterality (for the Dichotic/Dichoptic Digits tests) or response condition (for the auditory and visual Frequency Patterns and Duration Patterns tests). Pearson product-moment correlations were used to investigate relationships between auditory and visual performance. Adults performed significantly better than children on the Dichotic/Dichoptic Digits tests. Results also revealed a significant effect of modality, with auditory better than visual, and a significant modality×laterality interaction, with a right-ear advantage seen for the auditory task and a left-visual-field advantage seen for the visual task. For the Frequency Patterns test and its visual analog, results revealed a significant modality×response condition interaction, with humming better than labeling for the auditory version but the reversed effect for the visual version. For Duration Patterns testing, visual performance was significantly poorer than auditory performance. Due to poor test-retest reliability and ceiling effects for the auditory and visual gap-detection tasks, analyses could not be performed. No cross-modal correlations were observed for any test. Results demonstrated that cross-modal testing is at least feasible using easily accessible computer hardware and software. The lack of any cross-modal correlations suggests independent processing mechanisms for auditory and visual versions of each task. Examination of performance in individuals with central auditory and pan-sensory disorders is needed to determine the utility of cross-modal analogs in the differential diagnosis of (C)APD. American Academy of Audiology.
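The cross-modal comparison at the end rests on the Pearson product-moment correlation between auditory and visual scores; a minimal self-contained version, with hypothetical per-participant scores, looks like this.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores on an auditory test and its visual analog;
# r near 0 would indicate no cross-modal relationship.
auditory = [92, 85, 78, 95, 88, 70, 83, 90]
visual = [71, 64, 80, 66, 75, 72, 69, 77]
r = pearson_r(auditory, visual)
```

The study's conclusion of independent processing mechanisms follows from such correlations failing to reach significance for every test pair.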

  9. Attentional reorienting triggers spatial asymmetries in a search task with cross-modal spatial cueing

    PubMed Central

    Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias

    2018-01-01

    Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention: facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., a unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries, with search times increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. 
The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637

  10. Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias.

    PubMed

    Invitto, Sara; Calcagnì, Antonio; Mignozzi, Arianna; Scardino, Rosanna; Piraino, Giulia; Turchi, Daniele; De Feudis, Irio; Brunetti, Antonio; Bevilacqua, Vitoantonio; de Tommaso, Marina

    2017-01-01

    Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition and how potential modulation of this processing induced by music could be influenced by the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 Event-Related Potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings of the study were that musicians' behavioral responses and N170 components were more strongly affected by the emotional value of the music administered in the emotional go/no-go task, and that this bias was also apparent in responses to the non-target emotional faces. This suggests that emotional information, coming from multiple sensory channels, activates a crossmodal integration process that depends upon the stimuli's emotional salience and the listener's appraisal.

  11. Perceptuo-motor compatibility governs multisensory integration in bimanual coordination dynamics.

    PubMed

    Zelic, Gregory; Mottet, Denis; Lagarde, Julien

    2016-02-01

    The brain has the remarkable ability to bind together inputs from different sensory origin into a coherent percept. Behavioral benefits can result from such ability, e.g., a person typically responds faster and more accurately to cross-modal stimuli than to unimodal stimuli. To date, it is, however, largely unknown whether such multisensory benefits, shown for discrete reactive behaviors, generalize to the continuous coordination of movements. The present study addressed multisensory integration from the perspective of bimanual coordination dynamics, where the perceptual activity no longer triggers a single response but continuously guides the motor action. The task consisted of coordinating the continuous flexion-extension of the index fingers anti-symmetrically while synchronizing with an external pacer. Three different metronome configurations were tested, for which we examined whether cross-modal pacing (audio-tactile beats) improved the stability of the coordination in comparison with unimodal pacing conditions (auditory or tactile beats). We found a more stable bimanual coordination for cross-modal pacing, but only when the metronome configuration directly matched the anti-symmetric coordination pattern. We conclude that multisensory integration can benefit the continuous coordination of movements; however, this is constrained by whether the perceptual and motor activities match in space and time.

  12. Crossmodal attention switching: auditory dominance in temporal discrimination tasks.

    PubMed

    Lukas, Sarah; Philipp, Andrea M; Koch, Iring

    2014-11-01

    Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are processed with differing effectiveness depending on the task dimension: processing of visual stimuli is favored in the spatial dimension, whereas processing of auditory stimuli is favored in the temporal dimension. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Alterations to multisensory and unisensory integration by stimulus competition

    PubMed Central

    Pluta, Scott R.; Rowland, Benjamin A.; Stanford, Terrence R.; Stein, Barry E.

    2011-01-01

    In environments containing sensory events at competing locations, selecting a target for orienting requires prioritization of stimulus values. Although the superior colliculus (SC) is causally linked to the stimulus selection process, the manner in which SC multisensory integration operates in a competitive stimulus environment is unknown. Here we examined how the activity of visual-auditory SC neurons is affected by placement of a competing target in the opposite hemifield, a stimulus configuration that would, in principle, promote interhemispheric competition for access to downstream motor circuitry. Competitive interactions between the targets were evident in how they altered unisensory and multisensory responses of individual neurons. Responses elicited by a cross-modal stimulus (multisensory responses) proved to be substantially more resistant to competitor-induced depression than were unisensory responses (evoked by the component modality-specific stimuli). Similarly, when a cross-modal stimulus served as the competitor, it exerted considerably more depression than did its individual component stimuli, in some cases producing more depression than predicted by their linear sum. These findings suggest that multisensory integration can help resolve competition among multiple targets by enhancing orientation to the location of cross-modal events while simultaneously suppressing orientation to events at alternate locations. PMID:21957224

  15. An Event-Related Potential Study of Cross-modal Morphological and Phonological Priming

    PubMed Central

    Justus, Timothy; Yang, Jennifer; Larsen, Jary; de Mornay Davies, Paul; Swick, Diane

    2009-01-01

    The current work investigated whether differences in phonological overlap between the past- and present-tense forms of regular and irregular verbs can account for the graded neurophysiological effects of verb regularity observed in past-tense priming designs. Event-related potentials were recorded from sixteen healthy participants who performed a lexical-decision task in which past-tense primes immediately preceded present-tense targets. To minimize intra-modal phonological priming effects, cross-modal presentation between auditory primes and visual targets was employed, and results were compared to a companion intra-modal auditory study (Justus, Larsen, de Mornay Davies, & Swick, 2008). For both regular and irregular verbs, faster response times and reduced N400 components were observed for present-tense forms when primed by the corresponding past-tense forms. Although behavioral facilitation was observed with a pseudopast phonological control condition, neither this condition nor an orthographic-phonological control produced significant N400 priming effects. Instead, these two types of priming were associated with a post-lexical anterior negativity (PLAN). Results are discussed with regard to dual- and single-system theories of inflectional morphology, as well as intra- and cross-modal prelexical priming. PMID:20160930

  16. Semantic-based crossmodal processing during visual suppression.

    PubMed

    Cox, Dustin; Hong, Sang Wook

    2015-01-01

    To reveal the mechanisms underpinning the influence of auditory input on visual awareness, we examine (1) whether purely semantic-based multisensory integration facilitates the access to visual awareness for familiar visual events, and (2) whether crossmodal semantic priming is the mechanism responsible for the semantic auditory influence on visual awareness. Using continuous flash suppression, we rendered dynamic and familiar visual events (e.g., a video clip of an approaching train) inaccessible to visual awareness. We manipulated the semantic auditory context of the videos by concurrently pairing them with a semantically matching soundtrack (congruent audiovisual condition), a semantically non-matching soundtrack (incongruent audiovisual condition), or with no soundtrack (neutral video-only condition). We found that participants identified the suppressed visual events significantly faster (an earlier breakup of suppression) in the congruent audiovisual condition compared to the incongruent audiovisual condition and video-only condition. However, this facilitatory influence of semantic auditory input was only observed when audiovisual stimulation co-occurred. Our results suggest that the enhanced visual processing with a semantically congruent auditory input occurs due to audiovisual crossmodal processing rather than semantic priming, which may occur even when visual information is not available to visual awareness.

  17. Audiovisual Modulation in Mouse Primary Visual Cortex Depends on Cross-Modal Stimulus Configuration and Congruency.

    PubMed

    Meijer, Guido T; Montijn, Jorrit S; Pennartz, Cyriel M A; Lansink, Carien S

    2017-09-06

    The sensory neocortex is a highly connected associative network that integrates information from multiple senses, even at the level of the primary sensory areas. Although a growing body of empirical evidence supports this view, the neural mechanisms of cross-modal integration in primary sensory areas, such as the primary visual cortex (V1), are still largely unknown. Using two-photon calcium imaging in awake mice, we show that the encoding of audiovisual stimuli in V1 neuronal populations is highly dependent on the features of the stimulus constituents. When the visual and auditory stimulus features were modulated at the same rate (i.e., temporally congruent), neurons responded with either an enhancement or suppression compared with unisensory visual stimuli, and their prevalence was balanced. Temporally incongruent tones or white-noise bursts included in audiovisual stimulus pairs resulted in predominant response suppression across the neuronal population. Visual contrast did not influence multisensory processing when the audiovisual stimulus pairs were congruent; however, when white-noise bursts were used, neurons generally showed response suppression when the visual stimulus contrast was high whereas this effect was absent when the visual contrast was low. Furthermore, a small fraction of V1 neurons, predominantly those located near the lateral border of V1, responded to sound alone. These results show that V1 is involved in the encoding of cross-modal interactions in a more versatile way than previously thought. SIGNIFICANCE STATEMENT The neural substrate of cross-modal integration is not limited to specialized cortical association areas but extends to primary sensory areas. Using two-photon imaging of large groups of neurons, we show that multisensory modulation of V1 populations is strongly determined by the individual and shared features of cross-modal stimulus constituents, such as contrast, frequency, congruency, and temporal structure. 
Congruent audiovisual stimulation resulted in a balanced pattern of response enhancement and suppression compared with unisensory visual stimuli, whereas incongruent or dissimilar stimuli at full contrast gave rise to a population dominated by response-suppressing neurons. Our results indicate that V1 dynamically integrates nonvisual sources of information while still attributing most of its resources to coding visual information. Copyright © 2017 the authors.

  18. Plasmids as stochastic model systems

    NASA Astrophysics Data System (ADS)

    Paulsson, Johan

    2003-05-01

    Plasmids are self-replicating gene clusters present in on average 2-100 copies per bacterial cell. To reduce random fluctuations and thereby avoid extinction, they ubiquitously autoregulate their own synthesis using negative feedback loops. Here I use van Kampen's Ω-expansion for a two-dimensional model of negative feedback including plasmids and their replication inhibitors. This analytically summarizes the standard perspective on replication control -- including the effects of sensitivity amplification, exponential time-delays and noisy signaling. I further review the two most common molecular sensitivity mechanisms: multistep control and cooperativity. Finally, I discuss more controversial sensitivity schemes, such as noise-enhanced sensitivity, the exploitation of small-number combinatorics and double-layered feedback loops to suppress noise in disordered environments.
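
    The central claim, that negative feedback suppresses copy-number fluctuations, can be illustrated with a toy stochastic simulation. This is a sketch under stated assumptions, not the paper's two-dimensional plasmid-inhibitor model: a one-dimensional birth-death process with a hypothetical Hill-type replication inhibition, simulated with the Gillespie algorithm. The function name and all parameter values are illustrative.

    ```python
    import random

    def simulate_plasmid(k=2.0, d=1.0, K=50.0, h=4.0, events=200_000, seed=1):
        """Gillespie simulation of a toy plasmid copy-number model.

        Replication: n -> n + 1 at rate k * n / (1 + (n/K)**h)  (Hill-type inhibition)
        Dilution:    n -> n - 1 at rate d * n
        Returns the time-averaged mean and Fano factor (variance/mean) of n.
        """
        rng = random.Random(seed)
        n = int(K)
        s1 = s2 = total = 0.0
        for step in range(events):
            rep = k * n / (1.0 + (n / K) ** h)
            dil = d * n
            rate = rep + dil
            if rate == 0.0:          # copy number hit zero: plasmid lost
                break
            dt = rng.expovariate(rate)
            if step > events // 10:  # discard burn-in before accumulating statistics
                s1 += n * dt
                s2 += n * n * dt
                total += dt
            n += 1 if rng.random() < rep / rate else -1
        mean = s1 / total
        var = s2 / total - mean * mean
        return mean, var / mean
    ```

    With strong inhibition (h = 4) the stationary Fano factor drops well below the Poisson value of 1, the signature of fluctuation suppression by feedback; weakening the feedback sensitivity (smaller h) moves it back toward 1.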

  19. Transport behaviors of locally fractional coupled Brownian motors with fluctuating interactions

    NASA Astrophysics Data System (ADS)

    Wang, Huiqi; Ni, Feixiang; Lin, Lifeng; Lv, Wangyong; Zhu, Hongqiang

    2018-09-01

    In some complex viscoelastic media, it is ubiquitous for coupled systems to randomly absorb and desorb surrounding Brownian particles. The conventional approach is to model a variable-mass system driven by both multiplicative and additive noises. In this paper, an improved mathematical model is created based on generalized Langevin equations (GLE) to characterize the random interaction with a locally fluctuating number of coupled particles in elastically coupled fractional Brownian motors (FBM). Through numerical simulations, the effect of fluctuating interactions on collective transport behaviors is investigated, and phenomena such as cooperative behaviors, stochastic resonance (SR) and anomalous transport are observed in the sub-diffusion regime.
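
    A full GLE with memory kernel and fractional noise is beyond a short sketch, but the qualitative setup of coupled motors with randomly fluctuating interactions can be mocked up with an Euler-Maruyama integrator: two overdamped particles in a tilted periodic potential, joined by a spring whose stiffness jumps between two values (telegraph noise). This is a simplified stand-in, not the paper's model: memoryless white noise replaces the fractional noise, and every name and parameter value below is invented for illustration.

    ```python
    import math
    import random

    def coupled_motors(steps=20_000, dt=1e-3, F=1.0, U0=0.5, k_pair=(0.5, 2.0),
                       switch_rate=5.0, temp=0.5, seed=2):
        """Euler-Maruyama for two coupled overdamped particles:

            dx_i = [F - U0*sin(x_i) - k*(x_i - x_j)] dt + sqrt(2*temp*dt) * xi_i

        where the coupling stiffness k jumps at random between two values
        (dichotomous/telegraph noise). Returns the mean displacement of the pair.
        """
        rng = random.Random(seed)
        x1, x2 = 0.0, 0.1
        k = k_pair[0]
        sigma = math.sqrt(2.0 * temp * dt)
        for _ in range(steps):
            if rng.random() < switch_rate * dt:   # random switch of the interaction
                k = k_pair[1] if k == k_pair[0] else k_pair[0]
            f1 = F - U0 * math.sin(x1) - k * (x1 - x2)
            f2 = F - U0 * math.sin(x2) - k * (x2 - x1)
            x1 += f1 * dt + sigma * rng.gauss(0.0, 1.0)
            x2 += f2 * dt + sigma * rng.gauss(0.0, 1.0)
        return 0.5 * (x1 + x2)
    ```

    With a positive tilt F exceeding the potential amplitude U0, the pair drifts in the direction of the tilt; varying switch_rate is the knob that would probe the effect of fluctuating interactions on transport.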

  20. Computing molecular fluctuations in biochemical reaction systems based on a mechanistic, statistical theory of irreversible processes.

    PubMed

    Kulasiri, Don

    2011-01-01

    We discuss the quantification of molecular fluctuations in biochemical reaction systems within the context of intracellular processes associated with gene expression. We take the molecular reactions pertaining to circadian rhythms to develop models of molecular fluctuations in this chapter. There are a significant number of studies on stochastic fluctuations in intracellular genetic regulatory networks based on single-cell-level experiments. In order to understand the fluctuations associated with gene expression in circadian rhythm networks, it is important to model the interactions of transcription factors with the E-boxes in the promoter regions of some of the genes. The pertinent aspects of a near-equilibrium theory that would integrate the thermodynamical and particle-dynamic characteristics of intracellular molecular fluctuations are discussed, and the theory is extended by using the theory of stochastic differential equations. We then model the fluctuations associated with the promoter regions in a general mathematical setting. We implemented the ubiquitous Gillespie algorithm, which is used to simulate stochasticity in biochemical networks, for each of the motifs. Both the theory and the Gillespie simulations gave the same results in terms of the time evolution of means and variances of molecular numbers. As biochemical reactions occur far from equilibrium (hence the use of the Gillespie algorithm), these results suggest that the near-equilibrium theory should be a good approximation for some of the biochemical reactions. © 2011 Elsevier Inc. All rights reserved.
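
    The agreement between analytic moments and Gillespie simulation that the chapter reports can be reproduced for the simplest gene-expression motif: constitutive transcription at rate k with first-order degradation at rate d, whose stationary copy-number distribution is Poisson with mean and variance both equal to k/d. The parameter values below are illustrative, not taken from the chapter.

    ```python
    import random

    def gillespie_birth_death(k=20.0, d=1.0, t_end=2000.0, seed=7):
        """Gillespie SSA for  0 -> mRNA (rate k),  mRNA -> 0 (rate d*m).

        Returns the time-averaged mean and variance of the copy number m;
        both should approach k/d, the Poisson stationary value.
        """
        rng = random.Random(seed)
        m, t = 0, 0.0
        s1 = s2 = total = 0.0
        while t < t_end:
            rate = k + d * m
            dt = rng.expovariate(rate)
            if t > 0.05 * t_end:          # skip the initial transient
                s1 += m * dt
                s2 += m * m * dt
                total += dt
            m += 1 if rng.random() < k / rate else -1
            t += dt
        mean = s1 / total
        return mean, s2 / total - mean * mean
    ```

    With k/d = 20, both returned moments land near 20, matching the analytic result; this is the same mean/variance check the chapter performs for its promoter motifs.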

  1. A composition algorithm based on crossmodal taste-music correspondences

    PubMed Central

    Mesz, Bruno; Sigman, Mariano; Trevisan, Marcos A.

    2012-01-01

    While there is broad consensus about the structural similarities between language and music, comparably less attention has been devoted to semantic correspondences between these two ubiquitous manifestations of human culture. We have investigated the relations between music and a narrow and bounded domain of semantics: the words and concepts referring to taste sensations. In a recent work, we found that taste words were consistently mapped to musical parameters. Bitter is associated with low-pitched and continuous music (legato), salty is characterized by silences between notes (staccato), sour is high pitched, dissonant and fast and sweet is consonant, slow and soft (Mesz et al., 2011). Here we extended these ideas, in a synergistic dialog between music and science, investigating whether music can be algorithmically generated from taste-words. We developed and implemented an algorithm that exploits a large corpus of classic and popular songs. New musical pieces were produced by choosing fragments from the corpus and modifying them to minimize their distance to the region in musical space that characterizes each taste. In order to test the capability of the produced music to elicit significant associations with the different tastes, musical pieces were produced and judged by a group of non-musicians. Results showed that participants could decode well above chance the taste-word of the composition. We also discuss how our findings can be expressed in a performance bridging music and cognitive science. PMID:22557952
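
    The selection step of such a composition algorithm, choosing the corpus fragment closest to the target taste region in a musical feature space and nudging it toward that region, can be sketched as follows. The feature coordinates (pitch, tempo, articulation, each normalized to [0, 1]) and the centroid values are hypothetical stand-ins loosely inspired by the reported mappings (e.g., sour = high-pitched and fast), not the parameters used by Mesz et al.

    ```python
    import math

    # Hypothetical taste centroids in a (pitch, tempo, articulation) feature space;
    # articulation runs from 0 (staccato) to 1 (legato).
    TASTE_CENTROIDS = {
        "sweet":  (0.4, 0.2, 0.8),
        "bitter": (0.1, 0.3, 0.9),
        "salty":  (0.5, 0.6, 0.1),
        "sour":   (0.9, 0.8, 0.4),
    }

    def select_fragment(taste, corpus):
        """Return the name of the corpus fragment nearest to the taste centroid."""
        target = TASTE_CENTROIDS[taste]
        return min(corpus, key=lambda name: math.dist(corpus[name], target))

    def morph(features, taste, alpha=0.5):
        """Move a fragment's features a fraction alpha of the way to the centroid."""
        target = TASTE_CENTROIDS[taste]
        return tuple(f + alpha * (t - f) for f, t in zip(features, target))
    ```

    For example, given a toy corpus {"nocturne": (0.3, 0.2, 0.9), "etude": (0.8, 0.9, 0.3)}, the "sour" target selects the fast, high-pitched "etude", which morph() then pulls partway toward the sour centroid.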

  2. Multistability, cross-modal binding and the additivity of conjoined grouping principles

    PubMed Central

    Kubovy, Michael; Yu, Minhong

    2012-01-01

    We present a sceptical view of multimodal multistability—drawing most of our examples from the relation between audition and vision. We begin by summarizing some of the principal ways in which audio-visual binding takes place. We review the evidence that unambiguous stimulation in one modality may affect the perception of a multistable stimulus in another modality. Cross-modal influences of one multistable stimulus on the multistability of another are different: they have occurred only in speech perception. We then argue that the strongest relation between perceptual organization in vision and perceptual organization in audition is likely to be by way of analogous Gestalt laws. We conclude with some general observations about multimodality. PMID:22371617

  3. The effect of unimodal affective priming on dichotic emotion recognition.

    PubMed

    Voyer, Daniel; Myles, Daniel

    2017-11-15

    The present report concerns two experiments extending to unimodal priming the cross-modal priming effects observed with auditory emotions by Harding and Voyer [(2016). Laterality effects in cross-modal affective priming. Laterality: Asymmetries of Body, Brain and Cognition, 21, 585-605]. Experiment 1 used binaural targets to establish the presence of the priming effect and Experiment 2 used dichotically presented targets to examine auditory asymmetries. In Experiment 1, 82 university students completed a task in which binaural targets consisting of one of 4 English words inflected in one of 4 emotional tones were preceded by binaural primes consisting of one of 4 Mandarin words pronounced in the same (congruent) or different (incongruent) emotional tones. Trials where the prime emotion was congruent with the target emotion showed faster responses and higher accuracy in identifying the target emotion. In Experiment 2, 60 undergraduate students participated and the target was presented dichotically instead of binaurally. Primes congruent with the left ear produced a large left ear advantage, whereas right congruent primes produced a right ear advantage. These results indicate that unimodal priming produces stronger effects than those observed under cross-modal priming. The findings suggest that priming should likely be considered a strong top-down influence on laterality effects.

  4. Cross-modal decoupling in temporal attention.

    PubMed

    Mühlberg, Stefanie; Oriolo, Giovanni; Soto-Faraco, Salvador

    2014-06-01

    Prior studies have repeatedly reported behavioural benefits to events occurring at attended, compared to unattended, points in time. It has been suggested that, as for spatial orienting, temporal orienting of attention spreads across sensory modalities in a synergistic fashion. However, the consequences of cross-modal temporal orienting of attention remain poorly understood. One challenge is that the passage of time leads to an increase in event predictability throughout a trial, thus making it difficult to interpret possible effects (or lack thereof). Here we used a design that avoids complete temporal predictability to investigate whether attending to a sensory modality (vision or touch) at a point in time confers beneficial access to events in the other, non-attended, sensory modality (touch or vision, respectively). In contrast to previous studies and to what happens with spatial attention, we found that events in one (unattended) modality do not automatically benefit from happening at the time point when another modality is expected. Instead, it seems that attention can be deployed in time with relative independence for different sensory modalities. Based on these findings, we argue that temporal orienting of attention can be cross-modally decoupled in order to flexibly react according to the environmental demands, and that the efficiency of this selective decoupling unfolds in time. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  5. Cross-Modal Associations between Sounds and Drink Tastes/Textures: A Study with Spontaneous Production of Sound-Symbolic Words.

    PubMed

    Sakamoto, Maki; Watanabe, Junji

    2016-03-01

    Many languages have a word class whose speech sounds are linked to sensory experiences. Several recent studies have demonstrated cross-modal associations (or correspondences) between sounds and gustatory sensations by asking participants to match predefined sound-symbolic words (e.g., "maluma/takete") with the taste/texture of foods. Here, we further explore cross-modal associations using the spontaneous production of words and semantic ratings of sensations. In the experiment, after drinking liquids, participants were asked to express their taste/texture using Japanese sound-symbolic words, and at the same time, to evaluate it in terms of criteria expressed by adjectives. Because the Japanese language has a large vocabulary of sound-symbolic words, and Japanese people frequently use them to describe taste/texture, analyzing a variety of Japanese sound-symbolic words spontaneously produced to express taste/textures might enable us to explore the mechanism of taste/texture categorization. A hierarchical cluster analysis based on the relationship between linguistic sounds and taste/texture evaluations revealed the structure of sensation categories. The results indicate that an emotional evaluation like pleasant/unpleasant is the primary cluster in gustation. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
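
    The hierarchical cluster analysis the authors describe can be sketched with a naive single-linkage agglomerative procedure. The input vectors below would be per-word rating profiles; the actual data (Japanese sound-symbolic words rated on adjective scales) are not reproduced here, so the test points are fabricated placeholders. The merge order shows how words with similar taste/texture evaluations group together first.

    ```python
    import math

    def single_linkage(points):
        """Agglomerative clustering with single linkage.

        points: list of equal-length numeric tuples (one rating vector per word).
        Returns the sequence of merged clusters as frozensets of point indices.
        """
        clusters = [frozenset([i]) for i in range(len(points))]
        merges = []

        def dist(a, b):  # single linkage: closest pair across the two clusters
            return min(math.dist(points[i], points[j]) for i in a for j in b)

        while len(clusters) > 1:
            i, j = min(
                ((i, j) for i in range(len(clusters))
                        for j in range(i + 1, len(clusters))),
                key=lambda p: dist(clusters[p[0]], clusters[p[1]]),
            )
            merged = clusters[i] | clusters[j]
            merges.append(merged)
            clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
        return merges
    ```

    Reading the merge sequence bottom-up recovers the dendrogram; the first merges correspond to the tightest sensation categories, such as the pleasant/unpleasant split the study reports as the primary cluster.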

  6. Olfactory-visual integration facilitates perception of subthreshold negative emotion.

    PubMed

    Novak, Lucas R; Gitelman, Darren R; Schuyler, Brianna; Li, Wen

    2015-10-01

    A fast growing literature of multisensory emotion integration notwithstanding, the chemical senses, intimately associated with emotion, have been largely overlooked. Moreover, an ecologically highly relevant principle of "inverse effectiveness", rendering maximal integration efficacy with impoverished sensory input, remains to be assessed in emotion integration. Presenting minute, subthreshold negative (vs. neutral) cues in faces and odors, we demonstrated olfactory-visual emotion integration in improved emotion detection (especially among individuals with weaker perception of unimodal negative cues) and response enhancement in the amygdala. Moreover, while perceptual gain for visual negative emotion involved the posterior superior temporal sulcus/pSTS, perceptual gain for olfactory negative emotion engaged both the associative olfactory (orbitofrontal) cortex and amygdala. Dynamic causal modeling (DCM) analysis of fMRI timeseries further revealed connectivity strengthening among these areas during crossmodal emotion integration. That multisensory (but not low-level unisensory) areas exhibited both enhanced response and region-to-region coupling favors a top-down (vs. bottom-up) account for olfactory-visual emotion integration. Current findings thus confirm the involvement of multisensory convergence areas, while highlighting unique characteristics of olfaction-related integration. Furthermore, successful crossmodal binding of subthreshold aversive cues not only supports the principle of "inverse effectiveness" in emotion integration but also accentuates the automatic, unconscious quality of crossmodal emotion synthesis. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Size-Sensitive Perceptual Representations Underlie Visual and Haptic Object Recognition

    PubMed Central

    Craddock, Matt; Lawson, Rebecca

    2009-01-01

    A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes and were the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching and there was no interaction between the cost of size changes and direction of transfer. Together the unimodal and crossmodal matching results suggest that the same, size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations. PMID:19956685

  8. Men's Preferences for Women's Femininity in Dynamic Cross-Modal Stimuli

    PubMed Central

    O'Connor, Jillian J. M.; Fraccaro, Paul J.; Pisanski, Katarzyna; Tigue, Cara C.; Feinberg, David R.

    2013-01-01

    Men generally prefer feminine women's faces and voices over masculine women's faces and voices, and these cross-modal preferences are positively correlated. Men's preferences for female facial and vocal femininity have typically been investigated independently by presenting soundless still images separately from audio-only vocal recordings. For the first time ever, we presented men with short video clips in which dynamic faces and voices were simultaneously manipulated in femininity/masculinity. Men preferred feminine men's faces over masculine men's faces, and preferred masculine men's voices over feminine men's voices. We found that men preferred feminine women's faces and voices over masculine women's faces and voices. Men's attractiveness ratings of both feminine and masculine faces were increased by the addition of vocal femininity. Also, men's attractiveness ratings of feminine and masculine voices were increased by the addition of facial femininity present in the video. Men's preferences for vocal and facial femininity were significantly and positively correlated when stimuli were female, but not when they were male. Our findings complement other evidence for cross-modal femininity preferences among male raters, and show that preferences observed in studies using still images and/or independently presented vocal stimuli are also observed when dynamic faces and voices are displayed simultaneously in video format. PMID:23936037

  9. Long-Lasting Crossmodal Cortical Reorganization Triggered by Brief Postnatal Visual Deprivation.

    PubMed

    Collignon, Olivier; Dormal, Giulia; de Heering, Adelaide; Lepore, Franco; Lewis, Terri L; Maurer, Daphne

    2015-09-21

    Animal and human studies have demonstrated that transient visual deprivation early in life, even for a very short period, permanently alters the response properties of neurons in the visual cortex and leads to corresponding behavioral visual deficits. While it is acknowledged that early-onset and longstanding blindness leads the occipital cortex to respond to non-visual stimulation, it remains unknown whether a short and transient period of postnatal visual deprivation is sufficient to trigger crossmodal reorganization that persists after years of visual experience. In the present study, we characterized brain responses to auditory stimuli in 11 adults who had been deprived of all patterned vision at birth by congenital cataracts in both eyes until they were treated at 9 to 238 days of age. When compared to controls with typical visual experience, the cataract-reversal group showed enhanced auditory-driven activity in focal visual regions. A combination of dynamic causal modeling with Bayesian model selection indicated that this auditory-driven activity in the occipital cortex was better explained by direct cortico-cortical connections with the primary auditory cortex than by subcortical connections. Thus, a short and transient period of visual deprivation early in life leads to enduring large-scale crossmodal reorganization of the brain circuitry typically dedicated to vision. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Cross-Modal Correspondence Among Vision, Audition, and Touch in Natural Objects: An Investigation of the Perceptual Properties of Wood.

    PubMed

    Kanaya, Shoko; Kariya, Kenji; Fujisaki, Waka

    2016-10-01

    Certain systematic relationships are often assumed between information conveyed from multiple sensory modalities; for instance, a small figure and a high pitch may be perceived as more harmonious. This phenomenon, termed cross-modal correspondence, may result from correlations between multi-sensory signals learned in daily experience of the natural environment. If so, we would observe cross-modal correspondences not only in the perception of artificial stimuli but also in the perception of natural objects. To test this hypothesis, we reanalyzed data collected previously in our laboratory examining perceptions of the material properties of wood using vision, audition, and touch. We compared participant evaluations of three perceptual properties (surface brightness, sharpness of sound, and smoothness) of the wood blocks obtained separately via vision, audition, and touch. Significant positive correlations were identified for all properties in the audition-touch comparison, and for two of the three properties in the vision-touch comparison. By contrast, no properties exhibited significant positive correlations in the vision-audition comparison. These results suggest that we learn correlations between multi-sensory signals through experience; however, the strength of this statistical learning apparently depends on the particular combination of sensory modalities involved. © The Author(s) 2016.
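
    The reanalysis described above reduces to correlating modality-specific evaluations of the same objects. A minimal Python sketch of that comparison is shown below, using invented ratings for six hypothetical wood blocks (the study's actual data are not reproduced here):

```python
import numpy as np

# Hypothetical smoothness ratings for six wood blocks, each judged
# once by touch and once by audition (values are invented for
# illustration only).
touch    = np.array([4.2, 3.1, 5.0, 2.8, 3.9, 4.6])
audition = np.array([4.0, 3.3, 4.8, 2.5, 4.1, 4.4])

# A significant positive Pearson correlation between the two sets of
# ratings would indicate a cross-modal correspondence for this
# property between audition and touch.
r = np.corrcoef(touch, audition)[0, 1]
print(r > 0)  # True for these illustrative values
```

    In the study itself, the same comparison is repeated for each property and each pair of modalities, which is how modality-pair-dependent strength of the correspondence becomes visible.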

  11. Crossmodal plasticity in the fusiform gyrus of late blind individuals during voice recognition.

    PubMed

    Hölig, Cordula; Föcker, Julia; Best, Anna; Röder, Brigitte; Büchel, Christian

    2014-12-01

    Because visual information is unavailable to them, blind individuals rely on voices to identify other people. In congenitally blind adults, the anterior fusiform gyrus has been shown to be active during voice recognition. Such crossmodal changes have been associated with a superiority of blind adults in voice perception. The key question of the present functional magnetic resonance imaging (fMRI) study was whether visual deprivation that occurs in adulthood is followed by similar adaptive changes of the voice identification system. Late blind individuals and matched sighted participants were tested in a priming paradigm in which two voice stimuli were presented in succession. The prime (S1) and the target (S2) were either from the same speaker (person-congruent voices) or from two different speakers (person-incongruent voices). Participants had to classify the S2 as coming from either an old or a young person. Only in late blind individuals, but not in matched sighted controls, was activation in the anterior fusiform gyrus modulated by voice identity: late blind volunteers showed an increase of the BOLD signal in response to person-incongruent compared with person-congruent trials. These results suggest that the fusiform gyrus adapts to input from a new modality even in the mature brain and thus demonstrate an adult type of crossmodal plasticity. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Treatment of sentence comprehension and production in aphasia: is there cross-modal generalisation?

    PubMed

    Adelt, Anne; Hanne, Sandra; Stadie, Nicole

    2016-09-09

    Exploring generalisation following treatment of language deficits in aphasia can provide insights into the functional relation of the cognitive processing systems involved. In the present study, we first review treatment outcomes of interventions targeting sentence processing deficits and, second, report a treatment study examining the occurrence of practice effects and generalisation in sentence comprehension and production. In order to explore the potential linkage between the processing systems involved in comprehending and producing sentences, we investigated whether improvements generalise within modalities (i.e., uni-modal generalisation in comprehension or in production) and/or across modalities (i.e., cross-modal generalisation from comprehension to production or vice versa). Two individuals with aphasia displaying co-occurring deficits in sentence comprehension and production were trained on complex, non-canonical sentences in both modalities. Two evidence-based treatment protocols were applied in a crossover intervention study, with the sequence of treatment phases randomly allocated. Both participants benefited significantly from treatment, showing uni-modal generalisation in both comprehension and production. However, cross-modal generalisation did not occur. The magnitude of uni-modal generalisation in sentence production was related to participants' sentence comprehension performance prior to treatment. These findings support the assumption of modality-specific sub-systems for sentence comprehension and production, linked uni-directionally from comprehension to production.

  13. Pharmacologic attenuation of cross-modal sensory augmentation within the chronic pain insula

    PubMed Central

    Harte, Steven E.; Ichesco, Eric; Hampson, Johnson P.; Peltier, Scott J.; Schmidt-Wilcke, Tobias; Clauw, Daniel J.; Harris, Richard E.

    2016-01-01

    Pain can be elicited through all mammalian sensory pathways, yet cross-modal sensory integration, and its relationship to clinical pain, is largely unexplored. Centralized chronic pain conditions such as fibromyalgia are often associated with symptoms of multisensory hypersensitivity. In this study, female patients with fibromyalgia demonstrated cross-modal hypersensitivity to visual and pressure stimuli compared with age- and sex-matched healthy controls. Functional magnetic resonance imaging revealed that insular activity evoked by an aversive level of visual stimulation was associated with the intensity of fibromyalgia pain. Moreover, attenuation of this insular activity by the analgesic pregabalin was accompanied by concomitant reductions in clinical pain. A multivariate classification method using support vector machines (SVM) applied to visual-evoked brain activity distinguished patients with fibromyalgia from healthy controls with 82% accuracy. A separate SVM classification of treatment effects on visual-evoked activity reliably identified when patients were administered pregabalin as compared with placebo. Both SVM analyses identified significant weights within the insular cortex during aversive visual stimulation. These data suggest that abnormal integration of multisensory and pain pathways within the insula may represent a pathophysiological mechanism in some chronic pain conditions and that insular response to aversive visual stimulation may have utility as a marker for analgesic drug development. PMID:27101425

  14. Low-complexity stochastic modeling of wall-bounded shear flows

    NASA Astrophysics Data System (ADS)

    Zare, Armin

    Turbulent flows are ubiquitous in nature and they appear in many engineering applications. Transition to turbulence, in general, increases skin-friction drag in air/water vehicles, compromising their fuel efficiency, and reduces the efficiency and longevity of wind turbines. While traditional flow control techniques combine physical intuition with costly experiments, their effectiveness can be significantly enhanced by control design based on low-complexity models and optimization. In this dissertation, we develop a theoretical and computational framework for the low-complexity stochastic modeling of wall-bounded shear flows. Part I of the dissertation is devoted to the development of a modeling framework which incorporates data-driven techniques to refine physics-based models. We consider the problem of completing partially known sample statistics in a way that is consistent with underlying stochastically driven linear dynamics. Neither the statistics nor the dynamics are precisely known. Thus, our objective is to reconcile the two in a parsimonious manner. To this end, we formulate optimization problems to identify the dynamics and directionality of input excitation in order to explain and complete available covariance data. For problem sizes that general-purpose solvers cannot handle, we develop customized optimization algorithms based on alternating direction methods. The solution to the optimization problem provides information about critical directions that have maximal effect in bringing model and statistics in agreement. In Part II, we employ our modeling framework to account for statistical signatures of turbulent channel flow using low-complexity stochastic dynamical models. We demonstrate that white-in-time stochastic forcing is not sufficient to explain turbulent flow statistics and develop models for colored-in-time forcing of the linearized Navier-Stokes equations.
We also examine the efficacy of stochastically forced linearized NS equations and their parabolized equivalents in the receptivity analysis of velocity fluctuations to external sources of excitation as well as capturing the effect of the slowly-varying base flow on streamwise streaks and Tollmien-Schlichting waves. In Part III, we develop a model-based approach to design surface actuation of turbulent channel flow in the form of streamwise traveling waves. This approach is capable of identifying the drag reducing trends of traveling waves in a simulation-free manner. We also use the stochastically forced linearized NS equations to examine the Reynolds number independent effects of spanwise wall oscillations on drag reduction in turbulent channel flows. This allows us to extend the predictive capability of our simulation-free approach to high Reynolds numbers.
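
    The covariance-completion idea at the core of Part I can be illustrated in miniature: for stable linear dynamics driven by white noise, the steady-state covariance must satisfy an algebraic Lyapunov equation, so candidate statistics can be checked against (or generated from) the assumed dynamics. Below is a minimal Python sketch with a toy system matrix `A` and forcing covariance `Q`, both hypothetical; the dissertation treats far larger systems and colored-in-time forcing identified by customized optimization:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable 3-state system dx = A x dt + dw, driven by
# white-in-time forcing with covariance Q (illustrative values only).
A = np.array([[-1.0,  0.5,  0.0],
              [ 0.0, -2.0,  1.0],
              [ 0.0,  0.0, -3.0]])
Q = np.eye(3)

# The steady-state covariance X satisfies A X + X A^T + Q = 0;
# scipy's solver takes the right-hand side as -Q.
X = solve_continuous_lyapunov(A, -Q)

# Consistency check: any statistics attributed to these dynamics
# under this forcing must satisfy the same Lyapunov equation.
print(np.allclose(A @ X + X @ A.T + Q, 0))  # True
print(np.allclose(X, X.T))                  # True: X is symmetric
```

    In the dissertation's setting only some entries of the covariance are known, and the forcing statistics themselves become optimization variables chosen so that an equation of this form holds.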

  15. Self-Organization by Stochastic Reconnection: The Mechanism Underlying CMEs/Flares

    NASA Astrophysics Data System (ADS)

    Antiochos, S. K.; Knizhnik, K. J.; DeVore, C. R.

    2017-12-01

    The largest explosions in the solar system are the giant CMEs/flares that produce the most dangerous space weather at Earth, yet may also have been essential for the origin of life. The root cause of CMEs/flares is that the lowest-lying magnetic field lines in the Sun's corona undergo the continual buildup of stress and free energy that can be released only through explosive ejection. We perform the first MHD simulations of a coronal-photospheric magnetic system that is driven by random photospheric convective flows and has a realistic geometry for the coronal field. Furthermore, our simulations accurately preserve the key constraint of magnetic helicity. We find that even though small-scale stress is injected randomly throughout the corona, the net result of "stochastic" coronal reconnection is a coherent stretching of the lowest-lying field lines. This highly counter-intuitive demonstration of self-organization - magnetic stress builds up locally rather than spreading out to a minimum energy state - is the fundamental mechanism responsible for the Sun's magnetic explosions and is likely to be a mechanism that is ubiquitous throughout space and laboratory plasmas. This work was supported in part by the NASA LWS and SR Programs.

  16. Kinetics of autocatalysis in small systems

    NASA Astrophysics Data System (ADS)

    Arslan, Erdem; Laurenzi, Ian J.

    2008-01-01

    Autocatalysis is a ubiquitous chemical process that drives a plethora of biological phenomena, including the self-propagation of prions etiological to Creutzfeldt-Jakob disease and bovine spongiform encephalopathy. To explain the dynamics of these systems, we have solved the chemical master equation for the irreversible autocatalytic reaction A + B → 2A. This solution comprises the first closed-form expression describing the probabilistic time evolution of the populations of autocatalytic and noncatalytic molecules from an arbitrary initial state. Grand probability distributions are likewise presented for autocatalysis in the equilibrium limit (A + B ⇌ 2A), allowing for the first mechanistic comparison of this process with chemical isomerization (B ⇌ A) in small systems. Although the average population of autocatalytic (i.e., prion) molecules largely conforms to the predictions of the classical "rate law" approach in time and the law of mass action at equilibrium, thermodynamic differences between the entropies of isomerization and autocatalysis are revealed, suggesting a "mechanism dependence" of state variables for chemical reaction processes. These results demonstrate the importance of chemical mechanism and molecularity in the development of stochastic processes for chemical systems and the relationship between the stochastic approach to chemical kinetics and nonequilibrium thermodynamics.
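
    The master equation solved in closed form above can also be sampled numerically: Gillespie's stochastic simulation algorithm generates exact trajectories of A + B → 2A one reaction event at a time. A minimal Python sketch follows; the rate constant and initial populations are illustrative, not taken from the paper:

```python
import random

def gillespie_autocatalysis(n_a, n_b, k=1.0, seed=0):
    """Sample one trajectory of the irreversible autocatalytic
    reaction A + B -> 2A using Gillespie's direct method.
    Returns the event times and the population of A after each event."""
    rng = random.Random(seed)
    t, times, a_counts = 0.0, [0.0], [n_a]
    while n_a > 0 and n_b > 0:
        propensity = k * n_a * n_b        # bimolecular propensity
        t += rng.expovariate(propensity)  # exponential waiting time
        n_a, n_b = n_a + 1, n_b - 1       # one B is converted to A
        times.append(t)
        a_counts.append(n_a)
    return times, a_counts

times, a_counts = gillespie_autocatalysis(n_a=1, n_b=50)
print(a_counts[-1])  # 51: every B molecule is eventually converted
```

    Averaging many such trajectories recovers the mean behavior that the abstract compares against the classical rate-law prediction, while the trajectory-to-trajectory spread reflects the fluctuations that matter in small systems.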

  17. Neural practice effect during cross-modal selective attention: Supra-modal and modality-specific effects.

    PubMed

    Xia, Jing; Zhang, Wei; Jiang, Yizhou; Li, You; Chen, Qi

    2018-05-16

    Practice and experiences gradually shape the central nervous system, from the synaptic level to large-scale neural networks. In a natural multisensory environment, even when inundated by streams of information from multiple sensory modalities, our brain does not give equal weight to different modalities. Rather, visual information more frequently receives preferential processing and eventually dominates consciousness and behavior, i.e., visual dominance. It remains unknown, however, whether the practice effect during cross-modal selective attention is supra-modal or modality-specific, and moreover whether the practice effect shows modality preferences similar to the visual dominance effect in the multisensory environment. To answer these two questions, we adopted a cross-modal selective attention paradigm in conjunction with a hybrid fMRI design. Behaviorally, visual performance significantly improved while auditory performance remained constant with practice, indicating that visual attention adapted behavior with practice more flexibly than auditory attention did. At the neural level, the practice effect was associated with decreasing neural activity in the frontoparietal executive network and increasing activity in the default mode network, which occurred independently of the modality attended, i.e., supra-modal mechanisms. On the other hand, functional decoupling between the auditory and visual systems was observed as practice progressed, varying as a function of the modality attended. The auditory system was functionally decoupled from both the dorsal and ventral visual streams during auditory attention, but only from the ventral visual stream during visual attention. To efficiently suppress irrelevant visual information with practice, auditory attention thus needs to additionally decouple the auditory system from the dorsal visual stream. These modality-specific mechanisms, together with the behavioral effect, support the visual dominance model in terms of the practice effect during cross-modal selective attention. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    PubMed

    Alais, David; Cass, John

    2010-06-23

    An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism or by multiple modality-specific systems. We used a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual, or a combined audiovisual temporal order judgment (TOJ). Prior to learning, groups were pre-tested on a range of TOJ tasks within and beyond their group's modality, so that transfer of any learning from the trained task could be measured by post-testing the other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, nor did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supra-modal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally, the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.

  19. Cross-modality PET/CT and contrast-enhanced CT imaging for pancreatic cancer

    PubMed Central

    Zhang, Jian; Zuo, Chang-Jing; Jia, Ning-Yang; Wang, Jian-Hua; Hu, Sheng-Ping; Yu, Zhong-Fei; Zheng, Yuan; Zhang, An-Yu; Feng, Xiao-Yuan

    2015-01-01

    AIM: To explore the diagnostic value of the cross-modality fusion images provided by positron emission tomography/computed tomography (PET/CT) and contrast-enhanced CT (CECT) for pancreatic cancer (PC). METHODS: Data from 70 patients with pancreatic lesions who underwent CECT and PET/CT examinations at our hospital from August 2010 to October 2012 were analyzed. PET/CECT for the cross-modality image fusion was obtained using TrueD software. The diagnostic efficiencies of PET/CT, CECT and PET/CECT were calculated and compared with each other using a χ2 test. P < 0.05 was considered to indicate statistical significance. RESULTS: Of the total 70 patients, 50 had PC and 20 had benign lesions. The differences in the sensitivity, negative predictive value (NPV), and accuracy between CECT and PET/CECT in detecting PC were statistically significant (P < 0.05 for each). In 15 of the 31 patients with PC who underwent a surgical operation, peripancreatic vessel invasion was verified. The differences in the sensitivity, positive predictive value, NPV, and accuracy of CECT vs PET/CT and PET/CECT vs PET/CT in diagnosing peripancreatic vessel invasion were statistically significant (P < 0.05 for each). In 19 of the 31 patients with PC who underwent a surgical operation, regional lymph node metastasis was verified by postsurgical histology. There was no statistically significant difference among the three methods in detecting regional lymph node metastasis (P > 0.05 for each). In 17 of the 50 patients with PC confirmed by histology or clinical follow-up, distant metastasis was confirmed. The differences in the sensitivity and NPV between CECT and PET/CECT in detecting distant metastasis were statistically significant (P < 0.05 for each). CONCLUSION: Cross-modality image fusion of PET/CT and CECT is a convenient and effective method that can be used to diagnose and stage PC, compensating for the defects of PET/CT and CECT when they are conducted individually. PMID:25780297
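
    The comparisons of diagnostic performance reported above can be reproduced in outline with a χ2 test on detection counts. A sketch using scipy follows, with invented counts (not the paper's data) standing in for the detected/missed tallies of two imaging readings:

```python
from scipy.stats import chi2_contingency

# Hypothetical detection outcomes for 50 patients with pancreatic
# cancer under two imaging readings (counts are illustrative only).
#            detected  missed
table = [[49, 1],    # PET/CECT fusion
         [40, 10]]   # CECT alone

chi2, p, dof, expected = chi2_contingency(table)
print(dof)       # 1 degree of freedom for a 2x2 table
print(p < 0.05)  # True for these illustrative counts
```

    Note that comparing paired readings on the same patients would more properly use McNemar's test; the independent-samples χ2 shown here simply mirrors the analysis named in the abstract.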

  20. One bout of open skill exercise improves cross-modal perception and immediate memory in healthy older adults who habitually exercise.

    PubMed

    O'Brien, Jessica; Ottoboni, Giovanni; Tessari, Alessia; Setti, Annalisa

    2017-01-01

    A single bout of exercise can be associated with positive effects on cognition, owing to physiological changes associated with muscular activity, increased arousal, and the training of cognitive skills during exercise. While the positive effects of life-long physical activity on cognitive ageing are well demonstrated, it is not well established whether one bout of exercise is sufficient to register such benefits in older adults. The aim of this study was to test the effect of one bout of exercise on two cognitive processes essential to daily life and known to decline with ageing: audio-visual perception and immediate memory. Fifty-eight older adults took part in a quasi-experimental study and were divided into three groups based on their habitual activity (open skill exercise: mean age = 69.65, SD = 5.64; closed skill exercise: N = 18, 94% female; sedentary activity control group: N = 21, 62% female). They were then tested before and after their activity (duration between 60 and 80 minutes). Results showed improvement in sensitivity in audio-visual perception in the open skill group, and improvements in one of the measures of immediate memory in both exercise groups, after controlling for baseline differences including global cognition and health. These findings indicate that immediate benefits for cross-modal perception and memory can be obtained after open skill exercise, whereas improvements after closed skill exercise may be limited to memory benefits. Perceptual benefits are likely to be associated with arousal, while memory benefits may be due to the training effects provided by task requirements during exercise. The respective roles of qualitative and quantitative differences between these activities in terms of immediate cognitive benefits should be further investigated. Importantly, the present results provide the first evidence for a modulation of cross-modal perception by exercise, suggesting a plausible avenue for rehabilitation of cross-modal perception deficits, which are emerging as a significant contributor to functional decline in ageing.

  2. Manipulating Bodily Presence Affects Cross-Modal Spatial Attention: A Virtual-Reality-Based ERP Study

    PubMed Central

    Harjunen, Ville J.; Ahmed, Imtiaj; Jacucci, Giulio; Ravaja, Niklas; Spapé, Michiel M.

    2017-01-01

    Earlier studies have revealed cross-modal visuo-tactile interactions in endogenous spatial attention. The current research used event-related potentials (ERPs) and virtual reality (VR) to identify how the visual cues of the perceiver’s body affect visuo-tactile interaction in endogenous spatial attention and at what point in time the effect takes place. A bimodal oddball task with lateralized tactile and visual stimuli was presented in two VR conditions, one with and one without visible hands, and one VR-free control with hands in view. Participants were required to silently count one type of stimulus and ignore all other stimuli presented in an irrelevant modality or location. The presence of hands was found to modulate early and late components of somatosensory and visual evoked potentials. At sensory-perceptual stages, the presence of virtual or real hands amplified attention-related negativity on the somatosensory N140 and cross-modal interaction in the somatosensory and visual P200. At postperceptual stages, an amplified N200 component was obtained in somatosensory and visual evoked potentials, indicating increased response inhibition in response to non-target stimuli. The somatosensory, but not the visual, N200 effect was enhanced when the virtual hands were present. The findings suggest that bodily presence affects sustained cross-modal spatial attention between vision and touch, and that this effect is specifically present in ERPs related to early and late sensory processing as well as response inhibition, but absent in later attention- and memory-related P3 activity. Finally, the experiments provide commensurable scenarios for the estimation of the signal-to-noise ratio to quantify effects related to the use of a head-mounted display (HMD). However, despite valid a priori reasons for fearing signal interference due to an HMD, we observed no significant drop in the robustness of our ERP measurements. PMID:28275346

  3. Learning multisensory representations for auditory-visual transfer of sequence category knowledge: a probabilistic language of thought approach.

    PubMed

    Yildirim, Ilker; Jacobs, Robert A

    2015-06-01

    If a person is trained to recognize or categorize objects or events using one sensory modality, the person can often recognize or categorize those same (or similar) objects and events via a novel modality. This phenomenon is an instance of cross-modal transfer of knowledge. Here, we study the Multisensory Hypothesis, which states that people extract the intrinsic, modality-independent properties of objects and events and represent these properties in multisensory representations. These representations underlie cross-modal transfer of knowledge. We conducted an experiment evaluating whether people transfer sequence category knowledge across auditory and visual domains. Our experimental data clearly indicate that they do. We also developed a computational model accounting for our experimental results. Consistent with the probabilistic language of thought approach to cognitive modeling, our model formalizes multisensory representations as symbolic "computer programs" and uses Bayesian inference to learn these representations. Because the model demonstrates how the acquisition and use of amodal, multisensory representations can underlie cross-modal transfer of knowledge, and because the model accounts for subjects' experimental performances, our work lends credence to the Multisensory Hypothesis. Overall, our work suggests that people automatically extract and represent objects' and events' intrinsic properties, and use these properties to process and understand the same (and similar) objects and events when they are perceived through novel sensory modalities.

  5. Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.

    PubMed

    Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi

    2017-07-01

    Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): a visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activity in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. The sense of agency is action–effect causality perception based on cross-modal grouping

    PubMed Central

    Kawabe, Takahiro; Roseboom, Warrick; Nishida, Shin'ya

    2013-01-01

    Sense of agency, the experience of controlling external events through one's actions, stems from contiguity between action- and effect-related signals. Here we show that human observers link their action- and effect-related signals using a computational principle common to cross-modal sensory grouping. We first report that the detection of a delay between tactile and visual stimuli is enhanced when both stimuli are synchronized with separate auditory stimuli (experiment 1). This occurs because the synchronized auditory stimuli hinder the potential grouping between tactile and visual stimuli. We subsequently demonstrate an analogous effect on observers' key press as an action and a sensory event. This change is associated with a modulation in sense of agency; namely, sense of agency, as evaluated by apparent compressions of action–effect intervals (intentional binding) or subjective causality ratings, is impaired when both participant's action and its putative visual effect events are synchronized with auditory tones (experiments 2 and 3). Moreover, a similar role of action–effect grouping in determining sense of agency is demonstrated when the additional signal is presented in the modality identical to an effect event (experiment 4). These results are consistent with the view that sense of agency is the result of general processes of causal perception and that cross-modal grouping plays a central role in these processes. PMID:23740784

  7. The effect of early visual deprivation on the neural bases of multisensory processing.

    PubMed

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2015-06-01

    Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing.

  8. Perceived Odor-Taste Congruence Influences Intensity and Pleasantness Differently.

    PubMed

    Amsellem, Sherlley; Ohla, Kathrin

    2016-10-01

    The role of congruence in cross-modal interactions has received little attention. In most experiments involving cross-modal pairs, congruence is conceived of as a binary process according to which cross-modal pairs are categorized as perceptually and/or semantically matching or mismatching. The present study investigated whether odor-taste congruence can be perceived gradually and whether congruence impacts other facets of subjective experience, that is, intensity, pleasantness, and familiarity. To address these questions, we presented food odorants (chicken, orange, and 3 mixtures of the 2) and tastants (savory-salty and sour-sweet) in pairs varying in congruence. Participants were to report the perceived congruence of the pairs along with intensity, pleasantness, and familiarity. We found that participants could perceive distinct congruence levels, thereby favoring a multilevel account of congruence perception. In addition, familiarity and pleasantness followed the same pattern as the congruence, while intensity was highest for the most congruent and the most incongruent pairs and reduced for the intermediary-congruent pairs. Principal component analysis revealed that pleasantness and familiarity form one dimension of the phenomenological experience of odor-taste pairs that was orthogonal to intensity. The results bear implications for understanding the behavioral underpinnings of the perseverance of habitual food choices.

  9. Perceptual effects in auralization of virtual rooms

    NASA Astrophysics Data System (ADS)

    Kleiner, Mendel; Larsson, Pontus; Vastfjall, Daniel; Torres, Rendell R.

    2002-05-01

    By using various types of binaural simulation (or ``auralization'') of physical environments, it is now possible to study basic perceptual issues relevant to room acoustics, as well as to simulate the acoustic conditions found in concert halls and other auditoria. Binaural simulation of physical spaces in general is also important for virtual reality systems. This presentation will begin with an overview of the issues encountered in the auralization of rooms and other environments. We will then discuss the influence of various approximations in room modeling, in particular edge and surface scattering, on the perceived room response. Finally, we will discuss cross-modal effects, such as the influence of visual cues on the perception of auditory cues, and the influence of cross-modal effects on the judgement of ``perceived presence'' and the rating of room acoustic quality.

  10. Cross-modal associations in synaesthesia: Vowel colours in the ear of the beholder.

    PubMed

    Moos, Anja; Smith, Rachel; Miller, Sam R; Simmons, David R

    2014-01-01

    Human speech conveys many forms of information, but for some exceptional individuals (synaesthetes), listening to speech sounds can automatically induce visual percepts such as colours. In this experiment, grapheme-colour synaesthetes and controls were asked to assign colours, or shades of grey, to different vowel sounds. We then investigated whether the acoustic content of these vowel sounds influenced participants' colour and grey-shade choices. We found that both colour and grey-shade associations varied systematically with vowel changes. The colour effect was significant for both participant groups, but significantly stronger and more consistent for synaesthetes. Because not all vowel sounds that we used are "translatable" into graphemes, we conclude that acoustic-phonetic influences co-exist with established graphemic influences in the cross-modal correspondences of both synaesthetes and non-synaesthetes.

  11. A Cross-Modal Perspective on the Relationships between Imagery and Working Memory

    PubMed Central

    Likova, Lora T.

    2013-01-01

    Mapping the distinctions and interrelationships between imagery and working memory (WM) remains challenging. Although each of these major cognitive constructs is defined and treated in various ways across studies, most accept that both imagery and WM involve a form of internal representation available to our awareness. In WM, there is a further emphasis on goal-oriented, active maintenance, and use of this conscious representation to guide voluntary action. Multicomponent WM models incorporate representational buffers, such as the visuo-spatial sketchpad, plus central executive functions. If there is a visuo-spatial “sketchpad” for WM, does imagery involve the same representational buffer? Alternatively, does WM employ an imagery-specific representational mechanism to occupy our awareness? Or do both constructs utilize a more generic “projection screen” of an amodal nature? To address these issues, in a cross-modal fMRI study, I introduce a novel Drawing-Based Memory Paradigm, and conceptualize drawing as a complex behavior that is readily adaptable from the visual to non-visual modalities (such as the tactile modality), which opens intriguing possibilities for investigating cross-modal learning and plasticity. Blindfolded participants were trained through our Cognitive-Kinesthetic Method (Likova, 2010a, 2012) to draw complex objects guided purely by the memory of felt tactile images. If this WM task had been mediated by transfer of the felt spatial configuration to the visual imagery mechanism, the response-profile in visual cortex would be predicted to have the “top-down” signature of propagation of the imagery signal downward through the visual hierarchy. Remarkably, the pattern of cross-modal occipital activation generated by the non-visual memory drawing was essentially the inverse of this typical imagery signature. 
The sole visual hierarchy activation was isolated to the primary visual area (V1), and accompanied by deactivation of the entire extrastriate cortex, thus 'cutting off' any signal propagation from/to V1 through the visual hierarchy. The implications of these findings for the debate on the interrelationships between the core cognitive constructs of WM and imagery and the nature of internal representations are evaluated. PMID:23346061

  12. "Multisensory brand search: How the meaning of sounds guides consumers' visual attention": Correction to Knoeferle et al. (2016).

    PubMed

    2017-03-01

    Reports an error in "Multisensory brand search: How the meaning of sounds guides consumers' visual attention" by Klemens M. Knoeferle, Pia Knoeferle, Carlos Velasco and Charles Spence ( Journal of Experimental Psychology: Applied , 2016[Jun], Vol 22[2], 196-210). In the article, under Experiment 2, Design and Stimuli, the set number of target products and visual distractors reported in the second paragraph should be 20 and 13, respectively: "On each trial, the 16 products shown in the display were randomly selected from a set of 20 products belonging to different categories. Out of the set of 20 products, seven were potential targets, whereas the other 13 were used as visual distractors only throughout the experiment (since they were not linked to specific usage or consumption sounds)." Consequently, Appendix A in the supplemental materials has been updated. (The following abstract of the original article appeared in record 2016-28876-002.) Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. 
Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load.

  13. Reading in the dark: neural correlates and cross-modal plasticity for learning to read entire words without visual experience.

    PubMed

    Sigalov, Nadine; Maidenbaum, Shachar; Amedi, Amir

    2016-03-01

    Cognitive neuroscience has long attempted to determine the ways in which cortical selectivity develops, and the impact of nature vs. nurture on it. Congenital blindness (CB) offers a unique opportunity to test this question as the brains of blind individuals develop without visual experience. Here we approach this question through the reading network. Several areas in the visual cortex have been implicated as part of the reading network, and one of the main ones among them is the VWFA, which is selective to the form of letters and words. But what happens in the CB brain? On the one hand, it has been shown that cross-modal plasticity leads to the recruitment of occipital areas, including the VWFA, for linguistic tasks. On the other hand, we have recently demonstrated VWFA activity for letters in contrast to other visual categories when the information is provided via other senses such as touch or audition. Which of these tasks is more dominant? By which mechanism does the CB brain process reading? Using fMRI and visual-to-auditory sensory substitution, which transfers the topographical features of the letters, we compared reading with semantic and scrambled conditions in a group of CB individuals. We found activation in early auditory and visual cortices during the early processing phase (letter), while the later phase (word) showed VWFA and bilateral dorsal-intraparietal activations for words. This further supports the notion that many visual regions in general, even early visual areas, maintain a predilection for task processing even when the modality is variable and in spite of putative lifelong linguistic cross-modal plasticity. Furthermore, we find that the VWFA is recruited preferentially for letter and word form, while it was not recruited, and even exhibited deactivation, for an immediately subsequent semantic task, suggesting that, despite only short sensory-substitution experience, orthographic task processing can dominate semantic processing in the VWFA.
On a wider scope, this implies that, at least in some cases, cross-modal plasticity, which enables the recruitment of areas for new tasks, may be dominated by sensory-independent, task-specific activation.

  14. β-Diversity, Community Assembly, and Ecosystem Functioning.

    PubMed

    Mori, Akira S; Isbell, Forest; Seidl, Rupert

    2018-05-25

    Evidence is increasing for positive effects of α-diversity on ecosystem functioning. We highlight here the crucial role of β-diversity - a hitherto underexplored facet of biodiversity - for a better process-level understanding of biodiversity change and its consequences for ecosystems. A focus on β-diversity has the potential to improve predictions of natural and anthropogenic influences on diversity and ecosystem functioning. However, linking the causes and consequences of biodiversity change is complex because species assemblages in nature are shaped by many factors simultaneously, including disturbance, environmental heterogeneity, deterministic niche factors, and stochasticity. Because variability and change are ubiquitous in ecosystems, acknowledging these inherent properties of nature is an essential step for further advancing scientific knowledge of biodiversity-ecosystem functioning in theory and practice.

  15. Nonclassical Kinetics of Clonal yet Heterogeneous Enzymes.

    PubMed

    Park, Seong Jun; Song, Sanggeun; Jeong, In-Chun; Koh, Hye Ran; Kim, Ji-Hyun; Sung, Jaeyoung

    2017-07-06

    Enzyme-to-enzyme variation in the catalytic rate is ubiquitous among single enzymes created from the same genetic information, which persists over the lifetimes of living cells. Despite advances in single-enzyme technologies, the lack of an enzyme reaction model accounting for the heterogeneous activity of single enzymes has hindered a quantitative understanding of the nonclassical stochastic outcome of single enzyme systems. Here we present a new statistical kinetics and exactly solvable models for clonal yet heterogeneous enzymes with possibly nonergodic state dynamics and state-dependent reactivity, which enable a quantitative understanding of modern single-enzyme experimental results for the mean and fluctuation in the number of product molecules created by single enzymes. We also propose a new experimental measure of the heterogeneity and nonergodicity for a system of enzymes.
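
    The qualitative effect described above can be illustrated with a toy Monte Carlo model (an illustrative sketch, not the authors' exactly solvable model): each clonal enzyme draws a fixed catalytic rate from a gamma distribution, and given that rate its product count over a fixed time is Poissonian. Static enzyme-to-enzyme heterogeneity then inflates the Fano factor (variance/mean) of product counts above the Poisson value of 1. The gamma mixing distribution and all parameter values are assumptions made for the example.

```python
import math
import random

def knuth_poisson(rng, lam):
    """Sample a Poisson(lam) count via Knuth's algorithm (fine for modest lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= threshold:
            return k - 1

def fano_factor(rate_cv, n_enzymes=4000, rate_mean=1.0, t=20.0, seed=0):
    """Fano factor (variance/mean) of per-enzyme product counts when each
    enzyme's catalytic rate is drawn once from a gamma distribution with
    coefficient of variation rate_cv (rate_cv=0 gives identical enzymes)."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_enzymes):
        if rate_cv == 0:
            k = rate_mean                        # homogeneous population
        else:
            shape = 1.0 / rate_cv**2             # gamma shape from the CV
            k = rng.gammavariate(shape, rate_mean / shape)
        counts.append(knuth_poisson(rng, k * t)) # products made in time t
    m = sum(counts) / len(counts)
    var = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
    return var / m
```

For identical enzymes the Fano factor stays near 1 (classical Poisson statistics), whereas a 50% rate spread pushes it well above 1, mirroring the nonclassical, super-Poissonian fluctuations the abstract refers to.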

  16. Taming the Wild: A Unified Analysis of Hogwild!-Style Algorithms.

    PubMed

    De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher

    2015-12-01

    Stochastic gradient descent (SGD) is a ubiquitous algorithm for a variety of machine learning problems. Researchers and industry have developed several techniques to optimize SGD's runtime performance, including asynchronous execution and reduced precision. Our main result is a martingale-based analysis that enables us to capture the rich noise models that may arise from such techniques. Specifically, we use our new analysis in three ways: (1) we derive convergence rates for the convex case (Hogwild!) with relaxed assumptions on the sparsity of the problem; (2) we analyze asynchronous SGD algorithms for non-convex matrix problems including matrix completion; and (3) we design and analyze an asynchronous SGD algorithm, called Buckwild!, that uses lower-precision arithmetic. We show experimentally that our algorithms run efficiently for a variety of problems on modern hardware.
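
    The reduced-precision ingredient can be sketched in a few lines. A key idea behind low-precision SGD of the Buckwild! flavor is unbiased stochastic rounding, which keeps the quantized update correct in expectation. The sketch below is sequential rather than truly asynchronous, and all parameter values and the toy least-squares problem are illustrative assumptions, not the paper's setup.

```python
import math
import random

def stochastic_round(x, step=1.0 / 256):
    """Quantize x to a grid of spacing `step`, rounding up or down at random
    so that E[stochastic_round(x)] == x (unbiased rounding)."""
    q = x / step
    lo = math.floor(q)
    return (lo + (1 if random.random() < q - lo else 0)) * step

def lowprec_sgd(data, lr=0.05, iters=2000, step=1.0 / 256, seed=1):
    """SGD for the model y ~ w*x with the weight kept on a low-precision grid."""
    random.seed(seed)
    w = 0.0
    for _ in range(iters):
        x, y = random.choice(data)          # sample one (x, y) pair
        grad = 2.0 * (w * x - y) * x        # gradient of (w*x - y)^2
        w = stochastic_round(w - lr * grad, step)
    return w
```

On data generated as y = 3x, the quantized iterate settles within rounding noise of the true weight, illustrating why unbiased rounding preserves convergence where naive truncation can introduce systematic bias.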

  17. Concurrent enhancement of percolation and synchronization in adaptive networks

    PubMed Central

    Eom, Young-Ho; Boccaletti, Stefano; Caldarelli, Guido

    2016-01-01

    Co-evolutionary adaptive mechanisms are not only ubiquitous in nature, but also beneficial for the functioning of a variety of systems. We here consider an adaptive network of oscillators with a stochastic, fitness-based, rule of connectivity, and show that it self-organizes from fragmented and incoherent states to connected and synchronized ones. The synchronization and percolation are associated to abrupt transitions, and they are concurrently (and significantly) enhanced as compared to the non-adaptive case. Finally we provide evidence that only partial adaptation is sufficient to determine these enhancements. Our study, therefore, indicates that inclusion of simple adaptive mechanisms can efficiently describe some emergent features of networked systems’ collective behaviors, and suggests also self-organized ways to control synchronization and percolation in natural and social systems. PMID:27251577
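
    A minimal sketch of this kind of co-evolution (an illustrative toy, not the authors' exact model): Kuramoto oscillators on a graph that grows one link at a time, where each new link preferentially joins pairs that are already nearly in phase - a fitness-based stochastic rule. As links accumulate, the network connects up and the phase-coherence order parameter r rises. All parameter values below are assumptions.

```python
import cmath
import math
import random

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r near 1 means phase synchrony."""
    return abs(sum(cmath.exp(1j * t) for t in theta)) / len(theta)

def adaptive_kuramoto(n=30, k=4.0, dt=0.05, steps_per_link=20,
                      n_links=200, seed=3):
    """Grow a network link by link with a fitness-based attachment rule,
    integrating Kuramoto phase dynamics (Euler method) between additions.
    Returns the history of the order parameter r."""
    rng = random.Random(seed)
    omega = [rng.uniform(-0.3, 0.3) for _ in range(n)]   # natural frequencies
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    edges = set()
    history = [order_parameter(theta)]
    for _ in range(n_links):
        # sample a few candidate pairs; keep the most in-phase one (fitness)
        candidates = []
        while len(candidates) < 5:
            i, j = rng.randrange(n), rng.randrange(n)
            if i != j and (min(i, j), max(i, j)) not in edges:
                candidates.append((min(i, j), max(i, j)))
        best = max(candidates,
                   key=lambda e: math.cos(theta[e[0]] - theta[e[1]]))
        edges.add(best)
        for _ in range(steps_per_link):
            coupling = [0.0] * n
            for a, b in edges:
                s = math.sin(theta[b] - theta[a])
                coupling[a] += s
                coupling[b] -= s
            theta = [t + dt * (w + (k / n) * c)
                     for t, w, c in zip(theta, omega, coupling)]
        history.append(order_parameter(theta))
    return history
```

Running this, r climbs from the incoherent value of a random phase configuration toward coherence as the growing, fitness-shaped graph percolates, which is the qualitative self-organization the abstract describes.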

  18. Information jet: Handling noisy big data from weakly disconnected network

    NASA Astrophysics Data System (ADS)

    Aurongzeb, Deeder

    Sudden aggregations (information jets) of large amounts of data are ubiquitous in connected social networks, driven by sudden interacting and non-interacting events, network security attacks, online sales channels, etc. Clustering of information jets based on time-series analysis and graph theory is not new, but little work has been done to connect them with particle-jet statistics. We show that pre-clustering based on context can eliminate soft networks (networks of information), which is critical for minimizing the time to compute results from noisy big data. We show the difference between stochastic gradient boosting and time-series graph clustering. For disconnected, higher-dimensional information jets, we use the Kallenberg representation theorem (Kallenberg, 2005, arXiv:1401.1137) to identify and eliminate jet similarities from dense or sparse graphs.

  19. Delayed excitatory and inhibitory feedback shape neural information transmission

    NASA Astrophysics Data System (ADS)

    Chacron, Maurice J.; Longtin, André; Maler, Leonard

    2005-11-01

    Feedback circuitry with conduction and synaptic delays is ubiquitous in the nervous system. Yet the effects of delayed feedback on sensory processing of natural signals are poorly understood. This study explores the consequences of delayed excitatory and inhibitory feedback inputs on the processing of sensory information. We show, through numerical simulations and theory, that excitatory and inhibitory feedback can alter the firing frequency response of stochastic neurons in opposite ways by creating dynamical resonances, which in turn lead to information resonances (i.e., increased information transfer for specific ranges of input frequencies). The resonances are created at the expense of decreased information transfer in other frequency ranges. Using linear response theory for stochastically firing neurons, we explain how feedback signals shape the neural transfer function for a single neuron as a function of network size. We also find that balanced excitatory and inhibitory feedback can further enhance information tuning while maintaining a constant mean firing rate. Finally, we apply this theory to in vivo experimental data from weakly electric fish in which the feedback loop can be opened. We show that it qualitatively predicts the observed effects of inhibitory feedback. Our study of feedback excitation and inhibition reveals a possible mechanism by which optimal processing may be achieved over selected frequency ranges.
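
    The basic setting can be sketched with a generic noisy leaky integrate-and-fire neuron whose own spike train is fed back after a fixed conduction delay (an illustrative sketch, not the authors' specific model or parameters; all values are assumptions). Delayed inhibitory feedback lowers the mean drive and hence the firing rate, the kind of feedback-shaped response the study analyzes.

```python
import math
import random
from collections import deque

def lif_firing_rate(g_fb=0.0, delay_steps=100, n_steps=100000, dt=0.1,
                    mu=1.2, tau=10.0, sigma=0.3, seed=2):
    """Euler-Maruyama simulation of a noisy leaky integrate-and-fire neuron
    whose spikes are fed back as current pulses after a fixed delay.
    g_fb < 0 models delayed inhibitory feedback; returns spikes per unit time."""
    rng = random.Random(seed)
    v, spikes = 0.0, 0
    delayed = deque([0.0] * delay_steps, maxlen=delay_steps)  # delay line
    for _ in range(n_steps):
        fb = g_fb * delayed[0]            # spike emitted delay_steps ago
        v += (dt * (-v + mu + fb) / tau
              + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        fired = 1.0 if v >= 1.0 else 0.0
        if fired:
            v = 0.0                       # reset after a spike
            spikes += 1
        delayed.append(fired)
    return spikes / (n_steps * dt)
```

Comparing `lif_firing_rate(0.0)` with `lif_firing_rate(-50.0)` shows the delayed inhibition suppressing the firing rate; extending the sketch with spectral analysis of the spike train would expose the delay-induced resonances discussed in the abstract.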

  20. Analysis and Reduction of Complex Networks Under Uncertainty.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghanem, Roger G

    2014-07-31

    This effort was a collaboration with Youssef Marzouk of MIT, Omar Knio of Duke University (at the time at Johns Hopkins University) and Habib Najm of Sandia National Laboratories. The objective of this effort was to develop the mathematical and algorithmic capacity to analyze complex networks under uncertainty. Of interest were chemical reaction networks and smart grid networks. The statements of work for USC focused on the development of stochastic reduced models for uncertain networks. The USC team was led by Professor Roger Ghanem and consisted of one graduate student and a postdoc. The contributions completed by the USC team consisted of 1) methodology and algorithms to address the eigenvalue problem, a problem of significance in the stability of networks under stochastic perturbations, 2) methodology and algorithms to characterize probability measures on graph structures with random flows. This is an important problem in characterizing random demand (encountered in smart grid) and random degradation (encountered in infrastructure systems), as well as modeling errors in Markov Chains (with ubiquitous relevance!). 3) methodology and algorithms for treating inequalities in uncertain systems. This is an important problem in the context of models for material failure and network flows under uncertainty where conditions of failure or flow are described in the form of inequalities between the state variables.

  1. Is cross-modal integration of emotional expressions independent of attentional resources?

    PubMed

    Vroomen, J; Driver, J; de Gelder, B

    2001-12-01

    In this study, we examined whether integration of visual and auditory information about emotions requires limited attentional resources. Subjects judged whether a voice expressed happiness or fear, while trying to ignore a concurrently presented static facial expression. As an additional task, the subjects had to add two numbers together rapidly (Experiment 1), count the occurrences of a target digit in a rapid serial visual presentation (Experiment 2), or judge the pitch of a tone as high or low (Experiment 3). The visible face had an impact on judgments of the emotion of the heard voice in all the experiments. This cross-modal effect was independent of whether or not the subjects performed a demanding additional task. This suggests that integration of visual and auditory information about emotions may be a mandatory process, unconstrained by attentional resources.

  2. Cross-modal associations in synaesthesia: Vowel colours in the ear of the beholder

    PubMed Central

    Moos, Anja; Smith, Rachel; Miller, Sam R.; Simmons, David R.

    2014-01-01

    Human speech conveys many forms of information, but for some exceptional individuals (synaesthetes), listening to speech sounds can automatically induce visual percepts such as colours. In this experiment, grapheme–colour synaesthetes and controls were asked to assign colours, or shades of grey, to different vowel sounds. We then investigated whether the acoustic content of these vowel sounds influenced participants' colour and grey-shade choices. We found that both colour and grey-shade associations varied systematically with vowel changes. The colour effect was significant for both participant groups, but significantly stronger and more consistent for synaesthetes. Because not all vowel sounds that we used are “translatable” into graphemes, we conclude that acoustic–phonetic influences co-exist with established graphemic influences in the cross-modal correspondences of both synaesthetes and non-synaesthetes. PMID:25469218

  3. Perceptual learning in temporal discrimination: asymmetric cross-modal transfer from audition to vision.

    PubMed

    Bratzke, Daniel; Seifried, Tanja; Ulrich, Rolf

    2012-08-01

    This study assessed possible cross-modal transfer effects of training in a temporal discrimination task from vision to audition as well as from audition to vision. We employed a pretest-training-post-test design including a control group that performed only the pretest and the post-test. Trained participants showed better discrimination performance with their trained interval than the control group. This training effect transferred to the other modality only for those participants who had been trained with auditory stimuli. The present study thus demonstrates for the first time that training on temporal discrimination within the auditory modality can transfer to the visual modality but not vice versa. This finding represents a novel illustration of auditory dominance in temporal processing and is consistent with the notion that time is primarily encoded in the auditory system.

  4. Phonological encoding in speech-sound disorder: evidence from a cross-modal priming experiment.

    PubMed

    Munson, Benjamin; Krause, Miriam O P

    2017-05-01

    Psycholinguistic models of language production provide a framework for determining the locus of language breakdown that leads to speech-sound disorder (SSD) in children. To examine whether children with SSD differ from their age-matched peers with typical speech and language development (TD) in the ability to phonologically encode lexical items that have been accessed from memory. Thirty-six children (18 with TD, 18 with SSD) viewed pictures while listening to interfering words (IW) or a non-linguistic auditory stimulus presented over headphones either 150 ms before, concurrent with or 150 ms after picture presentation. The phonological similarity of the IW and the pictures' names varied. Picture-naming latency, accuracy and duration were tallied. All children named pictures more quickly in the presence of an IW identical to the picture's name than in the other conditions. At the +150 ms stimulus onset asynchrony, pictures were named more quickly when the IW shared phonemes with the picture's name than when they were phonologically unrelated to the picture's name. The size of this effect was similar for children with SSD and children with TD. Variation in the magnitude of inhibition and facilitation on cross-modal priming tasks across children was more strongly affected by the size of the expressive and receptive lexicons than by speech-production accuracy. Results suggest that SSD is not associated with reduced phonological encoding ability, at least as it is reflected by cross-modal naming tasks.

  5. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a role of modulation in the audiovisual integration.

  6. FMRI investigation of cross-modal interactions in beat perception: Audition primes vision, but not vice versa

    PubMed Central

    Grahn, Jessica A.; Henry, Molly J.; McAuley, J. Devin

    2011-01-01

    How we measure time and integrate temporal cues from different sensory modalities are fundamental questions in neuroscience. Sensitivity to a “beat” (such as that routinely perceived in music) differs substantially between auditory and visual modalities. Here we examined beat sensitivity in each modality, and examined cross-modal influences, using functional magnetic resonance imaging (fMRI) to characterize brain activity during perception of auditory and visual rhythms. In separate fMRI sessions, participants listened to auditory sequences or watched visual sequences. The order of auditory and visual sequence presentation was counterbalanced so that cross-modal order effects could be investigated. Participants judged whether sequences were speeding up or slowing down, and the pattern of tempo judgments was used to derive a measure of sensitivity to an implied beat. As expected, participants were less sensitive to an implied beat in visual sequences than in auditory sequences. However, visual sequences produced a stronger sense of beat when preceded by auditory sequences with identical temporal structure. Moreover, increases in brain activity were observed in the bilateral putamen for visual sequences preceded by auditory sequences when compared to visual sequences without prior auditory exposure. No such order-dependent differences (behavioral or neural) were found for the auditory sequences. The results provide further evidence for the role of the basal ganglia in internal generation of the beat and suggest that an internal auditory rhythm representation may be activated during visual rhythm perception. PMID:20858544

  7. Cross-modal attention influences auditory contrast sensitivity: Decreasing visual load improves auditory thresholds for amplitude- and frequency-modulated sounds.

    PubMed

    Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G

    2017-03-01

    We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding fewer attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower (that is, auditory sensitivity was improved) for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.

  8. Distinct Olfactory Cross-Modal Effects on the Human Motor System

    PubMed Central

    Rossi, Simone; De Capua, Alberto; Pasqualetti, Patrizio; Ulivelli, Monica; Falzarano, Vincenzo; Bartalini, Sabina; Passero, Stefano; Nuti, Daniele

    2008-01-01

    Background Converging evidence indicates that action observation and action-related sounds activate the human motor system cross-modally. Since olfaction, the most ancestral sense, may have behavioural consequences on human activities, we causally investigated by transcranial magnetic stimulation (TMS) whether food odour could additionally facilitate the human motor system during the observation of grasping of objects with alimentary valence, and the degree of specificity of these effects. Methodology/Principal Findings In a repeated-measure block design, carried out on 24 healthy individuals participating in three different experiments, we show that sniffing alimentary odorants immediately increases the motor potentials evoked in hand muscles by TMS of the motor cortex. This effect was odorant-specific and was absent when subjects were presented with odorants including a potentially noxious trigeminal component. The smell-induced corticospinal facilitation of hand muscles during observation of grasping was an additive effect superimposed on that induced by the mere observation of grasping actions for food or non-food objects. The odour-induced motor facilitation took place only in the case of congruence between the sniffed odour and the observed grasped food, and specifically involved the muscle acting as prime mover for hand/finger shaping in the observed action. Conclusions/Significance Complex olfactory cross-modal effects on the human corticospinal system are physiologically demonstrable. They are odorant-specific and, depending on the experimental context, muscle- and action-specific as well. This finding implies potential new diagnostic and rehabilitative applications. PMID:18301777

  9. Visual cortex activation in late-onset, Braille naive blind individuals: an fMRI study during semantic and phonological tasks with heard words.

    PubMed

    Burton, Harold; McLaren, Donald G

    2006-01-09

    Visual cortex activity in the blind has been shown in Braille literate people, which raises the question of whether Braille literacy influences cross-modal reorganization. We used fMRI to examine visual cortex activation during semantic and phonological tasks with auditory presentation of words in two late-onset blind individuals who lacked Braille literacy. Multiple visual cortical regions were activated in the Braille naive individuals. Positive BOLD responses were noted in lower tier visuotopic (e.g., V1, V2, VP, and V3) and several higher tier visual areas (e.g., V4v, V8, and BA 37). Activity was more extensive and cross-correlation magnitudes were greater during the semantic compared to the phonological task. These results with Braille naive individuals plausibly suggest that visual deprivation alone induces visual cortex reorganization. Cross-modal reorganization of lower tier visual areas may be recruited by developing skills in attending to selected non-visual inputs (e.g., Braille literacy, enhanced auditory skills). Such learning might strengthen remote connections with multisensory cortical areas. Of necessity, the Braille naive participants must attend to auditory stimulation for language. We hypothesize that learning to attend to non-visual inputs probably strengthens the remaining active synapses following visual deprivation and thereby increases cross-modal activation of lower tier visual areas when performing highly demanding non-visual tasks, of which reading Braille is just one example.

  10. Visual cortex activation in late-onset, Braille naive blind individuals: An fMRI study during semantic and phonological tasks with heard words

    PubMed Central

    Burton, Harold; McLaren, Donald G.

    2013-01-01

    Visual cortex activity in the blind has been shown in Braille literate people, which raises the question of whether Braille literacy influences cross-modal reorganization. We used fMRI to examine visual cortex activation during semantic and phonological tasks with auditory presentation of words in two late-onset blind individuals who lacked Braille literacy. Multiple visual cortical regions were activated in the Braille naive individuals. Positive BOLD responses were noted in lower tier visuotopic (e.g., V1, V2, VP, and V3) and several higher tier visual areas (e.g., V4v, V8, and BA 37). Activity was more extensive and cross-correlation magnitudes were greater during the semantic compared to the phonological task. These results with Braille naive individuals plausibly suggest that visual deprivation alone induces visual cortex reorganization. Cross-modal reorganization of lower tier visual areas may be recruited by developing skills in attending to selected non-visual inputs (e.g., Braille literacy, enhanced auditory skills). Such learning might strengthen remote connections with multisensory cortical areas. Of necessity, the Braille naive participants must attend to auditory stimulation for language. We hypothesize that learning to attend to non-visual inputs probably strengthens the remaining active synapses following visual deprivation and thereby increases cross-modal activation of lower tier visual areas when performing highly demanding non-visual tasks, of which reading Braille is just one example. PMID:16198053

  11. Dorsolateral prefrontal cortex bridges bilateral primary somatosensory cortices during cross-modal working memory.

    PubMed

    Zhao, Di; Ku, Yixuan

    2018-05-01

    Neural activity in the dorsolateral prefrontal cortex (DLPFC) has been suggested to integrate information from distinct sensory areas. However, how the DLPFC interacts with the bilateral primary somatosensory cortices (SIs) in tactile-visual cross-modal working memory has not yet been established. In the present study, we applied single-pulse transcranial magnetic stimulation (sp-TMS) over the contralateral DLPFC and bilateral SIs of human participants at various time points, while they performed a tactile-visual delayed matching-to-sample task with a 2-second delay. sp-TMS over the contralateral DLPFC or the contralateral SI at either a sensory encoding stage [i.e. 100 ms after the onset of a vibrotactile sample stimulus (200-ms duration)] or an early maintenance stage (i.e. 300 ms after the onset) significantly impaired the accuracy of task performance; sp-TMS over the contralateral DLPFC or the ipsilateral SI at a late maintenance stage (1600 ms and 1900 ms) also significantly disrupted the performance. Furthermore, at 300 ms after the onset of the vibrotactile sample stimulus, there was a significant correlation between the deteriorating effects of sp-TMS over the contralateral SI and the contralateral DLPFC. These results imply that the DLPFC and the bilateral SIs play causal roles at distinct stages of cross-modal working memory, with the contralateral DLPFC communicating with the contralateral SI in the early delay and cooperating with the ipsilateral SI in the late delay. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Overestimation of threat from neutral faces and voices in social anxiety.

    PubMed

    Peschard, Virginie; Philippot, Pierre

    2017-12-01

    Social anxiety (SA) is associated with a tendency to interpret social information in a more threatening manner. Most of the research in SA has focused on unimodal exploration (mostly based on facial expressions), thus neglecting the ubiquity of cross-modality. To fill this gap, the present study sought to explore whether SA influences the interpretation of facial and vocal expressions presented separately or jointly. Twenty-five high socially anxious (HSA) and 29 low socially anxious (LSA) participants completed a two-alternative forced-choice emotion identification task consisting of angry and neutral expressions conveyed by faces, voices or combined faces and voices. Participants had to identify the emotion (angry or neutral) of the presented cues as quickly and precisely as possible. Our results showed that, compared to LSA, HSA individuals show a higher propensity to misattribute anger to neutral expressions independent of cue modality and despite preserved decoding accuracy. We also found a cross-modal facilitation effect at the level of accuracy (i.e., higher accuracy in the bimodal condition compared to unimodal ones). However, this effect was not moderated by SA. Although the HSA group reached clinical cut-off scores on the Liebowitz Social Anxiety Scale, one limitation is that we did not administer diagnostic interviews. Upcoming studies may want to test whether these results can be generalized to a clinical population. These findings highlight the usefulness of a cross-modal perspective to probe the specificity of biases in SA. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Cross-modal Association between Auditory and Visuospatial Information in Mandarin Tone Perception in Noise by Native and Non-native Perceivers.

    PubMed

    Hannah, Beverly; Wang, Yue; Jongman, Allard; Sereno, Joan A; Cao, Jiguo; Nie, Yunlong

    2017-01-01

    Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception. Native English as well as Mandarin perceivers identified tones embedded in noise with either congruent or incongruent Auditory-Facial (AF) and Auditory-FacialGestural (AFG) inputs. Native Mandarin results showed the expected ceiling-level performance in the congruent AF and AFG conditions. In the incongruent conditions, while AF identification was primarily auditory-based, AFG identification was partially based on gestures, demonstrating the use of gestures as valid cues in tone identification. The English perceivers' performance was poor in the congruent AF condition, but improved significantly in AFG. While the incongruent AF identification showed some reliance on facial information, incongruent AFG identification relied more on gestural than auditory-facial information. These results indicate positive effects of facial and especially gestural input on non-native tone perception, suggesting that cross-modal (visuospatial) resources can be recruited to aid auditory perception when phonetic demands are high. The current findings may inform patterns of tone acquisition and development, suggesting how multi-modal speech enhancement principles may be applied to facilitate speech learning.

  14. Cross-modal integration of multimodal courtship signals in a wolf spider.

    PubMed

    Kozak, Elizabeth C; Uetz, George W

    2016-11-01

    Cross-modal integration, i.e., cognitive binding of information transmitted in more than one signal mode, is important in animal communication, especially in complex, noisy environments in which signals of many individuals may overlap. Males of the brush-legged wolf spider Schizocosa ocreata (Hentz) use multimodal communication (visual and vibratory signals) in courtship. Because females may be courted by multiple males at the same time, they must evaluate co-occurring male signals originating from separate locations. Moreover, due to environmental complexity, individual components of male signals may be occluded, altering detection of sensory modes by females. We used digital multimodal playback to investigate the effect of spatial and temporal disparity of visual and vibratory components of male courtship signals on female mate choice. Females were presented with male courtship signals with components that varied in spatial location or temporal synchrony. Females responded to spatially disparate signal components separated by ≥90° as though they were separate sources, but responded to disparate signals separated by ≤45° as though they originated from a single source. Responses were seen as evidence for cross-modal integration. Temporal disparity (asynchrony) in signal modes also affected female receptivity. Females responded more to male signals when visual and vibratory modes were in synchrony than either out-of-synch or interleaved/alternated. These findings are consistent with those seen in both humans and other vertebrates and provide insight into how animals overcome communication challenges inherent in a complex environment.

  15. Cross-modal reorganization in cochlear implant users: Auditory cortex contributes to visual face processing.

    PubMed

    Stropahl, Maren; Plotz, Karsten; Schönfeld, Rüdiger; Lenarz, Thomas; Sandmann, Pascale; Yovel, Galit; De Vos, Maarten; Debener, Stefan

    2015-11-01

    There is converging evidence that the auditory cortex takes over visual functions during a period of auditory deprivation. A residual pattern of cross-modal take-over may prevent the auditory cortex from adapting to the restored sensory input delivered by a cochlear implant (CI) and limit speech intelligibility with a CI. The aim of the present study was to investigate whether visual face processing in CI users activates auditory cortex and whether this has adaptive or maladaptive consequences. High-density electroencephalogram data were recorded from CI users (n=21) and age-matched normal hearing controls (n=21) performing a face versus house discrimination task. Lip reading and face recognition abilities were measured as well as speech intelligibility. Evaluation of event-related potential (ERP) topographies revealed significant group differences over occipito-temporal scalp regions. Distributed source analysis identified significantly higher activation in the right auditory cortex for CI users compared to NH controls, confirming visual take-over. Lip reading skills were significantly enhanced in the CI group and appeared to be particularly better after a longer duration of deafness, while face recognition was not significantly different between groups. However, auditory cortex activation in CI users was positively related to face recognition abilities. Our results confirm a cross-modal reorganization for ecologically valid visual stimuli in CI users. Furthermore, they suggest that residual take-over, which can persist even after adaptation to a CI, is not necessarily maladaptive. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Cross-modal Association between Auditory and Visuospatial Information in Mandarin Tone Perception in Noise by Native and Non-native Perceivers

    PubMed Central

    Hannah, Beverly; Wang, Yue; Jongman, Allard; Sereno, Joan A.; Cao, Jiguo; Nie, Yunlong

    2017-01-01

    Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception. Native English as well as Mandarin perceivers identified tones embedded in noise with either congruent or incongruent Auditory-Facial (AF) and Auditory-FacialGestural (AFG) inputs. Native Mandarin results showed the expected ceiling-level performance in the congruent AF and AFG conditions. In the incongruent conditions, while AF identification was primarily auditory-based, AFG identification was partially based on gestures, demonstrating the use of gestures as valid cues in tone identification. The English perceivers’ performance was poor in the congruent AF condition, but improved significantly in AFG. While the incongruent AF identification showed some reliance on facial information, incongruent AFG identification relied more on gestural than auditory-facial information. These results indicate positive effects of facial and especially gestural input on non-native tone perception, suggesting that cross-modal (visuospatial) resources can be recruited to aid auditory perception when phonetic demands are high. The current findings may inform patterns of tone acquisition and development, suggesting how multi-modal speech enhancement principles may be applied to facilitate speech learning. PMID:29255435

  17. Music-color associations are mediated by emotion.

    PubMed

    Palmer, Stephen E; Schloss, Karen B; Xu, Zoe; Prado-León, Lilia R

    2013-05-28

    Experimental evidence demonstrates robust cross-modal matches between music and colors that are mediated by emotional associations. US and Mexican participants chose colors that were most/least consistent with 18 selections of classical orchestral music by Bach, Mozart, and Brahms. In both cultures, faster music in the major mode produced color choices that were more saturated, lighter, and yellower whereas slower, minor music produced the opposite pattern (choices that were desaturated, darker, and bluer). There were strong correlations (0.89 < r < 0.99) between the emotional associations of the music and those of the colors chosen to go with the music, supporting an emotional mediation hypothesis in both cultures. Additional experiments showed similarly robust cross-modal matches from emotionally expressive faces to colors and from music to emotionally expressive faces. These results provide further support that music-to-color associations are mediated by common emotional associations.
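    The mediation claim above rests on correlating the emotion ratings of the music with the emotion ratings of the chosen colors. A minimal sketch of that computation, using entirely invented ratings (the values, scale, and variable names below are illustrative assumptions, not the study's data):

    ```python
    from statistics import mean

    # Hypothetical emotion ratings (e.g., on a happy-sad scale) for five music
    # selections and for the colors participants matched to them.
    music_emotion = [2.1, -1.3, 0.4, 1.8, -0.9]
    color_emotion = [1.9, -1.1, 0.6, 1.5, -1.2]

    def pearson(x, y):
        """Pearson correlation coefficient between two equal-length sequences."""
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        var_x = sum((a - mx) ** 2 for a in x)
        var_y = sum((b - my) ** 2 for b in y)
        return cov / (var_x * var_y) ** 0.5

    r = pearson(music_emotion, color_emotion)
    print(round(r, 2))
    ```

    Correlations in the 0.89-0.99 range reported by the study correspond to near-collinear rating profiles like the toy vectors above.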

  18. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli

    PubMed Central

    Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.

    2009-01-01

    The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778

  19. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli.

    PubMed

    Störmer, Viola S; McDonald, John J; Hillyard, Steven A

    2009-12-29

    The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex.

  20. Pathways From Toddler Information Processing to Adolescent Lexical Proficiency.

    PubMed

    Rose, Susan A; Feldman, Judith F; Jankowski, Jeffery J

    2015-01-01

    This study examined the relation of 3-year core information-processing abilities to lexical growth and development. The core abilities covered four domains: memory, representational competence (cross-modal transfer), processing speed, and attention. Lexical proficiency was assessed at 3 and 13 years with the Peabody Picture Vocabulary Test (PPVT) and verbal fluency. The sample (N = 128) consisted of 43 preterms (< 1750 g) and 85 full-terms. Structural equation modeling indicated concurrent relations of toddler information processing and language proficiency and, independent of stability in language, direct predictive links between (a) 3-year cross-modal ability and 13-year PPVT and (b) 3-year processing speed and both 13-year measures, PPVT and verbal fluency. Thus, toddler information processing was related to growth in lexical proficiency from 3 to 13 years. © 2015 The Authors. Child Development © 2015 Society for Research in Child Development, Inc.

  1. Mental Imagery Induces Cross-Modal Sensory Plasticity and Changes Future Auditory Perception.

    PubMed

    Berger, Christopher C; Ehrsson, H Henrik

    2018-04-01

    Can what we imagine in our minds change how we perceive the world in the future? A continuous process of multisensory integration and recalibration is responsible for maintaining a correspondence between the senses (e.g., vision, touch, audition) and, ultimately, a stable and coherent perception of our environment. This process depends on the plasticity of our sensory systems. The so-called ventriloquism aftereffect-a shift in the perceived localization of sounds presented alone after repeated exposure to spatially mismatched auditory and visual stimuli-is a clear example of this type of plasticity in the audiovisual domain. In a series of six studies with 24 participants each, we investigated an imagery-induced ventriloquism aftereffect in which imagining a visual stimulus elicits the same frequency-specific auditory aftereffect as actually seeing one. These results demonstrate that mental imagery can recalibrate the senses and induce the same cross-modal sensory plasticity as real sensory stimuli.

  2. Cross-modal discrepancies in coarticulation and the integration of speech information: the McGurk effect with mismatched vowels.

    PubMed

    Green, K P; Gerdeman, A

    1995-12-01

    Two experiments examined the impact of a discrepancy in vowel quality between the auditory and visual modalities on the perception of a syllable-initial consonant. One experiment examined the effect of such a discrepancy on the McGurk effect by cross-dubbing auditory /bi/ tokens onto visual /ga/ articulations (and vice versa). A discrepancy in vowel category significantly reduced the magnitude of the McGurk effect and changed the pattern of responses. A 2nd experiment investigated the effect of such a discrepancy on the speeded classification of the initial consonant. Mean reaction times to classify the tokens increased when the vowel information was discrepant between the 2 modalities but not when the vowel information was consistent. These experiments indicate that the perceptual system is sensitive to cross-modal discrepancies in the coarticulatory information between a consonant and its following vowel during phonetic perception.

  3. Learning Across Senses: Cross-Modal Effects in Multisensory Statistical Learning

    PubMed Central

    Mitchel, Aaron D.; Weiss, Daniel J.

    2014-01-01

    It is currently unknown whether statistical learning is supported by modality-general or modality-specific mechanisms. One issue within this debate concerns the independence of learning in one modality from learning in other modalities. In the present study, the authors examined the extent to which statistical learning across modalities is independent by simultaneously presenting learners with auditory and visual streams. After establishing baseline rates of learning for each stream independently, they systematically varied the amount of audiovisual correspondence across 3 experiments. They found that learners were able to segment both streams successfully only when the boundaries of the audio and visual triplets were in alignment. This pattern of results suggests that learners are able to extract multiple statistical regularities across modalities provided that there is some degree of cross-modal coherence. They discuss the implications of their results in light of recent claims that multisensory statistical learning is guided by modality-independent mechanisms. PMID:21574745

  4. MEG demonstrates a supra-additive response to facial and vocal emotion in the right superior temporal sulcus.

    PubMed

    Hagan, Cindy C; Woods, Will; Johnson, Sam; Calder, Andrew J; Green, Gary G R; Young, Andrew W

    2009-11-24

    An influential neural model of face perception suggests that the posterior superior temporal sulcus (STS) is sensitive to those aspects of faces that produce transient visual changes, including facial expression. Other researchers note that recognition of expression involves multiple sensory modalities and suggest that the STS also may respond to crossmodal facial signals that change transiently. Indeed, many studies of audiovisual (AV) speech perception show STS involvement in AV speech integration. Here we examine whether these findings extend to AV emotion. We used magnetoencephalography to measure the neural responses of participants as they viewed and heard emotionally congruent fear and minimally congruent neutral face and voice stimuli. We demonstrate significant supra-additive responses (i.e., where AV > [unimodal auditory + unimodal visual]) in the posterior STS within the first 250 ms for emotionally congruent AV stimuli. These findings show a role for the STS in processing crossmodal emotive signals.
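    The supra-additivity criterion described above is simply a comparison of the multisensory response against the sum of the unimodal responses, AV > A + V. A trivial sketch (the response amplitudes below are invented for illustration, not the paper's data):

    ```python
    def supra_additive(av, a, v):
        """Supra-additivity criterion used in multisensory studies: the
        audiovisual response exceeds the sum of the two unimodal responses."""
        return av > a + v

    # Hypothetical evoked-response amplitudes (arbitrary units):
    print(supra_additive(14.0, 6.0, 5.5))  # congruent pairing: 14.0 > 11.5
    print(supra_additive(9.0, 6.0, 5.5))   # weak pairing: 9.0 < 11.5
    ```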

  5. Kinesthetic alexia due to left parietal lobe lesions.

    PubMed

    Ihori, Nami; Kawamura, Mitsuru; Araki, Shigeo; Kawachi, Juro

    2002-01-01

    To investigate the neuropsychological mechanisms of kinesthetic alexia, we asked 7 patients who showed kinesthetic alexia with preserved visual reading after damage to the left parietal region to perform tasks consisting of kinesthetic written reproduction (writing down the same letter as the kinesthetic stimulus), kinesthetic reading aloud, visual written reproduction (copying letters), and visual reading aloud of hiragana (Japanese phonograms). We compared the performance in these tasks and the lesion sites in each patient. The results suggested that deficits in any one of the following functions might cause kinesthetic alexia: (1) the retrieval of kinesthetic images (motor engrams) of characters from kinesthetic stimuli, (2) kinesthetic images themselves, (3) access to cross-modal association from kinesthetic images, and (4) cross-modal association itself (retrieval of auditory and visual images from kinesthetic images of characters). Each of these factors seemed to be related to different lesion sites in the left parietal lobe. Copyright 2002 S. Karger AG, Basel

  6. Ground cross-modal impedance as a tool for analyzing ground/plate interaction and ground wave propagation.

    PubMed

    Grau, L; Laulagnet, B

    2015-05-01

    An analytical approach is investigated to model ground-plate interaction based on modal decomposition and the two-dimensional Fourier transform. A finite rectangular plate subjected to flexural vibration is coupled with the ground and modeled with the Kirchhoff hypothesis. A Navier equation represents the stratified ground, assumed infinite in the x- and y-directions and free at the top surface. To obtain an analytical solution, modal decomposition is applied to the structure and a Fourier transform is applied to the ground. The result is a new tool for analyzing ground-plate interaction: the ground cross-modal impedance. It allows quantifying the added stiffness, added mass, and added damping from the ground to the structure. Similarity with the parallel acoustic problem is highlighted. A comparison between theory and experiment shows good matching. Finally, specific cases are investigated, notably the influence of layer depth on plate vibration.
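    Schematically, and with notation assumed here rather than taken from the paper, the construction reads: expand the plate displacement on its in-vacuo modes, and let the ground reaction enter the modal equations through a cross-modal impedance matrix:

    ```latex
    w(x,y,\omega) = \sum_{m,n} a_{mn}(\omega)\,\phi_{mn}(x,y), \qquad
    m_{mn}\left(\omega_{mn}^{2} - \omega^{2}\right) a_{mn}
      + \mathrm{j}\omega \sum_{p,q} Z_{mn,pq}(\omega)\, a_{pq} = F_{mn}
    ```

    The diagonal entries of Z act as added stiffness, mass, and damping on each plate mode, while the off-diagonal entries couple different modes through the ground; "cross-modal" here refers to modes of vibration, not sensory modalities as in the surrounding records.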

  7. A Psychological Experiment on the Correspondence between Colors and Voiced Vowels in Non-synesthetes

    NASA Astrophysics Data System (ADS)

    Miyahara, Tomoko; Koda, Ai; Sekiguchi, Rikuko; Amemiya, Toshihiko

    In this study, we investigated the nature of cross-modal associations between colors and vowels. In Experiment 1, we examined the patterns of synesthetic correspondence between colors and vowels in a perceptual similarity experiment. The results were as follows: red was chosen for /a/, yellow for /i/, and blue for /o/ significantly more often than for any other vowel. Interestingly, this pattern of correspondence is similar to the pattern of colored hearing reported by synesthetes. In Experiment 2, we investigated the robustness of these cross-modal associations using an implicit association test (IAT). A clear congruence effect was found: participants responded faster in congruent conditions (/i/ and yellow, /o/ and blue) than in incongruent conditions (/i/ and blue, /o/ and yellow). This result suggests that the weak synesthesia between vowels and colors in non-synesthetes is not a matter of mere conscious choice, but reflects some underlying implicit associations.
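    The congruence effect in Experiment 2 is a difference of mean reaction times between incongruent and congruent trials. A minimal sketch with invented trial data (the condition labels and millisecond values are illustrative assumptions, not the study's measurements):

    ```python
    from statistics import mean

    # Hypothetical (condition, reaction time in ms) trials from an IAT-style task.
    rts = [
        ("congruent", 510), ("congruent", 500), ("congruent", 520),
        ("incongruent", 600), ("incongruent", 590), ("incongruent", 610),
    ]

    def congruence_effect(trials):
        """Mean RT difference, incongruent minus congruent: a positive value
        means faster responses when vowel and color pairings are congruent."""
        congruent = [rt for cond, rt in trials if cond == "congruent"]
        incongruent = [rt for cond, rt in trials if cond == "incongruent"]
        return mean(incongruent) - mean(congruent)

    print(congruence_effect(rts))  # → 90
    ```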

  8. Music–color associations are mediated by emotion

    PubMed Central

    Palmer, Stephen E.; Schloss, Karen B.; Xu, Zoe; Prado-León, Lilia R.

    2013-01-01

    Experimental evidence demonstrates robust cross-modal matches between music and colors that are mediated by emotional associations. US and Mexican participants chose colors that were most/least consistent with 18 selections of classical orchestral music by Bach, Mozart, and Brahms. In both cultures, faster music in the major mode produced color choices that were more saturated, lighter, and yellower whereas slower, minor music produced the opposite pattern (choices that were desaturated, darker, and bluer). There were strong correlations (0.89 < r < 0.99) between the emotional associations of the music and those of the colors chosen to go with the music, supporting an emotional mediation hypothesis in both cultures. Additional experiments showed similarly robust cross-modal matches from emotionally expressive faces to colors and from music to emotionally expressive faces. These results provide further support that music-to-color associations are mediated by common emotional associations. PMID:23671106

  9. When visual perception causes feeling: enhanced cross-modal processing in grapheme-color synesthesia.

    PubMed

    Weiss, Peter H; Zilles, Karl; Fink, Gereon R

    2005-12-01

    In synesthesia, stimulation of one sensory modality (e.g., hearing) triggers a percept in another, non-stimulated sensory modality (e.g., vision). Likewise, perception of a form (e.g., a letter) may induce a color percept (i.e., grapheme-color synesthesia). To date, the neural mechanisms underlying synesthesia remain to be elucidated. Using fMRI, while controlling for surface color processing, we found enhanced activity in the left intraparietal cortex during the experience of grapheme-color synesthesia (n = 9). In contrast, the perception of surface color per se activated the color centers in the fusiform gyrus bilaterally. The data support theoretical accounts that grapheme-color synesthesia may originate from enhanced cross-modal binding of form and color. A mismatch between the surface color and the synesthetically felt color induced by the grapheme additionally activated the left dorsolateral prefrontal cortex (DLPFC). This suggests that cognitive control processes become active to resolve the perceptual conflict resulting from synesthesia.

  10. Coding stimulus amplitude by correlated neural activity

    NASA Astrophysics Data System (ADS)

    Metzen, Michael G.; Ávila-Åkerberg, Oscar; Chacron, Maurice J.

    2015-04-01

    While correlated activity is observed ubiquitously in the brain, its role in neural coding has remained controversial. Recent experimental results have demonstrated that correlated but not single-neuron activity can encode the detailed time course of the instantaneous amplitude (i.e., envelope) of a stimulus. These results have furthermore demonstrated that such coding requires, and is optimal for, a nonzero level of neural variability. However, a theoretical understanding of these results is still lacking. Here we provide a comprehensive theoretical framework explaining these experimental findings. Specifically, we use linear response theory to derive an expression relating the correlation coefficient to the instantaneous stimulus amplitude, taking into account key single-neuron properties such as the firing rate and the variability as quantified by the coefficient of variation. The theoretical prediction was in excellent agreement with numerical simulations of various integrate-and-fire type neuron models for various parameter values. Further, we demonstrate a form of stochastic resonance, as optimal coding of stimulus variance by correlated activity occurs for a nonzero value of noise intensity. Thus, our results provide a theoretical explanation of the phenomenon by which correlated but not single-neuron activity can code for stimulus amplitude, and of how key single-neuron properties such as firing rate and variability influence such coding. Coding of stimulus amplitude by correlated but not single-neuron activity is thus predicted to be a ubiquitous feature of sensory processing for neurons responding to weak input.

  11. Visually induced gains in pitch discrimination: Linking audio-visual processing with auditory abilities.

    PubMed

    Møller, Cecilie; Højlund, Andreas; Bærentsen, Klaus B; Hansen, Niels Chr; Skewes, Joshua C; Vuust, Peter

    2018-05-01

    Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states that multisensory gain is maximal when responses to the unisensory constituents of a stimulus are weak. It is one of the basic principles underlying the multisensory processing of spatiotemporally corresponding crossmodal stimuli and is well established at both behavioral and neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues in pitch discrimination follows the PoIE at the interindividual level (i.e., varies with levels of auditory-only pitch discrimination ability). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than incongruent condition. The magnitude of the gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
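
    For reference, the sensitivity index d' used above is computed from hit and false-alarm rates as d' = z(H) - z(F), where z is the inverse of the standard normal CDF. A minimal sketch with made-up rates (the specific numbers are not from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical example: a congruent visual cue raises the hit rate while
# the false-alarm rate stays fixed, increasing d'.
baseline = d_prime(0.70, 0.20)   # ~1.37
with_cue = d_prime(0.85, 0.20)   # ~1.88
```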

  12. Arousal Rules: An Empirical Investigation into the Aesthetic Experience of Cross-Modal Perception with Emotional Visual Music

    PubMed Central

    Lee, Irene Eunyoung; Latchoumane, Charles-Francois V.; Jeong, Jaeseung

    2017-01-01

    Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible underlying mechanisms of affective evaluation in relation to audio and visual integration conditions between modalities (namely congruent, complementation, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (−) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted in subjects' emotional responses) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects. PMID:28421007

  13. Functionally segregated neural substrates for arbitrary audiovisual paired-association learning.

    PubMed

    Tanabe, Hiroki C; Honda, Manabu; Sadato, Norihiro

    2005-07-06

    To clarify the neural substrates and their dynamics during crossmodal association learning, we conducted functional magnetic resonance imaging (MRI) during audiovisual paired-association learning of delayed matching-to-sample tasks. Thirty subjects were involved in the study; 15 performed an audiovisual paired-association learning task, and the remainder completed a control visuo-visual task. Each trial consisted of the successive presentation of a pair of stimuli. Subjects were asked to identify predefined audiovisual or visuo-visual pairs by trial and error. Feedback for each trial was given regardless of whether the response was correct or incorrect. During the delay period, several areas showed an increase in the MRI signal as learning proceeded: crossmodal activity increased in unimodal areas corresponding to visual or auditory areas, and polymodal responses increased in the occipitotemporal junction and parahippocampal gyrus. This pattern was not observed in the visuo-visual intramodal paired-association learning task, suggesting that crossmodal associations might be formed by binding unimodal sensory areas via polymodal regions. In both the audiovisual and visuo-visual tasks, the MRI signal in the superior temporal sulcus (STS) in response to the second stimulus and feedback peaked during the early phase of learning and then decreased, indicating that the STS might be key to the creation of paired associations, regardless of stimulus type. In contrast to the activity changes in the regions discussed above, there was constant activity in the frontoparietal circuit during the delay period in both tasks, implying that the neural substrates for the formation and storage of paired associates are distinct from working memory circuits.

  14. Cross-modal metaphorical mapping of spoken emotion words onto vertical space.

    PubMed

    Montoro, Pedro R; Contreras, María José; Elosúa, María Rosa; Marmolejo-Ramos, Fernando

    2015-01-01

    From the field of embodied cognition, previous studies have reported evidence of metaphorical mapping of emotion concepts onto a vertical spatial axis. Most of the work on this topic has used visual words as the typical experimental stimuli. However, to our knowledge, no previous study has examined the association between affect and vertical space using a cross-modal procedure. The current research is a first step toward the study of the metaphorical mapping of emotions onto vertical space by means of an auditory to visual cross-modal paradigm. In the present study, we examined whether auditory words with an emotional valence can interact with the vertical visual space according to a 'positive-up/negative-down' embodied metaphor. The general method consisted in the presentation of a spoken word denoting a positive/negative emotion prior to the spatial localization of a visual target in an upper or lower position. In Experiment 1, the spoken words were passively heard by the participants and no reliable interaction between emotion concepts and bodily simulated space was found. In contrast, Experiment 2 required more active listening of the auditory stimuli. A metaphorical mapping of affect and space was evident but limited to the participants engaged in an emotion-focused task. Our results suggest that the association of affective valence and vertical space is not activated automatically during speech processing since an explicit semantic and/or emotional evaluation of the emotionally valenced stimuli was necessary to obtain an embodied effect. The results are discussed within the framework of the embodiment hypothesis.

  15. Cross-modal metaphorical mapping of spoken emotion words onto vertical space

    PubMed Central

    Montoro, Pedro R.; Contreras, María José; Elosúa, María Rosa; Marmolejo-Ramos, Fernando

    2015-01-01

    From the field of embodied cognition, previous studies have reported evidence of metaphorical mapping of emotion concepts onto a vertical spatial axis. Most of the work on this topic has used visual words as the typical experimental stimuli. However, to our knowledge, no previous study has examined the association between affect and vertical space using a cross-modal procedure. The current research is a first step toward the study of the metaphorical mapping of emotions onto vertical space by means of an auditory to visual cross-modal paradigm. In the present study, we examined whether auditory words with an emotional valence can interact with the vertical visual space according to a ‘positive-up/negative-down’ embodied metaphor. The general method consisted in the presentation of a spoken word denoting a positive/negative emotion prior to the spatial localization of a visual target in an upper or lower position. In Experiment 1, the spoken words were passively heard by the participants and no reliable interaction between emotion concepts and bodily simulated space was found. In contrast, Experiment 2 required more active listening of the auditory stimuli. A metaphorical mapping of affect and space was evident but limited to the participants engaged in an emotion-focused task. Our results suggest that the association of affective valence and vertical space is not activated automatically during speech processing since an explicit semantic and/or emotional evaluation of the emotionally valenced stimuli was necessary to obtain an embodied effect. The results are discussed within the framework of the embodiment hypothesis. PMID:26322007

  16. [Research on activity evolution of cerebral cortex and hearing rehabilitation of congenitally deaf children after cochlear implant].

    PubMed

    Wang, X J; Liang, M J; Zhang, J P; Huang, H; Zheng, Y Q

    2017-11-05

    Objective: Hearing rehabilitation outcomes differ significantly among congenitally deaf children after cochlear implantation (CI). We investigated the intrinsic mechanism affecting hearing rehabilitation from the perspective of evoked EEG source activity. Method: We collected ERP data from 23 patients and 10 control children at 0, 3, 6, 9 and 12 months after CI. According to their hearing rehabilitation during the 12 months after CI, the patients were divided into two groups: "good" and "poor" rehabilitation. We then used sLORETA to map the changes in the patients' cerebral cortex and compared them with the control group. Result: Cross-modal reorganization of the cerebral cortex exists in congenitally deaf children. After CI, the cross-modal reorganization gradually receded and the activity of the relevant cortex progressively normalized. After 12 months, there was a statistically significant difference (P < 0.05) between the "good" and "poor" groups in the temporal lobe and the associated cortex around the parietal lobe. Conclusion: The normalization of cross-modal reorganization reflects hearing rehabilitation after CI; in particular, the normalization of activity in the temporal lobe and the associated cortex around the parietal lobe influences the rehabilitation of auditory function to some extent. Detecting this mechanism has practical significance for hearing-recovery training and for evaluating hearing rehabilitation after CI. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.

  17. Perceiving similarity and comprehending metaphor.

    PubMed

    Marks, L E; Hammeal, R J; Bornstein, M H

    1987-01-01

    We conducted a series of 3 experiments to assess the comprehension of 4 types of cross-modal (synesthetic) similarities in nearly 500 3.5-13.5-year-old children and more than 100 adults. We tested both perceptual and verbal (metaphoric) modes. Children of all ages and adults matched pitch to brightness and loudness to brightness, thereby showing that even very young children recognize perceptual similarities between hearing and vision. Children did not consistently recognize similarity between pitch and size until about age 11. This difference in developmental timetables is compatible with the view that pitch-brightness and loudness-brightness similarities are intrinsic characteristics of perception (characteristics based, perhaps, on common sensory codes), whereas pitch-size similarity may be learned (perhaps through association of size with resonance properties). In a parallel verbal task, even 4-year-old children showed at least some capacity to translate meanings metaphorically from one modality to another (e.g., rating "low pitched" as dim and "high pitched" as bright). But not all literal meanings produced metaphoric equivalents in the youngest children (e.g., rating "sunlight" brighter but not louder than "moonlight"). Improvements with age in making metaphoric translations of synesthetic expressions paralleled increasing differentiation of meanings along literal dimensions and increasing capacity to integrate meanings of components in compound expressions. We postulate that perceptual knowledge about objects and events is represented in terms of locations in a multidimensional space; cross-modal similarities imply that the space is also multimodal. Verbal processes later gain access to this graded perceptual knowledge, thus permitting the interpretation of synesthetic metaphors according to the rules of cross-modal perception.

  18. On the spatial specificity of audiovisual crossmodal exogenous cuing effects.

    PubMed

    Lee, Jae; Spence, Charles

    2017-06-01

    It is generally accepted that the presentation of an auditory cue will direct an observer's spatial attention to the region of space from where it originates and therefore facilitate responses to visual targets presented there rather than at a different position within the cued hemifield. However, to date, there has been surprisingly limited evidence published in support of such within-hemifield crossmodal exogenous spatial cuing effects. Here, we report two experiments designed to investigate within- and between-hemifield spatial cuing effects in the case of audiovisual exogenous covert orienting. Auditory cues were presented from one of four frontal loudspeakers (two on either side of central fixation). There were eight possible visual target locations (one above and another below each of the loudspeakers). The auditory cues were evenly separated laterally by 30° in Experiment 1, and by 10° in Experiment 2. The potential cue and target locations were separated vertically by approximately 19° in Experiment 1, and by 4° in Experiment 2. On each trial, the participants made a speeded elevation (i.e., up vs. down) discrimination response to the visual target following the presentation of a spatially-nonpredictive auditory cue. Within-hemifield spatial cuing effects were observed only when the auditory cues were presented from the inner locations. Between-hemifield spatial cuing effects were observed in both experiments. Taken together, these results demonstrate that crossmodal exogenous shifts of spatial attention depend on the eccentricity of both the cue and target in a way that has not been made explicit by previous research. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. The contribution of visual areas to speech comprehension: a PET study in cochlear implants patients and normal-hearing subjects.

    PubMed

    Giraud, Anne Lise; Truy, Eric

    2002-01-01

    Early visual cortex can be recruited by meaningful sounds in the absence of visual information. This occurs in particular in cochlear implant (CI) patients whose dependency on visual cues in speech comprehension is increased. Such cross-modal interaction mirrors the response of early auditory cortex to mouth movements (speech reading) and may reflect the natural expectancy of the visual counterpart of sounds, lip movements. Here we pursue the hypothesis that visual activations occur specifically in response to meaningful sounds. We performed PET in both CI patients and controls, while subjects listened either to their native language or to a completely unknown language. A recruitment of early visual cortex, the left posterior inferior temporal gyrus (ITG) and the left superior parietal cortex was observed in both groups. While no further activation occurred in the group of normal-hearing subjects, CI patients additionally recruited the right perirhinal/fusiform and mid-fusiform, the right temporo-occipito-parietal (TOP) junction and the left inferior prefrontal cortex (LIPF, Broca's area). This study confirms a participation of visual cortical areas in semantic processing of speech sounds. Observation of early visual activation in normal-hearing subjects shows that auditory-to-visual cross-modal effects can also be recruited under natural hearing conditions. In cochlear implant patients, speech activates the mid-fusiform gyrus in the vicinity of the so-called face area. This suggests that specific cross-modal interaction involving advanced stages in the visual processing hierarchy develops after cochlear implantation and may be the correlate of increased usage of lip-reading.

  20. Arousal Rules: An Empirical Investigation into the Aesthetic Experience of Cross-Modal Perception with Emotional Visual Music.

    PubMed

    Lee, Irene Eunyoung; Latchoumane, Charles-Francois V; Jeong, Jaeseung

    2017-01-01

    Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible underlying mechanisms of affective evaluation in relation to audio and visual integration conditions between modalities (namely congruent, complementation, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (-) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted in subjects' emotional responses) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects.

  1. Fast transfer of crossmodal time interval training.

    PubMed

    Chen, Lihan; Zhou, Xiaolin

    2014-06-01

    Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.

  2. 75 FR 1115 - Invitation for Public Comment on Strategic Research Direction, Research Priority Areas and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-08

    ..., truly multimodal transportation system that provides the traveling public and U.S. businesses with safe... pursuits at that time. The Department is now pursuing a more cross-modal, collaborative and strategic...

  3. Atypical white-matter microstructure in congenitally deaf adults: A region of interest and tractography study using diffusion-tensor imaging.

    PubMed

    Karns, Christina M; Stevens, Courtney; Dow, Mark W; Schorr, Emily M; Neville, Helen J

    2017-01-01

    Considerable research documents the cross-modal reorganization of auditory cortices as a consequence of congenital deafness, with remapped functions that include visual and somatosensory processing of both linguistic and nonlinguistic information. Structural changes accompany this cross-modal neuroplasticity, but precisely which structural changes accompany congenital and early deafness, and whether there are group differences in hemispheric asymmetries, remain to be established. Here, we used diffusion tensor imaging (DTI) to examine microstructural white-matter changes accompanying cross-modal reorganization in 23 adults who were genetically, profoundly, and congenitally deaf and had learned sign language from infancy, compared with 26 hearing controls who participated in our previous fMRI studies of cross-modal neuroplasticity. In contrast to prior literature using a whole-brain approach, we introduce a semiautomatic method for demarcating auditory regions in which regions of interest (ROIs) are defined on the normalized white-matter skeleton for all participants, projected into each participant's native space, and manually constrained to anatomical boundaries. White-matter ROIs were left and right Heschl's gyrus (HG), left and right anterior superior temporal gyrus (aSTG), left and right posterior superior temporal gyrus (pSTG), as well as one tractography-defined region in the splenium of the corpus callosum connecting homologous left and right superior temporal regions (pCC). Within these regions, we measured fractional anisotropy (FA), radial diffusivity (RD), axial diffusivity (AD), and white-matter volume. Congenitally deaf adults had reduced FA and volume in white-matter structures underlying bilateral HG, aSTG, and pSTG, and reduced FA in pCC. In HG and pCC, this reduction in FA corresponded with increased RD, but differences in aSTG and pSTG could not be localized to alterations in RD or AD. Direct statistical tests of hemispheric asymmetries in these differences indicated the most prominent effects in pSTG, where the largest differences between groups occurred in the right hemisphere. Other regions did not show significant hemispheric asymmetries in group differences. Taken together, these results indicate that atypical white-matter microstructure and reduced volume underlie regions of superior temporal primary and association auditory cortex, and they introduce a robust method for quantifying volumetric and white-matter microstructural differences that can be applied to future studies of special populations. Published by Elsevier B.V.

  4. Origin and Consequences of the Relationship between Protein Mean and Variance

    PubMed Central

    Vallania, Francesco Luigi Massimo; Sherman, Marc; Goodwin, Zane; Mogno, Ilaria; Cohen, Barak Alon; Mitra, Robi David

    2014-01-01

    Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law-like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law-like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome. PMID:25062021

  5. Cross-modal work helps OMC improve the safety of commercial transportation

    DOT National Transportation Integrated Search

    1997-01-01

    This article describes the Commercial Vehicle Information System (CVIS), designed to deploy a national safety program for the U.S. commercial trucking fleet. CVIS is built around a safety analysis algorithm called SafeStat which constructs a profile ...

  6. Experimental Test of the Differential Fluctuation Theorem and a Generalized Jarzynski Equality for Arbitrary Initial States

    NASA Astrophysics Data System (ADS)

    Hoang, Thai M.; Pan, Rui; Ahn, Jonghoon; Bang, Jaehoon; Quan, H. T.; Li, Tongcang

    2018-02-01

    Nonequilibrium processes of small systems such as molecular machines are ubiquitous in biology, chemistry, and physics but are often challenging to comprehend. In the past two decades, several exact thermodynamic relations of nonequilibrium processes, collectively known as fluctuation theorems, have been discovered and provided critical insights. These fluctuation theorems are generalizations of the second law and can be unified by a differential fluctuation theorem. Here we perform the first experimental test of the differential fluctuation theorem using an optically levitated nanosphere in both underdamped and overdamped regimes and in both spatial and velocity spaces. We also test several theorems that can be obtained from it directly, including a generalized Jarzynski equality that is valid for arbitrary initial states, and the Hummer-Szabo relation. Our study experimentally verifies these fundamental theorems and initiates the experimental study of stochastic energetics with the instantaneous velocity measurement.
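
    The Jarzynski equality mentioned above, ⟨exp(-βW)⟩ = exp(-βΔF), can be checked numerically in the simplest textbook case of Gaussian-distributed work, where it implies ΔF = ⟨W⟩ - βσ_W²/2. This toy check with arbitrary parameters is a sketch, not the paper's levitated-nanosphere experiment:

```python
import math
import random
import statistics

random.seed(1)

beta = 1.0               # inverse temperature (units with kT = 1)
mean_w, sd_w = 2.0, 1.0  # Gaussian work distribution (toy model)

# Jarzynski: <exp(-beta*W)> = exp(-beta*dF). For Gaussian work this gives
# dF = mean_w - beta*sd_w**2/2, i.e. less than the mean work (dissipation).
work = [random.gauss(mean_w, sd_w) for _ in range(200_000)]
dF_est = -math.log(statistics.fmean(math.exp(-beta * w) for w in work)) / beta
dF_exact = mean_w - beta * sd_w ** 2 / 2  # 1.5
```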

  7. Time-resolved observation of thermally activated rupture of a capillary-condensed water nanobridge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bak, Wan; Sung, Baekman; Kim, Jongwoo

    2015-01-05

    The capillary-condensed liquid bridge is one of the most ubiquitous forms of liquid in nature and contributes significantly to adhesion and friction of biological molecules as well as microscopic objects. Despite its important role in nanoscience and technology, the rupture process of the bridge is not well understood and needs more experimental work. Here, we report real-time observation of rupture of a capillary-condensed water nanobridge in ambient condition. During slow and stepwise stretch of the nanobridge, we measured the activation time for rupture, or the latency time required for the bridge breakup. By statistical analysis of the time-resolved distribution of activation time, we show that rupture is a thermally activated stochastic process and follows the Poisson statistics. In particular, from the Arrhenius law that the rupture rate satisfies, we estimate the position-dependent activation energies for the capillary-bridge rupture.
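
    The statistics described above (Poisson rupture with an Arrhenius rate) imply exponentially distributed latency times whose mean grows exponentially with the activation barrier. A sketch with arbitrary parameters, not the paper's measured values:

```python
import math
import random
import statistics

random.seed(3)

def rupture_latencies(barrier, kT=1.0, attempt_rate=1.0, n=100_000):
    """Thermally activated rupture as a Poisson process: the rate follows
    the Arrhenius law k = attempt_rate * exp(-barrier/kT), so latency
    (activation) times are exponentially distributed with mean 1/k."""
    k = attempt_rate * math.exp(-barrier / kT)
    return [random.expovariate(k) for _ in range(n)]

lat = rupture_latencies(barrier=2.0)
mean_latency = statistics.fmean(lat)        # ~exp(2), i.e. ~7.4 time units
cv = statistics.stdev(lat) / mean_latency   # ~1, the Poisson signature
```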

  8. Leveraging social system networks in ubiquitous high-data-rate health systems.

    PubMed

    Massey, Tammara; Marfia, Gustavo; Stoelting, Adam; Tomasi, Riccardo; Spirito, Maurizio A; Sarrafzadeh, Majid; Pau, Giovanni

    2011-05-01

    Social system networks with high data rates and limited storage will discard data if the system cannot connect and upload the data to a central server. We address the challenge of limited storage capacity in mobile health systems during network partitions with a heuristic that achieves efficiency in storage capacity by modifying the granularity of the medical data during long intercontact periods. Patterns in the connectivity, reception rate, distance, and location are extracted from the social system network and leveraged in the global algorithm and online heuristic. In the global algorithm, the stochastic nature of the data is modeled with maximum likelihood estimation based on the distribution of the reception rates. In the online heuristic, the correlation between system position and the reception rate is combined with patterns in human mobility to estimate the intracontact and intercontact time. The online heuristic performs well with a low data loss of 2.1%-6.1%.

  9. Multiscale Modeling of Virus Entry via Receptor-Mediated Endocytosis

    NASA Astrophysics Data System (ADS)

    Liu, Jin

    2012-11-01

    Virus infections are ubiquitous and remain major threats to human health worldwide. Viruses are intracellular parasites and must enter host cells to initiate infection. Receptor-mediated endocytosis is the most common entry pathway taken by viruses; the whole process is highly complex and dictated by various events, such as virus motions, membrane deformations, receptor diffusion, and ligand-receptor reactions, occurring at multiple length and time scales. We develop a multiscale model for virus entry through receptor-mediated endocytosis. The binding of the virus to the cell surface is based on a mesoscale three-dimensional stochastic adhesion model; the internalization (endocytosis) of the virus and the cellular membrane deformation are based on a discretization of the Helfrich Hamiltonian in a curvilinear space using the Monte Carlo method. The multiscale model combines these two models. We will apply this model to study herpes simplex virus entry into B78 cells and compare the model predictions with experimental measurements.
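
    A toy sketch of the Monte Carlo ingredient mentioned above: a Metropolis step of the kind used to sample configurations under a bending energy. The quadratic one-variable energy below is a stand-in for illustration, not the paper's discretized Helfrich Hamiltonian on a curvilinear membrane mesh.

```python
import math
import random

random.seed(3)

def metropolis_step(state, energy, proposal, kT=1.0):
    """One Metropolis move: accept downhill moves always, uphill moves
    with Boltzmann probability exp(-dE / kT)."""
    cand = proposal(state)
    d_e = energy(cand) - energy(state)
    if d_e <= 0 or random.random() < math.exp(-d_e / kT):
        return cand      # accept
    return state         # reject, keep the old configuration

kappa = 10.0                                  # illustrative bending modulus
energy = lambda c: 0.5 * kappa * c * c        # toy quadratic bending energy
proposal = lambda c: c + random.uniform(-0.2, 0.2)

c, samples = 1.0, []
for i in range(40000):
    c = metropolis_step(c, energy, proposal)
    if i >= 5000:                             # discard burn-in
        samples.append(c)

# Equipartition check: equilibrium variance should approach kT / kappa.
var = sum(x * x for x in samples) / len(samples)
```
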

  10. On the emergence of a generalised Gamma distribution. Application to traded volume in financial markets

    NASA Astrophysics Data System (ADS)

    Duarte Queirós, S. M.

    2005-08-01

    This letter reports on a stochastic dynamical scenario whose associated stationary probability density function is exactly a generalised form, with power-law instead of exponential decay, of the ubiquitous Gamma distribution. This generalisation, also known as the F-distribution, was first proposed empirically to fit high-frequency stock traded volume distributions in financial markets and has been verified in experiments with granular material. The dynamical assumption presented herein is based on local temporal fluctuations of the average value of the observable under study. This proposal is related to superstatistics and thus to the current nonextensive statistical mechanics framework. For the specific case of stock traded volume, we connect the local fluctuations in the mean stock traded volume with the typical herding behaviour presented by financial traders. Finally, NASDAQ 1- and 2-minute stock traded volume sequences and probability density functions are numerically reproduced.
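
    The superstatistical mechanism can be sketched numerically: a Gamma-distributed observable whose local mean itself fluctuates (here also Gamma-distributed) yields a marginal distribution with a fatter, power-law-like tail instead of an exponential one. The shape parameters below are illustrative, not fitted to NASDAQ data.

```python
import random

random.seed(2)

n = 50000

# Plain Gamma observable with a fixed scale (exponential tail).
plain = [random.gammavariate(2.0, 1.0) for _ in range(n)]

# Superstatistics: the local scale fluctuates around 1 (Gamma(3,1)/3),
# which fattens the tail of the marginal distribution.
mixed = [random.gammavariate(2.0, random.gammavariate(3.0, 1.0) / 3.0)
         for _ in range(n)]

tail = 8.0
p_plain = sum(x > tail for x in plain) / n
p_mixed = sum(x > tail for x in mixed) / n
```

    Comparing the two tail probabilities shows the fluctuating-mean mixture exceeds the plain Gamma well beyond the bulk of the distribution.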

  11. "Turn Up the Taste": Assessing the Role of Taste Intensity and Emotion in Mediating Crossmodal Correspondences between Basic Tastes and Pitch.

    PubMed

    Wang, Qian Janice; Wang, Sheila; Spence, Charles

    2016-05-01

    People intuitively match basic tastes to sounds of different pitches, and the matches that they make tend to be consistent across individuals. It is, though, not altogether clear what governs such crossmodal mappings between taste and auditory pitch. Here, we assess whether variations in taste intensity influence the matching of taste to pitch as well as the role of emotion in mediating such crossmodal correspondences. Participants were presented with 5 basic tastants at 3 concentrations. In Experiment 1, the participants rated the tastants in terms of their emotional arousal and valence/pleasantness, and selected a musical note (from 19 possible pitches ranging from C2 to C8) and loudness that best matched each tastant. In Experiment 2, the participants made emotion ratings and note matches in separate blocks of trials, then made emotion ratings for all 19 notes. Overall, the results of the 2 experiments revealed that both taste quality and concentration exerted a significant effect on participants' loudness selection, taste intensity rating, and valence and arousal ratings. Taste quality, not concentration levels, had a significant effect on participants' choice of pitch, but a significant positive correlation was observed between individual perceived taste intensity and pitch choice. A significant and strong correlation was also demonstrated between participants' valence assessments of tastants and their valence assessments of the best-matching musical notes. These results therefore provide evidence that: 1) pitch-taste correspondences are primarily influenced by taste quality, and to a lesser extent, by perceived intensity; and 2) such correspondences may be mediated by valence/pleasantness. © The Author 2016. Published by Oxford University Press.

  12. The Neural Basis of Taste-visual Modal Conflict Control in Appetitive and Aversive Gustatory Context.

    PubMed

    Xiao, Xiao; Dupuis-Roy, Nicolas; Jiang, Jun; Du, Xue; Zhang, Mingmin; Zhang, Qinglin

    2018-02-21

    The functional magnetic resonance imaging (fMRI) technique was used to investigate brain activations related to conflict control in a taste-visual cross-modal pairing task. On each trial, participants had to decide whether the taste of a gustatory stimulus matched or did not match the expected taste of the food item depicted in an image. There were four conditions: Negative match (NM; sour gustatory stimulus and image of sour food), negative mismatch (NMM; sour gustatory stimulus and image of sweet food), positive match (PM; sweet gustatory stimulus and image of sweet food), positive mismatch (PMM; sweet gustatory stimulus and image of sour food). Blood oxygenation level-dependent (BOLD) contrasts between the NMM and the NM conditions revealed an increased activity in the middle frontal gyrus (MFG) (BA 6), the lingual gyrus (LG) (BA 18), and the postcentral gyrus. Furthermore, the NMM minus NM BOLD differences observed in the MFG were correlated with the NMM minus NM differences in response time. These activations were specifically associated with conflict control during the aversive gustatory stimulation. BOLD contrasts between the PMM and the PM condition revealed no significant positive activation, which supported the hypothesis that the human brain is especially sensitive to aversive stimuli. Altogether, these results suggest that the MFG is associated with the taste-visual cross-modal conflict control. A possible role of the LG as an information conflict detector at an early perceptual stage is further discussed, along with a possible involvement of the postcentral gyrus in the processing of the taste-visual cross-modal sensory contrast. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  13. Increasing Working Memory Load Reduces Processing of Cross-Modal Task-Irrelevant Stimuli Even after Controlling for Task Difficulty and Executive Capacity

    PubMed Central

    Simon, Sharon S.; Tusch, Erich S.; Holcomb, Phillip J.; Daffner, Kirk R.

    2016-01-01

    The classic account of the load theory (LT) of attention suggests that increasing cognitive load leads to greater processing of task-irrelevant stimuli due to competition for limited executive resource that reduces the ability to actively maintain current processing priorities. Studies testing this hypothesis have yielded widely divergent outcomes. The inconsistent results may, in part, be related to variability in executive capacity (EC) and task difficulty across subjects in different studies. Here, we used a cross-modal paradigm to investigate whether augmented working memory (WM) load leads to increased early distracter processing, and controlled for the potential confounders of EC and task difficulty. Twenty-three young subjects were engaged in a primary visual WM task, under high and low load conditions, while instructed to ignore irrelevant auditory stimuli. Demands of the high load condition were individually titrated to make task difficulty comparable across subjects with differing EC. Event-related potentials (ERPs) were used to measure neural activity in response to stimuli presented in both the task relevant modality (visual) and task-irrelevant modality (auditory). Behavioral results indicate that the load manipulation and titration procedure of the primary visual task were successful. ERPs demonstrated that in response to visual target stimuli, there was a load-related increase in the posterior slow wave, an index of sustained attention and effort. Importantly, under high load, there was a decrease of the auditory N1 in response to distracters, a marker of early auditory processing. These results suggest that increased WM load is associated with enhanced attentional engagement and protection from distraction in a cross-modal setting, even after controlling for task difficulty and EC. Our findings challenge the classic LT and offer support for alternative models. PMID:27536226

  14. “Turn Up the Taste”: Assessing the Role of Taste Intensity and Emotion in Mediating Crossmodal Correspondences between Basic Tastes and Pitch

    PubMed Central

    Wang, Sheila; Spence, Charles

    2016-01-01

    People intuitively match basic tastes to sounds of different pitches, and the matches that they make tend to be consistent across individuals. It is, though, not altogether clear what governs such crossmodal mappings between taste and auditory pitch. Here, we assess whether variations in taste intensity influence the matching of taste to pitch as well as the role of emotion in mediating such crossmodal correspondences. Participants were presented with 5 basic tastants at 3 concentrations. In Experiment 1, the participants rated the tastants in terms of their emotional arousal and valence/pleasantness, and selected a musical note (from 19 possible pitches ranging from C2 to C8) and loudness that best matched each tastant. In Experiment 2, the participants made emotion ratings and note matches in separate blocks of trials, then made emotion ratings for all 19 notes. Overall, the results of the 2 experiments revealed that both taste quality and concentration exerted a significant effect on participants’ loudness selection, taste intensity rating, and valence and arousal ratings. Taste quality, not concentration levels, had a significant effect on participants’ choice of pitch, but a significant positive correlation was observed between individual perceived taste intensity and pitch choice. A significant and strong correlation was also demonstrated between participants’ valence assessments of tastants and their valence assessments of the best-matching musical notes. These results therefore provide evidence that: 1) pitch–taste correspondences are primarily influenced by taste quality, and to a lesser extent, by perceived intensity; and 2) such correspondences may be mediated by valence/pleasantness. PMID:26873934

  15. Increasing Working Memory Load Reduces Processing of Cross-Modal Task-Irrelevant Stimuli Even after Controlling for Task Difficulty and Executive Capacity.

    PubMed

    Simon, Sharon S; Tusch, Erich S; Holcomb, Phillip J; Daffner, Kirk R

    2016-01-01

    The classic account of the load theory (LT) of attention suggests that increasing cognitive load leads to greater processing of task-irrelevant stimuli due to competition for limited executive resource that reduces the ability to actively maintain current processing priorities. Studies testing this hypothesis have yielded widely divergent outcomes. The inconsistent results may, in part, be related to variability in executive capacity (EC) and task difficulty across subjects in different studies. Here, we used a cross-modal paradigm to investigate whether augmented working memory (WM) load leads to increased early distracter processing, and controlled for the potential confounders of EC and task difficulty. Twenty-three young subjects were engaged in a primary visual WM task, under high and low load conditions, while instructed to ignore irrelevant auditory stimuli. Demands of the high load condition were individually titrated to make task difficulty comparable across subjects with differing EC. Event-related potentials (ERPs) were used to measure neural activity in response to stimuli presented in both the task relevant modality (visual) and task-irrelevant modality (auditory). Behavioral results indicate that the load manipulation and titration procedure of the primary visual task were successful. ERPs demonstrated that in response to visual target stimuli, there was a load-related increase in the posterior slow wave, an index of sustained attention and effort. Importantly, under high load, there was a decrease of the auditory N1 in response to distracters, a marker of early auditory processing. These results suggest that increased WM load is associated with enhanced attentional engagement and protection from distraction in a cross-modal setting, even after controlling for task difficulty and EC. Our findings challenge the classic LT and offer support for alternative models.

  16. Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.

    PubMed

    Morrill, Ryan J; Hasenstaub, Andrea R

    2018-03-14

    The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information are integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors.

  17. Developmental and cross-modal plasticity in deafness: evidence from the P1 and N1 event related potentials in cochlear implanted children.

    PubMed

    Sharma, Anu; Campbell, Julia; Cardon, Garrett

    2015-02-01

    Cortical development is dependent on extrinsic stimulation. As such, sensory deprivation, as in congenital deafness, can dramatically alter functional connectivity and growth in the auditory system. Cochlear implants ameliorate deprivation-induced delays in maturation by directly stimulating the central nervous system, and thereby restoring auditory input. The scenario in which hearing is lost due to deafness and then reestablished via a cochlear implant provides a window into the development of the central auditory system. Converging evidence from electrophysiologic and brain imaging studies of deaf animals and children fitted with cochlear implants has allowed us to elucidate the details of the time course for auditory cortical maturation under conditions of deprivation. Here, we review how the P1 cortical auditory evoked potential (CAEP) provides useful insight into sensitive period cut-offs for development of the primary auditory cortex in deaf children fitted with cochlear implants. Additionally, we present new data on similar sensitive period dynamics in higher-order auditory cortices, as measured by the N1 CAEP in cochlear implant recipients. Furthermore, cortical re-organization, secondary to sensory deprivation, may take the form of compensatory cross-modal plasticity. We provide new case-study evidence that cross-modal re-organization, in which intact sensory modalities (i.e., vision and somatosensation) recruit cortical regions associated with deficient sensory modalities (i.e., auditory) in cochlear implanted children may influence their behavioral outcomes with the implant. Improvements in our understanding of developmental neuroplasticity in the auditory system should lead to harnessing central auditory plasticity for superior clinical technique. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. How visual timing and form information affect speech and non-speech processing.

    PubMed

    Kim, Jeesun; Davis, Chris

    2014-10-01

    Auditory speech processing is facilitated when the talker's face/head movements are seen. This effect is typically explained in terms of visual speech providing form and/or timing information. We determined the effect of both types of information on a speech/non-speech task (non-speech stimuli were spectrally rotated speech). All stimuli were presented paired with the talker's static or moving face. Two types of moving face stimuli were used: full-face versions (both spoken form and timing information available) and modified face versions (only timing information provided by peri-oral motion available). The results showed that the peri-oral timing information facilitated response time for speech and non-speech stimuli compared to a static face. An additional facilitatory effect was found for full-face versions compared to the timing condition; this effect only occurred for speech stimuli. We propose the timing effect was due to cross-modal phase resetting; the form effect to cross-modal priming. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Preservation of crossmodal selective attention in healthy aging

    PubMed Central

    Hugenschmidt, Christina E.; Peiffer, Ann M.; McCoy, Thomas P.; Hayasaka, Satoru; Laurienti, Paul J.

    2010-01-01

    The goal of the present study was to determine if older adults benefited from attention to a specific sensory modality in a voluntary attention task and evidenced changes in voluntary or involuntary attention when compared to younger adults. Suppressing and enhancing effects of voluntary attention were assessed using two cued forced-choice tasks, one that asked participants to localize and one that asked them to categorize visual and auditory targets. Involuntary attention was assessed using the same tasks, but with no attentional cues. The effects of attention were evaluated using traditional comparisons of means and Cox proportional hazards models. All analyses showed that older adults benefited behaviorally from selective attention in both visual and auditory conditions, including robust suppressive effects of attention. Of note, the performance of the older adults was commensurate with that of younger adults in almost all analyses, suggesting that older adults can successfully engage crossmodal attention processes. Thus, age-related increases in distractibility across sensory modalities are likely due to mechanisms other than deficits in attentional processing. PMID:19404621

  20. Improving visual spatial working memory in younger and older adults: effects of cross-modal cues.

    PubMed

    Curtis, Ashley F; Turner, Gary R; Park, Norman W; Murtha, Susan J E

    2017-11-06

    Spatially informative auditory and vibrotactile (cross-modal) cues can facilitate attention, but little is known about how similar cues influence visual spatial working memory (WM) across the adult lifespan. We investigated the effects of cues (spatially informative or alerting pre-cues vs. no cues), cue modality (auditory vs. vibrotactile vs. visual), memory array size (four vs. six items), and maintenance delay (900 vs. 1800 ms) on visual spatial location WM recognition accuracy in younger adults (YA) and older adults (OA). We observed a significant interaction between spatially informative pre-cue type, array size, and delay. OA and YA benefitted equally from spatially informative pre-cues, suggesting that attentional orienting prior to WM encoding, regardless of cue modality, is preserved with age. Contrary to predictions, alerting pre-cues generally impaired performance in both age groups, suggesting that maintaining a vigilant state of arousal by facilitating the alerting attention system does not help visual spatial location WM.

  1. Does working memory capacity predict cross-modally induced failures of awareness?

    PubMed

    Kreitz, Carina; Furley, Philip; Simons, Daniel J; Memmert, Daniel

    2016-01-01

    People often fail to notice unexpected stimuli when they are focusing attention on another task. Most studies of this phenomenon address visual failures induced by visual attention tasks (inattentional blindness). Yet, such failures also occur within audition (inattentional deafness), and people can even miss unexpected events in one sensory modality when focusing attention on tasks in another modality. Such cross-modal failures are revealing because they suggest the existence of a common, central resource limitation. And, such central limits might be predicted from individual differences in cognitive capacity. We replicated earlier evidence, establishing substantial rates of inattentional deafness during a visual task and inattentional blindness during an auditory task. However, neither individual working memory capacity nor the ability to perform the primary task predicted noticing in either modality. Thus, individual differences in cognitive capacity did not predict failures of awareness even though the failures presumably resulted from central resource limitations. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Cross-modal orienting of visual attention.

    PubMed

    Hillyard, Steven A; Störmer, Viola S; Feng, Wenfeng; Martinez, Antigona; McDonald, John J

    2016-03-01

    This article reviews a series of experiments that combined behavioral and electrophysiological recording techniques to explore the hypothesis that salient sounds attract attention automatically and facilitate the processing of visual stimuli at the sound's location. This cross-modal capture of visual attention was found to occur even when the attracting sound was irrelevant to the ongoing task and was non-predictive of subsequent events. A slow positive component in the event-related potential (ERP) that was localized to the visual cortex was found to be closely coupled with the orienting of visual attention to a sound's location. This neural sign of visual cortex activation was predictive of enhanced perceptual processing and was paralleled by a desynchronization (blocking) of the ongoing occipital alpha rhythm. Further research is needed to determine the nature of the relationship between the slow positive ERP evoked by the sound and the alpha desynchronization and to understand how these electrophysiological processes contribute to improved visual-perceptual processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Cross-modal extinction in a boy with severely autistic behaviour and high verbal intelligence.

    PubMed

    Bonneh, Yoram S; Belmonte, Matthew K; Pei, Francesca; Iversen, Portia E; Kenet, Tal; Akshoomoff, Natacha; Adini, Yael; Simon, Helen J; Moore, Christopher I; Houde, John F; Merzenich, Michael M

    2008-07-01

    Anecdotal reports from individuals with autism suggest a loss of awareness to stimuli from one modality in the presence of stimuli from another. Here we document such a case in a detailed study of A.M., a 13-year-old boy with autism in whom significant autistic behaviours are combined with an uneven IQ profile of superior verbal and low performance abilities. Although A.M.'s speech is often unintelligible, and his behaviour is dominated by motor stereotypies and impulsivity, he can communicate by typing or pointing independently within a letter board. A series of experiments using simple and highly salient visual, auditory, and tactile stimuli demonstrated a hierarchy of cross-modal extinction, in which auditory information extinguished other modalities at various levels of processing. A.M. also showed deficits in shifting and sustaining attention. These results provide evidence for monochannel perception in autism and suggest a general pattern of winner-takes-all processing in which a stronger stimulus-driven representation dominates behaviour, extinguishing weaker representations.

  4. Seeing a haptically explored face: visual facial-expression aftereffect from haptic adaptation to a face.

    PubMed

    Matsumiya, Kazumichi

    2013-10-01

    Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.

  5. Is 9 louder than 1? Audiovisual cross-modal interactions between number magnitude and judged sound loudness.

    PubMed

    Alards-Tomalin, Doug; Walker, Alexander C; Shaw, Joshua D M; Leboe-McGowan, Launa C

    2015-09-01

    The cross-modal impact of number magnitude (i.e. Arabic digits) on perceived sound loudness was examined. Participants compared a target sound's intensity level against a previously heard reference sound (which they judged as quieter or louder). Paired with each target sound was a task irrelevant Arabic digit that varied in magnitude, being either small (1, 2, 3) or large (7, 8, 9). The degree to which the sound and the digit were synchronized was manipulated, with the digit and sound occurring simultaneously in Experiment 1, and the digit preceding the sound in Experiment 2. Firstly, when target sounds and digits occurred simultaneously, sounds paired with large digits were categorized as loud more frequently than sounds paired with small digits. Secondly, when the events were separated, number magnitude ceased to bias sound intensity judgments. In Experiment 3, the events were still separated, however the participants held the number in short-term memory. In this instance the bias returned. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Rapid modulation of spoken word recognition by visual primes.

    PubMed

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2016-02-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.

  7. Rapid modulation of spoken word recognition by visual primes

    PubMed Central

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J.

    2015-01-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics. PMID:26516296

  8. Interidentity memory transfer in dissociative identity disorder.

    PubMed

    Kong, Lauren L; Allen, John J B; Glisky, Elizabeth L

    2008-08-01

    Controversy surrounding dissociative identity disorder (DID) has focused on conflicting findings regarding the validity and nature of interidentity amnesia, illustrating the need for objective methods of examining amnesia that can discriminate between explicit and implicit memory transfer. In the present study, the authors used a cross-modal manipulation designed to mitigate implicit memory effects. Explicit memory transfer between identities was examined in 7 DID participants and 34 matched control participants. After words were presented to one identity auditorily, the authors tested another identity for memory of those words in the visual modality using an exclusion paradigm. Despite self-reported interidentity amnesia, memory for experimental stimuli transferred between identities. DID patients showed no superior ability to compartmentalize information, as would be expected with interidentity amnesia. The cross-modal nature of the test makes it unlikely that memory transfer was implicit. These findings demonstrate that subjective reports of interidentity amnesia are not necessarily corroborated by objective tests of explicit memory transfer. Copyright (c) 2008 APA, all rights reserved.

  9. Suppression and Working Memory in Auditory Comprehension of L2 Narratives: Evidence from Cross-Modal Priming.

    PubMed

    Wu, Shiyu; Ma, Zheng

    2016-10-01

    Using a cross-modal priming task, the present study explores whether Chinese-English bilinguals process goal related information during auditory comprehension of English narratives like native speakers. Results indicate that English native speakers adopted both mechanisms of suppression and enhancement to modulate the activation of goals and keep track of the "causal path" in narrative events and that L1 speakers with higher working memory (WM) capacity are more skilled at attenuating interference. L2 speakers, however, experienced the phenomenon of "facilitation-without-inhibition." Their difficulty in suppressing irrelevant information was related to their performance in the test of working memory capacity. For the L2 group with greater working memory capacity, the effects of both enhancement and suppression were found. These findings are discussed in light of a landscape model of L2 text comprehension which highlights the need for WM to be incorporated into comprehensive models of L2 processing as well as theories of SLA.

  10. Image recovery by removing stochastic artefacts identified as local asymmetries

    NASA Astrophysics Data System (ADS)

    Osterloh, K.; Bücherl, T.; Zscherpel, U.; Ewert, U.

    2012-04-01

    Stochastic artefacts are frequently encountered in digital radiography and tomography with neutrons. Most obviously, they are caused by ubiquitous scattered radiation hitting the CCD sensor. They appear as scattered dots and, at higher frequencies of occurrence, may obscure the image. Some of these dotted interferences vary with time; a large portion of them, however, remains persistent, so the problem cannot be resolved by collecting stacks of images and merging them into a median image. The situation becomes even worse in computed tomography (CT), where each artefact causes a circular pattern in the reconstructed plane. Therefore, these stochastic artefacts have to be removed completely and automatically while leaving the original image content untouched. A simplified image acquisition and artefact removal tool was developed at BAM and is available to interested users. Furthermore, an algorithm complying with all the requirements mentioned above was developed that reliably removes artefacts, even those exceeding the size of a single pixel, without affecting other parts of the image. It consists of an iterative two-step procedure that adjusts pixel values within a 3 × 3 matrix inside a 5 × 5 kernel and the centre pixel alone within a 3 × 3 kernel, respectively. It has been applied to thousands of images obtained from the NECTAR facility at the FRM II in Garching, Germany, without any need for visual inspection. In essence, the procedure identifies and corrects asymmetric intensity distributions locally, recording each treatment of a pixel. The basic idea of the algorithm is to search for local asymmetry and then correct it, rather than to replace individually identified pixels. The efficiency of the proposed algorithm is demonstrated on a severely spoiled example of neutron radiography and tomography, compared with median filtering (the most convenient alternative approach) by visual inspection, histogram, and power spectrum analysis.
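
    The local-asymmetry idea can be sketched in a few lines. The following is a hypothetical, simplified illustration, not the BAM algorithm itself: a pixel is flagged when it deviates from the median of its 5 × 5 neighbourhood by far more than the neighbourhood's own spread (its median absolute deviation), and flagged pixels are replaced by that median, iterating until no pixel changes.

```python
from statistics import median

def remove_stochastic_artefacts(img, threshold=5.0, max_iter=10):
    """Sketch of local-asymmetry artefact removal (illustrative only).

    img is a 2-D list of floats.  A pixel is flagged when it deviates
    from the median of its 5x5 neighbourhood by more than `threshold`
    times the neighbourhood's median absolute deviation; flagged pixels
    are replaced by that median.  Passes repeat until nothing changes
    or max_iter is reached.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for _ in range(max_iter):
        prev = [row[:] for row in out]      # read from a snapshot, write to out
        changed = False
        for y in range(h):
            for x in range(w):
                # 5x5 window, clamped at the image borders
                win = [prev[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                       for dy in range(-2, 3) for dx in range(-2, 3)]
                med = median(win)
                mad = median([abs(v - med) for v in win]) + 1e-9
                if abs(prev[y][x] - med) > threshold * mad:
                    out[y][x] = med         # correct the local asymmetry
                    changed = True
        if not changed:
            break
    return out
```

    A single hot pixel in an otherwise smooth image is restored to the local median in one pass, while genuine extended structure, which shifts the neighbourhood median itself, is left untouched.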

  11. Floral Morphogenesis: Stochastic Explorations of a Gene Network Epigenetic Landscape

    PubMed Central

    Aldana, Maximino; Benítez, Mariana; Cortes-Poza, Yuriria; Espinosa-Soto, Carlos; Hartasánchez, Diego A.; Lotto, R. Beau; Malkin, David; Escalera Santos, Gerardo J.; Padilla-Longoria, Pablo

    2008-01-01

    In contrast to the classical view of development as a preprogrammed and deterministic process, recent studies have demonstrated that stochastic perturbations of highly non-linear systems may underlie the emergence and stability of biological patterns. Herein, we address the question of whether noise contributes to the generation of the stereotypical temporal pattern in gene expression during flower development. We modeled the regulatory network of organ identity genes in the Arabidopsis thaliana flower as a stochastic system. This network has previously been shown to converge to ten fixed-point attractors, each with gene expression arrays that characterize inflorescence cells and primordial cells of sepals, petals, stamens, and carpels. The network used is binary, and the logical rules that govern its dynamics are grounded in experimental evidence. We introduced different levels of uncertainty in the updating rules of the network. Interestingly, for noise levels of around 0.5–10%, the system exhibited a sequence of transitions among attractors that mimics the sequence of gene activation configurations observed in real flowers. We also implemented the gene regulatory network as a continuous system using the Glass model of differential equations, which can be considered a first approximation of kinetic-reaction equations, though it is not necessarily equivalent to the Boolean model. Interestingly, the Glass dynamics recover a temporal sequence of attractors that is qualitatively similar, although not identical, to that obtained using the Boolean model. Thus, time ordering in the emergence of cell-fate patterns is not an artifact of synchronous updating in the Boolean model. Our model therefore provides a novel explanation for the emergence and robustness of the ubiquitous temporal pattern of floral organ specification. It also constitutes a new approach to understanding morphogenesis, providing predictions on the population dynamics of cells with different genetic configurations during development. PMID:18978941
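
    The noisy updating scheme can be sketched generically. The two-gene toggle network below is purely illustrative (it is not the Arabidopsis organ-identity network), and flipping each gene's deterministic Boolean outcome with probability eta is one common way to implement such rule noise: it lets the system hop between attractors that would be absorbing under deterministic synchronous updating.

```python
import random

def noisy_boolean_step(state, rules, eta, rng):
    """One synchronous update of a Boolean network in which each
    gene's deterministic outcome is flipped with probability eta."""
    new = []
    for rule in rules:
        v = rule(state)
        if rng.random() < eta:
            v = 1 - v            # stochastic perturbation of the rule's outcome
        new.append(v)
    return tuple(new)

# Toy two-gene toggle switch (purely illustrative): each gene represses
# the other, giving two fixed-point attractors, (1, 0) and (0, 1).
toggle_rules = [lambda s: 1 - s[1], lambda s: 1 - s[0]]
```

    With eta = 0 the state (1, 0) is a fixed point forever; with a small eta the trajectory occasionally escapes and, via the transient cycle between (0, 0) and (1, 1), can settle into the other attractor, a minimal analogue of the attractor transitions reported for the floral network.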

  12. Auditory Emotional Cues Enhance Visual Perception

    ERIC Educational Resources Information Center

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  13. Motor Skill Learning in Children with Developmental Coordination Disorder

    ERIC Educational Resources Information Center

    Bo, Jin; Lee, Chi-Mei

    2013-01-01

    Children with Developmental Coordination Disorder (DCD) are characterized as having motor difficulties and learning impairment that may last well into adolescence and adulthood. Although behavioral deficits have been identified in many domains such as visuo-spatial processing, kinesthetic perception, and cross-modal sensory integration, recent…

  14. Is Phonological Encoding in Naming Influenced by Literacy?

    ERIC Educational Resources Information Center

    Ventura, Paulo; Kolinsky, Regine; Querido, Jose-Luis; Fernandes, Sandra; Morais, Jose

    2007-01-01

    We examined phonological priming in illiterate adults, using a cross-modal picture-word interference task. Participants named pictures while hearing distractor words at different Stimulus Onset Asynchronies (SOAs). Ex-illiterates and university students were also tested. We specifically assessed the ability of the three populations to use…

  15. Bistable dynamics of a levitated nanoparticle (Presentation Recording)

    NASA Astrophysics Data System (ADS)

    Ricci, Francesco; Spasenovic, M.; Rica, Raúl A.; Novotny, Lukas; Quidant, Romain

    2015-08-01

    Bistable systems are ubiquitous in nature. Classical examples in chemistry and biology include relaxation kinetics in chemical reactions [1] and stochastic resonance processes such as neuron firing [2,3]. Likewise, bistable systems play a key role in signal processing and information handling at the nanoscale, giving rise to intriguing applications such as optical switches [4], coherent signal amplification [5,6] and weak force detection [5]. The interest and applicability of bistable systems are intimately connected with the complexity of their dynamics, typically due to the presence of a large number of parameters and nonlinearities, which makes appropriate modeling challenging. Alternatively, the possibility of experimentally recreating bistable systems in a clean and controlled way has recently become very appealing, but remains elusive and complicated. With this aim, we combined optical tweezers with a novel active feedback-cooling scheme to develop a well-defined opto-mechanical platform reaching unprecedented performance in terms of Q-factor, frequency stability and force sensitivity [7,8]. Our experimental system consists of a single nanoparticle levitated in high vacuum with optical tweezers, which behaves as a non-linear (Duffing) oscillator under appropriate conditions. Here, we show it to be an ideal tool for an in-depth study of bistability. We demonstrate bistability of the nanoparticle through noise-activated switching between two oscillation states, discussing our results in terms of a double-well potential model. We also show the flexibility of our system in shaping the potential at will, in order to meet the conditions prescribed by any bistable system that could then be simulated with our setup. References: [1] T. Amemiya, T. Ohmori, M. Nakaiwa, T. Yamamoto, and T. Yamaguchi, "Modeling of Nonlinear Chemical Reaction Systems and Two-Parameter Stochastic Resonance," J. Biol. Phys. 25 (1999) 73. [2] F. Moss, L. M. Ward, and W. G. Sannita, "Stochastic resonance and sensory information processing: a tutorial and review of application," Clin. Neurophysiol. 115 (2004) 267. [3] M. Platkov and M. Gruebele, "Periodic and stochastic thermal modulation of protein folding kinetics," J. Chem. Phys. 141 (2014) 035103. [4] T. Tanabe, M. Notomi, S. Mitsugi, A. Shinya, and E. Kuramochi, "Fast bistable all-optical switch and memory on a silicon photonic crystal on-chip," Opt. Lett. 30 (2005) 2575. [5] R. L. Badzey and P. Mohanty, "Coherent signal amplification in bistable nanomechanical oscillators by stochastic resonance," Nature 437 (2005) 995. [6] W. J. Venstra, H. J. R. Westra, and H. S. J. van der Zant, "Stochastic switching of cantilever motion," Nat. Commun. 4 (2013) 3624. [7] J. Gieseler, B. Deutsch, R. Quidant, and L. Novotny, "Subkelvin parametric feedback cooling of a laser-trapped nanoparticle," Phys. Rev. Lett. 109 (2012) 103603. [8] J. Gieseler, M. Spasenović, L. Novotny, and R. Quidant, "Nonlinear mode coupling and synchronization of a vacuum-trapped nanoparticle," Phys. Rev. Lett. 112 (2014) 103603.
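
    Noise-activated switching between the two states of a double-well potential, as reported for the nanoparticle, can be illustrated with a minimal overdamped Langevin simulation. All parameter values below are arbitrary assumptions chosen for illustration, not the experimental ones, and the model ignores the oscillator's inertia and nonlinear frequency pulling.

```python
import math
import random

def simulate_double_well(steps=100000, dt=1e-3, a=1.0, b=1.0,
                         noise=0.8, x0=1.0, seed=1):
    """Overdamped Langevin dynamics in the double-well potential
    V(x) = -a*x**2/2 + b*x**4/4, whose minima sit at x = +/- sqrt(a/b).

    Integrated with the Euler-Maruyama scheme; for sufficiently strong
    noise the particle hops stochastically between the two wells.
    Returns the full trajectory as a list of positions.
    """
    rng = random.Random(seed)
    x = x0
    traj = [x]
    for _ in range(steps):
        drift = a * x - b * x ** 3                          # -dV/dx
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        traj.append(x)
    return traj
```

    Without noise a particle started in one minimum stays there; with sufficient noise the trajectory hops stochastically between x ≈ +1 and x ≈ -1, the hallmark of the bistable switching described above.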

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halsted, Michelle; Wilmoth, Jared L.; Briggs, Paige A.

    Microbial communities are incredibly complex systems that dramatically and ubiquitously influence our lives. They help to shape our climate and environment, impact agriculture, drive business, and have a tremendous bearing on healthcare and physical security. Spatial confinement, as well as local variations in physical and chemical properties, affects development and interactions within microbial communities that occupy critical niches in the environment. Recent work has demonstrated the use of silicon-based microwell arrays, combined with parylene lift-off techniques, to perform both deterministic and stochastic assembly of microbial communities en masse, enabling the high-throughput screening of microbial communities for their response to growth in confined environments under different conditions. The implementation of a transparent microwell array platform can expand and improve the imaging modalities that can be used to characterize these assembled communities. In this paper, the fabrication and characterization of a next-generation transparent microwell array is described. The transparent arrays, composed of SU-8 patterned on a glass coverslip, retain the ability to use parylene lift-off by integrating a low-temperature atomic layer deposition of silicon dioxide into the fabrication process. This silicon dioxide layer prevents adhesion of the parylene material to the patterned SU-8, facilitating dry lift-off and maintaining the ability to easily assemble microbial communities within the microwells. These transparent microwell arrays can screen numerous community compositions using continuous, high-resolution imaging. Finally, the utility of the design was successfully demonstrated through the stochastic seeding and imaging of green fluorescent protein expressing Escherichia coli using both fluorescence and brightfield microscopies.

  17. Morphological Decomposition and Semantic Integration in Word Processing

    ERIC Educational Resources Information Center

    Meunier, Fanny; Longtin, Catherine-Marie

    2007-01-01

    In the present study, we looked at cross-modal priming effects produced by auditory presentation of morphologically complex pseudowords in order to investigate semantic integration during the processing of French morphologically complex items. In Experiment 1, we used as primes pseudowords consisting of a non-interpretable combination of roots and…

  18. Deconstructing the McGurk-MacDonald Illusion

    ERIC Educational Resources Information Center

    Soto-Faraco, Salvador; Alsius, Agnes

    2009-01-01

    Cross-modal illusions such as the McGurk-MacDonald effect have been used to illustrate the automatic, encapsulated nature of multisensory integration. This characterization is based in the widespread assumption that the illusory percept arising from intersensory conflict reflects only the end-product of the multisensory integration process, with…

  19. The Transitive-Unaccusative Alternation: A Cross-Modal Priming Study

    ERIC Educational Resources Information Center

    Fadlon, Julie

    2016-01-01

    The relationship between different linguistic manifestations of an eventuality-denoting concept, referred to in the literature as diatheses or voices, is well-studied in theoretical linguistics. Among researchers studying this phenomenon, it is widely agreed that there is a systematic relationship between the various diatheses of a concept.…

  20. Cross-Modal Facilitation in Speech Prosody

    ERIC Educational Resources Information Center

    Foxton, Jessica M.; Riviere, Louis-David; Barone, Pascal

    2010-01-01

    Speech prosody has traditionally been considered solely in terms of its auditory features, yet correlated visual features exist, such as head and eyebrow movements. This study investigated the extent to which visual prosodic features are able to affect the perception of the auditory features. Participants were presented with videos of a speaker…

  1. Computer Aided Training of Cognitive Processing Strategies with Developmentally Handicapped Adults.

    ERIC Educational Resources Information Center

    Ryba, Kenneth A.; And Others

    1985-01-01

    Correlational results involving 60 developmentally handicaped adults indicated that a computerized cross-modal memory game had a highly significant relationship with most cognitive and motor coordination measures. Computer aided training was not effective in improving overall cognitive functioning. There was no evidence of cognitive skills being…

  2. Cross-Modal Face Identity Aftereffects and Their Relation to Priming

    ERIC Educational Resources Information Center

    Hills, Peter J.; Elward, Rachael L.; Lewis, Michael B.

    2010-01-01

    We tested the magnitude of the face identity aftereffect following adaptation to different modes of adaptors in four experiments. The perceptual midpoint between two morphed famous faces was measured pre- and post-adaptation. Significant aftereffects were observed for visual (faces) and nonvisual adaptors (voices and names) but not nonspecific…

  3. The utility of visual analogs of central auditory tests in the differential diagnosis of (central) auditory processing disorder and attention deficit hyperactivity disorder.

    PubMed

    Bellis, Teri James; Billiet, Cassie; Ross, Jody

    2011-09-01

    Cacace and McFarland (2005) have suggested that the addition of cross-modal analogs will improve the diagnostic specificity of (C)APD (central auditory processing disorder) by ensuring that deficits observed are due to the auditory nature of the stimulus and not to supra-modal or other confounds. Others (e.g., Musiek et al, 2005) have expressed concern about the use of such analogs in diagnosing (C)APD given the uncertainty as to the degree to which cross-modal measures truly are analogous, and emphasize the nonmodularity of the CANS (central auditory nervous system) and its function, which precludes modality specificity of (C)APD. To date, no studies have examined the clinical utility of cross-modal (e.g., visual) analogs of central auditory tests in the differential diagnosis of (C)APD. This study investigated performance of children diagnosed with (C)APD, children diagnosed with ADHD (attention deficit hyperactivity disorder), and typically developing children on three diagnostic tests of central auditory function and their corresponding visual analogs. The study sought to determine whether deficits observed in the (C)APD group were restricted to the auditory modality and the degree to which the addition of visual analogs aids in the ability to differentiate among groups. An experimental repeated measures design was employed. Participants consisted of three groups of right-handed children (normal control, n=10; ADHD, n=10; (C)APD, n=7) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of disorders unrelated to their primary diagnosis. Participants in Groups 2 and 3 met current diagnostic criteria for ADHD and (C)APD. Visual analogs of three tests in common clinical use for the diagnosis of (C)APD were used (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; and Duration Patterns [Pinheiro and Musiek, 1985]). Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. ANCOVAs (analyses of covariance) were used to examine effects of group, modality, and laterality (Dichotic/Dichoptic Digits) or response condition (auditory and visual patterning). In addition, planned univariate ANCOVAs were used to examine effects of group on intratest comparison measures (REA [right-ear advantage], HLD [Humming-Labeling Differential]). Children with both ADHD and (C)APD performed more poorly overall than typically developing children on all tasks, with the (C)APD group exhibiting the poorest performance on the auditory and visual patterns tests but the ADHD and (C)APD groups performing similarly on the Dichotic/Dichoptic Digits task. However, each of the auditory and visual intratest comparison measures, when taken individually, was able to distinguish the (C)APD group from both the normal control and ADHD groups, whose performance did not differ from one another. Results underscore the importance of intratest comparison measures in the interpretation of central auditory tests (American Speech-Language-Hearing Association [ASHA], 2005; American Academy of Audiology [AAA], 2010). Results also support the "non-modular" view of (C)APD in which cross-modal deficits would be predicted based on shared neuroanatomical substrates. Finally, this study demonstrates that auditory tests alone are sufficient to distinguish (C)APD from supra-modal disorders, with cross-modal analogs adding little if anything to the differential diagnostic process. American Academy of Audiology.

  4. Dysgranular Retrosplenial Cortex Lesions in Rats Disrupt Cross-Modal Object Recognition

    ERIC Educational Resources Information Center

    Hindley, Emma L.; Nelson, Andrew J. D.; Aggleton, John P.; Vann, Seralynne D.

    2014-01-01

    The retrosplenial cortex supports navigation, with one role thought to be the integration of different spatial cue types. This hypothesis was extended by examining the integration of nonspatial cues. Rats with lesions in either the dysgranular subregion of retrosplenial cortex (area 30) or lesions in both the granular and dysgranular subregions…

  5. Phonological Encoding in Speech-Sound Disorder: Evidence from a Cross-Modal Priming Experiment

    ERIC Educational Resources Information Center

    Munson, Benjamin; Krause, Miriam O. P.

    2017-01-01

    Background: Psycholinguistic models of language production provide a framework for determining the locus of language breakdown that leads to speech-sound disorder (SSD) in children. Aims: To examine whether children with SSD differ from their age-matched peers with typical speech and language development (TD) in the ability phonologically to…

  6. Pathways from Toddler Information Processing to Adolescent Lexical Proficiency

    ERIC Educational Resources Information Center

    Rose, Susan A.; Feldman, Judith F.; Jankowski, Jeffery J.

    2015-01-01

    This study examined the relation of 3-year core information-processing abilities to lexical growth and development. The core abilities covered four domains--memory, representational competence (cross-modal transfer), processing speed, and attention. Lexical proficiency was assessed at 3 and 13 years with the Peabody Picture Vocabulary Test (PPVT)…

  7. The Colors of Anger, Envy, Fear, and Jealously: A Cross-Cultural Study.

    ERIC Educational Resources Information Center

    Hupka, Ralph B.; And Others

    1997-01-01

    Studies to what extent emotion words--anger, envy, fear, and jealousy--reminded samples of Americans, Germans, Mexicans, Poles and Russians, of 12 terms of color. Responses from 661 undergraduates suggest that cross-modal associations originate in universal human experiences and in culture-specific variables, such as language, mythology, and…

  8. Grammatical Processing of Spoken Language in Child and Adult Language Learners

    ERIC Educational Resources Information Center

    Felser, Claudia; Clahsen, Harald

    2009-01-01

    This article presents a selective overview of studies that have investigated auditory language processing in children and late second-language (L2) learners using online methods such as event-related potentials (ERPs), eye-movement monitoring, or the cross-modal priming paradigm. Two grammatical phenomena are examined in detail, children's and…

  9. Cross-Modal Interactions during Perception of Audiovisual Speech and Nonspeech Signals: An fMRI Study

    ERIC Educational Resources Information Center

    Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann

    2011-01-01

    During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided first evidence for early and preattentive phonetic/phonological encoding of the visual data stream--prior to its fusion with auditory phonological features [Hertrich,…

  10. Spatial Metaphor in Language Can Promote the Development of Cross-Modal Mappings in Children

    ERIC Educational Resources Information Center

    Shayan, Shakila; Ozturk, Ozge; Bowerman, Melissa; Majid, Asifa

    2014-01-01

    Pitch is often described metaphorically: for example, Farsi and Turkish speakers use a "thickness" metaphor (low sounds are "thick" and high sounds are "thin"), while German and English speakers use a height metaphor ("low", "high"). This study examines how child and adult speakers of Farsi,…

  11. Cross-Modal Attention-Switching Is Impaired in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Reed, Phil; McCarthy, Julia

    2012-01-01

    This investigation aimed to determine if children with ASD are impaired in their ability to switch attention between different tasks, and whether performance is further impaired when required to switch across two separate modalities (visual and auditory). Eighteen children with ASD (9-13 years old) were compared with 18 typically-developing…

  12. Behold the Voice of Wrath: Cross-Modal Modulation of Visual Attention by Anger Prosody

    ERIC Educational Resources Information Center

    Brosch, Tobias; Grandjean, Didier; Sander, David; Scherer, Klaus R.

    2008-01-01

    Emotionally relevant stimuli are prioritized in human information processing. It has repeatedly been shown that selective spatial attention is modulated by the emotional content of a stimulus. Until now, studies investigating this phenomenon have only examined "within-modality" effects, most frequently using pictures of emotional stimuli to…

  13. Infant Information Processing in Relation to Six-Year Cognitive Outcomes.

    ERIC Educational Resources Information Center

    Rose, Susan A.; And Others

    1992-01-01

    Infants' visual recognition memory (VRM) at seven months was associated with their general intelligence, language proficiency, reading and quantitative skills, and perceptual organization at six years. Infants' VRM, object permanence, and cross-modal transfer of perceptions at one year were related to their IQ and several outcomes at six years.…

  14. Eye Closure Reduces the Cross-Modal Memory Impairment Caused by Auditory Distraction

    ERIC Educational Resources Information Center

    Perfect, Timothy J.; Andrade, Jackie; Eagan, Irene

    2011-01-01

    Eyewitnesses instructed to close their eyes during retrieval recall more correct and fewer incorrect visual and auditory details. This study tested whether eye closure causes these effects through a reduction in environmental distraction. Sixty participants watched a staged event before verbally answering questions about it in the presence of…

  15. Crossmodal Semantic Priming by Naturalistic Sounds and Spoken Words Enhances Visual Sensitivity

    ERIC Educational Resources Information Center

    Chen, Yi-Chuan; Spence, Charles

    2011-01-01

    We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when…

  16. Supramodality Effects in Visual and Haptic Spatial Processes

    ERIC Educational Resources Information Center

    Cattaneo, Zaira; Vecchi, Tomaso

    2008-01-01

    In this article, the authors investigated unimodal and cross-modal processes in spatial working memory. A number of locations had to be memorized within visual or haptic matrices according to different experimental conditions known to be critical in accounting for the effects of perception on imagery. Results reveal that some characteristics of…

  17. "Like Me": A Foundation for Social Cognition

    ERIC Educational Resources Information Center

    Meltzoff, Andrew N.

    2007-01-01

    Infants represent the acts of others and their own acts in commensurate terms. They can recognize cross-modal equivalences between acts they see others perform and their own felt bodily movements. This recognition of self-other equivalences in action gives rise to interpreting others as having similar psychological states such as perceptions and…

  18. Gesture and Metaphor Comprehension: Electrophysiological Evidence of Cross-Modal Coordination by Audiovisual Stimulation

    ERIC Educational Resources Information Center

    Cornejo, Carlos; Simonetti, Franco; Ibanez, Agustin; Aldunate, Nerea; Ceric, Francisco; Lopez, Vladimir; Nunez, Rafael E.

    2009-01-01

    In recent years, studies have suggested that gestures influence comprehension of linguistic expressions, for example, eliciting an N400 component in response to a speech/gesture mismatch. In this paper, we investigate the role of gestural information in the understanding of metaphors. Event related potentials (ERPs) were recorded while…

  19. Got Rhythm...For Better and for Worse. Cross-Modal Effects of Auditory Rhythm on Visual Word Recognition

    ERIC Educational Resources Information Center

    Brochard, Renaud; Tassin, Maxime; Zagar, Daniel

    2013-01-01

    The present research aimed to investigate whether, as previously observed with pictures, background auditory rhythm would also influence visual word recognition. In a lexical decision task, participants were presented with bisyllabic visual words, segmented into two successive groups of letters, while an irrelevant strongly metric auditory…

  20. Cross-Modal Bilingualism: Language Contact as Evidence of Linguistic Transfer in Sign Bilingual Education

    ERIC Educational Resources Information Center

    Menendez, Bruno

    2010-01-01

    New positive attitudes towards language interaction in the realm of bilingualism open new horizons for sign bilingual education. Plaza-Pust and Morales-Lopez have innovatively reconceptualised a new cross-disciplinary approach to sign bilingualism, based on both sociolinguistics and psycholinguistics. According to this framework, cross-modal…

  1. Intramodal and Intermodal Functioning of Normal and LD Children

    ERIC Educational Resources Information Center

    Heath, Earl J.; Early, George H.

    1973-01-01

    Assessed were the abilities of 50 normal 5-to 9-year-old children and 30 learning disabled 7-to 9-year-old children to recognize temporal patterns presented visually and auditorially (intramodal abilities) and to vocally produce the patterns whether presentation was visual or auditory (intramodal and cross-modal abilities). (MC)

  2. Effects of Language Comprehension on Visual Processing--MEG Dissociates Early Perceptual and Late N400 Effects

    ERIC Educational Resources Information Center

    Hirschfeld, Gerrit; Zwitserlood, Pienie; Dobel, Christian

    2011-01-01

    We investigated whether and when information conveyed by spoken language impacts on the processing of visually presented objects. In contrast to traditional views, grounded-cognition posits direct links between language comprehension and perceptual processing. We used a magnetoencephalographic cross-modal priming paradigm to disentangle these…

  3. Cross-Modal Perception of Noise-in-Music: Audiences Generate Spiky Shapes in Response to Auditory Roughness in a Novel Electroacoustic Concert Setting

    PubMed Central

    Liew, Kongmeng; Lindborg, PerMagnus; Rodrigues, Ruth; Styles, Suzy J.

    2018-01-01

    Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds and collectively voted to create the shape of a visual graphic, presented as part of the audio–visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface. PMID:29515494

  4. Human and animal sounds influence recognition of body language.

    PubMed

    Van den Stock, Jan; Grèzes, Julie; de Gelder, Beatrice

    2008-11-25

    In naturalistic settings emotional events have multiple correlates and are simultaneously perceived by several sensory systems. Recent studies have shown that recognition of facial expressions is biased towards the emotion expressed by a simultaneously presented emotional expression in the voice, even if attention is directed to the face only. So far, no study has examined whether this phenomenon also applies to whole body expressions, although there is no obvious reason why this crossmodal influence would be specific to faces. Here we investigated whether perception of emotions expressed in whole body movements is influenced by affective information provided by human and by animal vocalizations. Participants were instructed to attend to the action displayed by the body and to categorize the expressed emotion. The results indicate that recognition of body language is biased towards the emotion expressed by the simultaneously presented auditory information, whether it consists of human or animal sounds. Our results show that a crossmodal influence from auditory to visual emotional information obtains for whole body video images with the facial expression blanked, and includes human as well as animal sounds.

  5. An overture to overeating: The cross-modal effects of acoustic pitch on food preferences and serving behavior.

    PubMed

    Lowe, Michael; Ringler, Christine; Haws, Kelly

    2018-04-01

    Billions of dollars are spent annually with the aim of enticing consumers to purchase food. Yet despite the prevalence of such advertising, little is known about how the actual sensation of this advertising media affects consumer behavior, including consequential choices regarding food. This paper explores the effect of acoustic pitch in food advertising, demonstrating in two studies, including a field study in a live retail environment, how the perception of pitch in advertising can impact food desirability and decisions regarding serving size. In study 1, a field study, pitch affects actual serving sizes and purchase behavior in a live, self-serve retail setting, with low pitch leading to larger serving sizes. Study 2 demonstrates how low pitch increases desire for a food product among hungry consumers, and that this effect is mediated by perceptions of size and how filling consumers believe the product will be. We discuss these results in the context of cross-modal correspondence and mental imagery. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Sounds Activate Visual Cortex and Improve Visual Discrimination

    PubMed Central

    Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.

    2014-01-01

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419

  7. Cross-modal detection using various temporal and spatial configurations.

    PubMed

    Schirillo, James A

    2011-01-01

    To better understand temporal and spatial cross-modal interactions, two signal detection experiments were conducted in which an auditory target was sometimes accompanied by an irrelevant flash of light. In the first, a psychometric function for detecting a unisensory auditory target in varying signal-to-noise ratios (SNRs) was derived. Then auditory target detection was measured while an irrelevant light was presented with light/sound stimulus onset asynchronies (SOAs) between 0 and ±700 ms. When the light preceded the sound by 100 ms or was coincident, target detection (d') improved for low SNR conditions. In contrast, for larger SOAs (350 and 700 ms), the behavioral gain resulted from a change in both d' and response criterion (β). However, when the light followed the sound, performance changed little. In the second experiment, observers detected multimodal target sounds at eccentricities of ±8° and ±24°. Sensitivity benefits occurred at both locations, with a larger change at the more peripheral location. Thus, both temporal and spatial factors affect signal detection measures, effectively parsing sensory and decision-making processes.
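The sensitivity (d') and response criterion (β) measures separated in this abstract come from standard signal detection theory. As a minimal illustration of how those two quantities are obtained from behavioral data (an assumption-level sketch, not the study's analysis code), both can be computed from the hit and false-alarm rates of a detection condition:

```python
# Sketch: standard signal-detection measures from hit/false-alarm rates.
# Not the authors' code; illustrates the d' and beta quantities only.
import math
from statistics import NormalDist


def dprime_beta(hit_rate, fa_rate):
    """Return (d', beta) for one condition of a yes/no detection task.

    d'   = z(hit) - z(fa): separation of signal and noise distributions.
    beta = likelihood ratio at the criterion: exp((z_fa^2 - z_hit^2) / 2).
    """
    z = NormalDist().inv_cdf
    z_hit, z_fa = z(hit_rate), z(fa_rate)
    return z_hit - z_fa, math.exp((z_fa ** 2 - z_hit ** 2) / 2)
```

A symmetric pair of rates (hit = 0.69, false alarm = 0.31) gives d' near 1 with an unbiased criterion (beta = 1); shifting both rates together changes beta while leaving d' roughly constant, which is how the abstract distinguishes sensitivity gains from criterion shifts.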

  8. Time-compressed spoken word primes crossmodally enhance processing of semantically congruent visual targets.

    PubMed

    Mahr, Angela; Wentura, Dirk

    2014-02-01

    Findings from three experiments support the conclusion that auditory primes facilitate the processing of related targets. In Experiments 1 and 2, we employed a crossmodal Stroop color identification task with auditory color words (as primes) and visual color patches (as targets). Responses were faster for congruent priming, in comparison to neutral or incongruent priming. This effect also emerged for different levels of time compression of the auditory primes (to 30 % and 10 % of the original length; i.e., 120 and 40 ms) and turned out to be even more pronounced under high-perceptual-load conditions (Exps. 1 and 2). In Experiment 3, target-present or -absent decisions for brief target displays had to be made, thereby ruling out response-priming processes as a cause of the congruency effects. Nevertheless, target detection (d') was increased by congruent primes (30 % compression) in comparison to incongruent or neutral primes. Our results suggest semantic object-based auditory-visual interactions, which rapidly increase the denoted target object's salience. This would apply, in particular, to complex visual scenes.

  9. Vocal and visual stimulation, congruence and lateralization affect brain oscillations in interspecies emotional positive and negative interactions.

    PubMed

    Balconi, Michela; Vanutelli, Maria Elide

    2016-01-01

    The present research explored the effect of cross-modal integration of emotional cues (auditory and visual (AV)) compared with only visual (V) emotional cues in observing interspecies interactions. Brain activity was monitored while subjects processed AV and V situations representing an emotional (positive or negative), interspecies (human-animal) interaction. Congruence (emotionally congruous or incongruous visual and auditory patterns) was also modulated. Electroencephalography (EEG) brain oscillations (from delta to beta) were analyzed, and cortical source localization (by standardized Low Resolution Brain Electromagnetic Tomography) was applied to the data. Low-frequency bands (mainly delta and theta) showed a significant increase in brain activity in response to negative compared to positive interactions within the right hemisphere. Moreover, differences were found based on stimulation type, with an increased effect for AV compared with V. Finally, the delta band supported a lateralized right dorsolateral prefrontal cortex (DLPFC) activity in response to negative and incongruous interspecies interactions, mainly for AV. The contribution of cross-modality, congruence (incongruous patterns), and lateralization (right DLPFC) in response to interspecies emotional interactions is discussed in light of a "negative lateralized effect."

  10. Cross-modal Associations between Real Tastes and Colors.

    PubMed

    Saluja, Supreet; Stevenson, Richard J

    2018-06-02

    People make reliable and consistent matches between taste and color. However, in contrast to other cross-modal correspondences, all of the research to date has used only taste words (and often color words too), potentially limiting our understanding of how taste-color matches arise. Here, participants sampled the five basic tastes, at three concentration steps, and selected their best matching color from a color-wheel. This test was repeated, and in addition, participants evaluated the valence of the taste and their color choice, as well as the qualities/intensities of the taste stimuli. Participants were then presented with taste names and asked to generate the best matching color name, as well as reporting how they made their earlier choices. Color selections were reliable and consistent, and closely followed those based on taste word matches obtained in this and prior studies. Most participants reported basing their color choices on their associated taste-object (often foods). There was marked similarity in valence between taste and color choices, and the saturation of color choices was related to tastant concentration. We discuss what drives color-taste pairings, with learning suggested as one possible mechanism.

  11. Haptic guidance of overt visual attention.

    PubMed

    List, Alexandra; Iordanescu, Lucica; Grabowecky, Marcia; Suzuki, Satoru

    2014-11-01

    Research has shown that information accessed from one sensory modality can influence perceptual and attentional processes in another modality. Here, we demonstrated a novel crossmodal influence of haptic-shape information on visual attention. Participants visually searched for a target object (e.g., an orange) presented among distractor objects, fixating the target as quickly as possible. While searching for the target, participants held (never viewed and out of sight) an item of a specific shape in their hands. In two experiments, we demonstrated that the time for the eyes to reach a target-a measure of overt visual attention-was reduced when the shape of the held item (e.g., a sphere) was consistent with the shape of the visual target (e.g., an orange), relative to when the held shape was unrelated to the target (e.g., a hockey puck) or when no shape was held. This haptic-to-visual facilitation occurred despite the fact that the held shapes were not predictive of the visual targets' shapes, suggesting that the crossmodal influence occurred automatically, reflecting shape-specific haptic guidance of overt visual attention.

  12. Cross-Modal Perception of Noise-in-Music: Audiences Generate Spiky Shapes in Response to Auditory Roughness in a Novel Electroacoustic Concert Setting.

    PubMed

    Liew, Kongmeng; Lindborg, PerMagnus; Rodrigues, Ruth; Styles, Suzy J

    2018-01-01

    Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds and collectively voted to create the shape of a visual graphic, presented as part of the audio-visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface.

  13. Prevailing theories of consciousness are challenged by novel cross-modal associations acquired between subliminal stimuli.

    PubMed

    Scott, Ryan B; Samaha, Jason; Chrisley, Ron; Dienes, Zoltan

    2018-06-01

    While theories of consciousness differ substantially, the 'conscious access hypothesis', which aligns consciousness with the global accessibility of information across cortical regions, is present in many of the prevailing frameworks. This account holds that consciousness is necessary to integrate information arising from independent functions such as the specialist processing required by different senses. We directly tested this account by evaluating the potential for associative learning between novel pairs of subliminal stimuli presented in different sensory modalities. First, pairs of subliminal stimuli were presented and then their association assessed by examining the ability of the first stimulus to prime classification of the second. In Experiments 1-4 the stimuli were word-pairs consisting of a male name preceding either a creative or uncreative profession. Participants were subliminally exposed to two name-profession pairs where one name was paired with a creative profession and the other an uncreative profession. A supraliminal task followed requiring the timed classification of one of those two professions. The target profession was preceded by either the name with which it had been subliminally paired (concordant) or the alternate name (discordant). Experiment 1 presented stimuli auditorily, Experiment 2 visually, and Experiment 3 presented names auditorily and professions visually. All three experiments revealed the same inverse priming effect with concordant test pairs associated with significantly slower classification judgements. Experiment 4 sought to establish if learning would be more efficient with supraliminal stimuli and found evidence that a different strategy is adopted when stimuli are consciously perceived. Finally, Experiment 5 replicated the unconscious cross-modal association achieved in Experiment 3 utilising non-linguistic stimuli. 
The results demonstrate the acquisition of novel cross-modal associations between stimuli which are not consciously perceived and thus challenge the global access hypothesis and those theories embracing it. Copyright © 2018. Published by Elsevier B.V.

  14. Extending the Body to Virtual Tools Using a Robotic Surgical Interface: Evidence from the Crossmodal Congruency Task

    PubMed Central

    Sengül, Ali; van Elk, Michiel; Rognini, Giulio; Aspell, Jane Elizabeth; Bleuler, Hannes; Blanke, Olaf

    2012-01-01

    The effects of real-world tool use on body or space representations are relatively well established in cognitive neuroscience. Several studies have shown, for example, that active tool use results in a facilitated integration of multisensory information in peripersonal space, i.e. the space directly surrounding the body. However, it remains unknown to what extent similar mechanisms apply to the use of virtual-robotic tools, such as those used in the field of surgical robotics, in which a surgeon may use bimanual haptic interfaces to control a surgery robot at a remote location. This paper presents two experiments in which participants used a haptic handle, originally designed for a commercial surgery robot, to control a virtual tool. The integration of multisensory information related to the virtual-robotic tool was assessed by means of the crossmodal congruency task, in which subjects responded to tactile vibrations applied to their fingers while ignoring visual distractors superimposed on the tip of the virtual-robotic tool. Our results show that active virtual-robotic tool use changes the spatial modulation of the crossmodal congruency effects, comparable to changes in the representation of peripersonal space observed during real-world tool use. Moreover, when the virtual-robotic tools were held in a crossed position, the visual distractors interfered strongly with tactile stimuli that were connected with the hand via the tool, reflecting a remapping of peripersonal space. Such remapping was not only observed when the virtual-robotic tools were actively used (Experiment 1), but also when the tools were passively held (Experiment 2). The present study extends earlier findings on the extension of peripersonal space from physical and pointing tools to virtual-robotic tools using techniques from haptics and virtual reality. 
We discuss our data with respect to learning and human factors in the field of surgical robotics and discuss the use of new technologies in the field of cognitive neuroscience. PMID:23227142

  15. Causal Inference for Cross-Modal Action Selection: A Computational Study in a Decision Making Framework.

    PubMed

    Daemi, Mehdi; Harris, Laurence R; Crawford, J Douglas

    2016-01-01

    Animals try to make sense of sensory information from multiple modalities by categorizing them into perceptions of individual or multiple external objects or internal concepts. For example, the brain constructs sensory, spatial representations of the locations of visual and auditory stimuli in the visual and auditory cortices based on retinal and cochlear stimulations. Currently, it is not known how the brain compares the temporal and spatial features of these sensory representations to decide whether they originate from the same or separate sources in space. Here, we propose a computational model of how the brain might solve such a task. We reduce the visual and auditory information to time-varying, finite-dimensional signals. We introduce controlled, leaky integrators as working memory that retains the sensory information for the limited time-course of task implementation. We propose our model within an evidence-based, decision-making framework, where the alternative plan units are saliency maps of space. A spatiotemporal similarity measure, computed directly from the unimodal signals, is suggested as the criterion to infer common or separate causes. We provide simulations that (1) validate our model against behavioral, experimental results in tasks where the participants were asked to report common or separate causes for cross-modal stimuli presented with arbitrary spatial and temporal disparities. (2) Predict the behavior in novel experiments where stimuli have different combinations of spatial, temporal, and reliability features. (3) Illustrate the dynamics of the proposed internal system. These results confirm our spatiotemporal similarity measure as a viable criterion for causal inference, and our decision-making framework as a viable mechanism for target selection, which may be used by the brain in cross-modal situations. 
Further, we suggest that a similar approach can be extended to other cognitive problems where working memory is a limiting factor, such as target selection among higher numbers of stimuli and selections among other modality combinations.
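The working-memory component this abstract describes, a "controlled, leaky integrator", can be sketched in a few lines. The following is an assumption-level illustration of the idea (Euler discretization, with the time constant `tau` and step `dt` chosen arbitrarily), not the authors' implementation:

```python
# Hedged sketch of a leaky integrator as working memory: the state decays
# toward zero with time constant tau while integrating its input u(t),
#     dx/dt = (-x + u(t)) / tau,
# discretized with forward-Euler steps of size dt. Parameters illustrative.

def leaky_integrate(inputs, tau=0.5, dt=0.01):
    """Return the state trajectory of a leaky integrator driven by `inputs`."""
    x, states = 0.0, []
    for u in inputs:
        x += dt * (-x + u) / tau
        states.append(x)
    return states
```

Under a constant input, the state relaxes exponentially toward the input value; once the input is removed, the trace decays, which is what makes working memory of this kind a limiting factor over long task durations, as the abstract notes.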

  16. Extending the body to virtual tools using a robotic surgical interface: evidence from the crossmodal congruency task.

    PubMed

    Sengül, Ali; van Elk, Michiel; Rognini, Giulio; Aspell, Jane Elizabeth; Bleuler, Hannes; Blanke, Olaf

    2012-01-01

    The effects of real-world tool use on body or space representations are relatively well established in cognitive neuroscience. Several studies have shown, for example, that active tool use results in a facilitated integration of multisensory information in peripersonal space, i.e. the space directly surrounding the body. However, it remains unknown to what extent similar mechanisms apply to the use of virtual-robotic tools, such as those used in the field of surgical robotics, in which a surgeon may use bimanual haptic interfaces to control a surgery robot at a remote location. This paper presents two experiments in which participants used a haptic handle, originally designed for a commercial surgery robot, to control a virtual tool. The integration of multisensory information related to the virtual-robotic tool was assessed by means of the crossmodal congruency task, in which subjects responded to tactile vibrations applied to their fingers while ignoring visual distractors superimposed on the tip of the virtual-robotic tool. Our results show that active virtual-robotic tool use changes the spatial modulation of the crossmodal congruency effects, comparable to changes in the representation of peripersonal space observed during real-world tool use. Moreover, when the virtual-robotic tools were held in a crossed position, the visual distractors interfered strongly with tactile stimuli that were connected with the hand via the tool, reflecting a remapping of peripersonal space. Such remapping was not only observed when the virtual-robotic tools were actively used (Experiment 1), but also when the tools were passively held (Experiment 2). The present study extends earlier findings on the extension of peripersonal space from physical and pointing tools to virtual-robotic tools using techniques from haptics and virtual reality. 
We discuss our data with respect to learning and human factors in the field of surgical robotics and discuss the use of new technologies in the field of cognitive neuroscience.

  17. Cross-modal cueing effects of visuospatial attention on conscious somatosensory perception.

    PubMed

    Doruk, Deniz; Chanes, Lorena; Malavera, Alejandra; Merabet, Lotfi B; Valero-Cabré, Antoni; Fregni, Felipe

    2018-04-01

    The impact of visuospatial attention on perception with supraliminal stimuli and stimuli at the threshold of conscious perception has been previously investigated. In this study, we assess the cross-modal effects of visuospatial attention on conscious perception for near-threshold somatosensory stimuli applied to the face. Fifteen healthy participants completed two sessions of a near-threshold cross-modality cue-target discrimination/conscious detection paradigm. Each trial began with an endogenous visuospatial cue that predicted the location of a weak near-threshold electrical pulse delivered to the right or left cheek with high probability (∼75%). Participants then completed two tasks: first, a forced-choice somatosensory discrimination task (felt once or twice?) and then, a somatosensory conscious detection task (did you feel the stimulus and, if yes, where (left/right)?). Somatosensory discrimination was evaluated with the response reaction times of correctly detected targets, whereas the somatosensory conscious detection was quantified using perceptual sensitivity (d') and response bias (beta). A 2 × 2 repeated measures ANOVA was used for statistical analysis. In the somatosensory discrimination task (1st task), participants were significantly faster in responding to correctly detected targets (p < 0.001). In the somatosensory conscious detection task (2nd task), a significant effect of visuospatial attention on response bias (p = 0.008) was observed, suggesting that participants had a less strict criterion for stimuli preceded by spatially valid than invalid visuospatial cues. We showed that spatial attention has the potential to modulate the discrimination and the conscious detection of near-threshold somatosensory stimuli as measured, respectively, by a reduction of reaction times and a shift in response bias toward less conservative responses when the cue predicted stimulus location. 
A shift in response bias indicates possible effects of spatial attention on internal decision processes. The lack of significant results in perceptual sensitivity (d') could be due to weaker effects of endogenous attention on perception.

  18. Dynamics of Tumor Heterogeneity Derived from Clonal Karyotypic Evolution.

    PubMed

    Laughney, Ashley M; Elizalde, Sergi; Genovese, Giulio; Bakhoum, Samuel F

    2015-08-04

    Numerical chromosomal instability is a ubiquitous feature of human neoplasms. Due to experimental limitations, fundamental characteristics of karyotypic changes in cancer are poorly understood. Using an experimentally inspired stochastic model, based on the potency and chromosomal distribution of oncogenes and tumor suppressor genes, we show that cancer cells have evolved to exist within a narrow range of chromosome missegregation rates that optimizes phenotypic heterogeneity and clonal survival. Departure from this range reduces clonal fitness and limits subclonal diversity. Mapping of the aneuploid fitness landscape reveals a highly favorable, commonly observed, near-triploid state onto which evolving diploid- and tetraploid-derived populations spontaneously converge, albeit at a much lower fitness cost for the latter. Finally, by analyzing 1,368 chromosomal translocation events in five human cancers, we find that karyotypic evolution also shapes chromosomal translocation patterns by selecting for more oncogenic derivative chromosomes. Thus, chromosomal instability can generate the heterogeneity required for Darwinian tumor evolution. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
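The core ingredient of the clonal-evolution model described above, stochastic chromosome missegregation with lethal nullisomy, can be sketched in a few lines. This is a deliberately minimal toy (parameter values and the population cap are assumptions for illustration), not the authors' fitness-mapped model:

```python
# Toy sketch of stochastic chromosome missegregation (not the paper's model):
# at each division every chromosome mis-segregates with probability p_misseg,
# giving one daughter an extra copy and the other one fewer; any daughter
# with zero copies of a chromosome (nullisomy) dies.
import random


def divide(karyotype, p_misseg, rng):
    """Return the two daughter karyotypes produced by one cell division."""
    d1, d2 = list(karyotype), list(karyotype)
    for i, n in enumerate(karyotype):
        if rng.random() < p_misseg:
            d1[i], d2[i] = n + 1, n - 1
    return d1, d2


def simulate(n_chrom=23, p_misseg=0.01, generations=10, seed=1):
    rng = random.Random(seed)
    pop = [[2] * n_chrom]                  # diploid founder cell
    for _ in range(generations):
        nxt = []
        for cell in pop:
            for d in divide(cell, p_misseg, rng):
                if min(d) > 0:             # nullisomic daughters die
                    nxt.append(d)
        pop = nxt[:2000]                   # crude cap on population size
    return pop
```

Sweeping `p_misseg` in a model of this shape is one way to see the abstract's point that very low rates generate little karyotypic heterogeneity while very high rates impose a survival cost through nullisomy.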

  19. Parsimonious Continuous Time Random Walk Models and Kurtosis for Diffusion in Magnetic Resonance of Biological Tissue

    NASA Astrophysics Data System (ADS)

    Ingo, Carson; Sui, Yi; Chen, Yufen; Parrish, Todd; Webb, Andrew; Ronen, Itamar

    2015-03-01

    In this paper, we provide a context for the modeling approaches that have been developed to describe non-Gaussian diffusion behavior, which is ubiquitous in diffusion weighted magnetic resonance imaging of water in biological tissue. Subsequently, we focus on the formalism of the continuous time random walk theory to extract properties of subdiffusion and superdiffusion through novel simplifications of the Mittag-Leffler function. For the case of time-fractional subdiffusion, we compute the kurtosis for the Mittag-Leffler function, which provides both a connection and physical context to the much-used approach of diffusional kurtosis imaging. We provide Monte Carlo simulations to illustrate the concepts of anomalous diffusion as stochastic processes of the random walk. Finally, we demonstrate the clinical utility of the Mittag-Leffler function as a model to describe tissue microstructure through estimations of subdiffusion and kurtosis with diffusion MRI measurements in the brain of a chronic ischemic stroke patient.
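The Monte Carlo illustration of anomalous diffusion mentioned above can be sketched with a minimal continuous time random walk: unit jumps separated by power-law (Pareto) waiting times with index alpha < 1, so the mean waiting time diverges and the mean squared displacement grows roughly as t^alpha rather than t. This is an assumption-level toy, not the authors' simulation code:

```python
# CTRW sketch: waiting-time density psi(t) ~ t^(-1-alpha), sampled by
# inverse transform from a Pareto law with minimum wait 1, plus +/-1 jumps.
import random


def ctrw_position(t_max, alpha, rng):
    """Position of one CTRW walker at time t_max."""
    t, x = 0.0, 0
    while True:
        # Pareto waiting time >= 1; (1 - u) avoids u == 0 edge case.
        t += (1.0 - rng.random()) ** (-1.0 / alpha)
        if t > t_max:
            return x
        x += rng.choice((-1, 1))


def msd(t_max, alpha=0.5, n_walkers=2000, seed=0):
    """Ensemble mean squared displacement at time t_max."""
    rng = random.Random(seed)
    return sum(ctrw_position(t_max, alpha, rng) ** 2
               for _ in range(n_walkers)) / n_walkers
```

For alpha = 0.5, increasing the observation time by a factor of 16 should increase the MSD by roughly a factor of 4 (16^0.5), not 16, which is the subdiffusive signature the Mittag-Leffler formalism quantifies.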

  20. Lévy flight movements prevent extinctions and maximize population abundances in fragile Lotka-Volterra systems.

    PubMed

    Dannemann, Teodoro; Boyer, Denis; Miramontes, Octavio

    2018-04-10

    Multiple-scale mobility is ubiquitous in nature and has become instrumental for understanding and modeling animal foraging behavior. However, the impact of individual movements on the long-term stability of populations remains largely unexplored. We analyze deterministic and stochastic Lotka-Volterra systems, where mobile predators consume scarce resources (prey) confined in patches. In fragile systems (that is, those unfavorable to species coexistence), the predator species has a maximized abundance and is resilient to degraded prey conditions when individual mobility is multiple scaled. Within the Lévy flight model, highly superdiffusive foragers rarely encounter prey patches and go extinct, whereas normally diffusing foragers tend to proliferate within patches, causing extinctions by overexploitation. Lévy flights of intermediate index allow a sustainable balance between patch exploitation and regeneration over wide ranges of demographic rates. Our analytical and simulated results can explain field observations and suggest that scale-free random movements are an important mechanism by which entire populations adapt to scarcity in fragmented ecosystems.
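The movement kernel at the heart of the Lévy flight model above can be sketched directly: step lengths drawn from a power law P(l) ~ l^(-mu), with small mu giving heavy-tailed, multiple-scale moves and large mu approaching normal diffusion. The sampling below is an assumption-level illustration (inverse-transform Pareto with minimum step 1), not the authors' code:

```python
# Lévy flight sketch: power-law step lengths P(l) ~ l^(-mu) for mu > 1,
# sampled by inverse transform; directions are chosen at random (1-D here).
import random


def levy_step(mu, rng, l_min=1.0):
    """One step length with P(l) ~ l^(-mu), l >= l_min (requires mu > 1)."""
    # (1 - u) lies in (0, 1], avoiding a zero-division at u == 0.
    return l_min * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0))


def trajectory(n_steps, mu, seed=0):
    """1-D walk with Lévy-distributed step lengths and random directions."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        x += rng.choice((-1.0, 1.0)) * levy_step(mu, rng)
        path.append(x)
    return path
```

Intermediate exponents (mu near 2) mix many short, local steps with occasional long relocations, which is the balance between patch exploitation and patch-to-patch movement the abstract identifies as sustainable.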

  1. Sensing Size through Clustering in Non-Equilibrium Membranes and the Control of Membrane-Bound Enzymatic Reactions

    PubMed Central

    Vagne, Quentin; Turner, Matthew S.; Sens, Pierre

    2015-01-01

    The formation of dynamical clusters of proteins is ubiquitous in cellular membranes and is in part regulated by the recycling of membrane components. We show, using stochastic simulations and analytic modeling, that the out-of-equilibrium cluster size distribution of membrane components undergoing continuous recycling is strongly influenced by lateral confinement. This result has significant implications for the clustering of plasma membrane proteins whose mobility is hindered by cytoskeletal “corrals” and for protein clustering in cellular organelles of limited size that generically support material fluxes. We show how the confinement size can be sensed through its effect on the size distribution of clusters of membrane heterogeneities and propose that this could be regulated to control the efficiency of membrane-bound reactions. To illustrate this, we study a chain of enzymatic reactions sensitive to membrane protein clustering. The reaction efficiency is found to be a non-monotonic function of the system size, and can be optimal for sizes comparable to those of cellular organelles. PMID:26656912
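A stochastic simulation in the spirit of the one described above can be sketched with a Gillespie-style loop for a single confined patch: monomers are injected at rate `k_in`, any pair of clusters merges at rate `k_agg` per pair, and whole clusters are recycled (removed) at rate `k_out` per cluster. All rates and the model structure here are illustrative assumptions, not the paper's model:

```python
# Gillespie-style sketch of cluster growth with recycling in one patch.
# Events: monomer injection, pairwise cluster merging, cluster removal.
import random


def simulate_clusters(t_max=200.0, k_in=1.0, k_agg=0.05, k_out=0.1, seed=0):
    """Return the list of cluster sizes present at time t_max."""
    rng = random.Random(seed)
    clusters, t = [], 0.0
    while t < t_max:
        n = len(clusters)
        rates = [k_in, k_agg * n * (n - 1) / 2, k_out * n]
        total = sum(rates)
        t += rng.expovariate(total)            # exponential waiting time
        r = rng.random() * total
        if r < rates[0]:                       # inject a monomer
            clusters.append(1)
        elif r < rates[0] + rates[1]:          # merge two random clusters
            i, j = rng.sample(range(n), 2)
            clusters[i] += clusters[j]
            clusters.pop(j)
        else:                                  # recycle a random cluster
            clusters.pop(rng.randrange(n))
    return clusters
```

Because the pairwise merging rate scales with the number of clusters coexisting in the patch, confinement size enters the steady-state cluster size distribution, which is the sensing mechanism the abstract proposes.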

  2. Nuclear magnetic relaxation by the dipolar EMOR mechanism: General theory with applications to two-spin systems.

    PubMed

    Chang, Zhiwei; Halle, Bertil

    2016-02-28

    In aqueous systems with immobilized macromolecules, including biological tissue, the longitudinal spin relaxation of water protons is primarily induced by exchange-mediated orientational randomization (EMOR) of intra- and intermolecular magnetic dipole-dipole couplings. We have embarked on a systematic program to develop, from the stochastic Liouville equation, a general and rigorous theory that can describe relaxation by the dipolar EMOR mechanism over the full range of exchange rates, dipole coupling strengths, and Larmor frequencies. Here, we present a general theoretical framework applicable to spin systems of arbitrary size with symmetric or asymmetric exchange. So far, the dipolar EMOR theory is only available for a two-spin system with symmetric exchange. Asymmetric exchange, when the spin system is fragmented by the exchange, introduces new and unexpected phenomena. Notably, the anisotropic dipole couplings of non-exchanging spins break the axial symmetry in spin Liouville space, thereby opening up new relaxation channels in the locally anisotropic sites, including longitudinal-transverse cross relaxation. Such cross-mode relaxation operates only at low fields; at higher fields it becomes nonsecular, leading to an unusual inverted relaxation dispersion that splits the extreme-narrowing regime into two sub-regimes. The general dipolar EMOR theory is illustrated here by a detailed analysis of the asymmetric two-spin case, for which we present relaxation dispersion profiles over a wide range of conditions as well as analytical results for integral relaxation rates and time-dependent spin modes in the zero-field and motional-narrowing regimes. 
The general theoretical framework presented here will enable a quantitative analysis of frequency-dependent water-proton longitudinal relaxation in model systems with immobilized macromolecules and, ultimately, will provide a rigorous link between relaxation-based magnetic resonance image contrast and molecular parameters.

  3. Nuclear magnetic relaxation by the dipolar EMOR mechanism: General theory with applications to two-spin systems

    NASA Astrophysics Data System (ADS)

    Chang, Zhiwei; Halle, Bertil

    2016-02-01

    In aqueous systems with immobilized macromolecules, including biological tissue, the longitudinal spin relaxation of water protons is primarily induced by exchange-mediated orientational randomization (EMOR) of intra- and intermolecular magnetic dipole-dipole couplings. We have embarked on a systematic program to develop, from the stochastic Liouville equation, a general and rigorous theory that can describe relaxation by the dipolar EMOR mechanism over the full range of exchange rates, dipole coupling strengths, and Larmor frequencies. Here, we present a general theoretical framework applicable to spin systems of arbitrary size with symmetric or asymmetric exchange. So far, the dipolar EMOR theory is only available for a two-spin system with symmetric exchange. Asymmetric exchange, when the spin system is fragmented by the exchange, introduces new and unexpected phenomena. Notably, the anisotropic dipole couplings of non-exchanging spins break the axial symmetry in spin Liouville space, thereby opening up new relaxation channels in the locally anisotropic sites, including longitudinal-transverse cross relaxation. Such cross-mode relaxation operates only at low fields; at higher fields it becomes nonsecular, leading to an unusual inverted relaxation dispersion that splits the extreme-narrowing regime into two sub-regimes. The general dipolar EMOR theory is illustrated here by a detailed analysis of the asymmetric two-spin case, for which we present relaxation dispersion profiles over a wide range of conditions as well as analytical results for integral relaxation rates and time-dependent spin modes in the zero-field and motional-narrowing regimes. 
The general theoretical framework presented here will enable a quantitative analysis of frequency-dependent water-proton longitudinal relaxation in model systems with immobilized macromolecules and, ultimately, will provide a rigorous link between relaxation-based magnetic resonance image contrast and molecular parameters.
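The stochastic Liouville equation on which the EMOR theory is built has, in its generic form, the structure below (schematic notation for illustration, not the paper's own conventions):

```latex
% Schematic stochastic Liouville equation: the spin density operator
% \rho(\Omega, t) evolves under an orientation-dependent Liouvillian
% \hat{L}(\Omega) plus a Markov operator \hat{\Gamma}_{\Omega} describing
% exchange-mediated orientational randomization.
\frac{\partial \rho(\Omega, t)}{\partial t}
  = -\,i\,\hat{L}(\Omega)\,\rho(\Omega, t)
  \;+\; \hat{\Gamma}_{\Omega}\,\rho(\Omega, t)
```

Solving this equation over the full range of exchange rates, rather than only in the motional-narrowing limit, is what distinguishes the EMOR approach from standard perturbative relaxation theory.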

  4. Development of transparent microwell arrays for optical monitoring and dissection of microbial communities

    DOE PAGES

    Halsted, Michelle; Wilmoth, Jared L.; Briggs, Paige A.; ...

    2016-09-29

    Microbial communities are incredibly complex systems that dramatically and ubiquitously influence our lives. They help to shape our climate and environment, impact agriculture, drive business, and have a tremendous bearing on healthcare and physical security. Spatial confinement, as well as local variations in physical and chemical properties, affects development and interactions within microbial communities that occupy critical niches in the environment. Recent work has demonstrated the use of silicon based microwell arrays, combined with parylene lift-off techniques, to perform both deterministic and stochastic assembly of microbial communities en masse, enabling the high-throughput screening of microbial communities for their response to growth in confined environments under different conditions. The implementation of a transparent microwell array platform can expand and improve the imaging modalities that can be used to characterize these assembled communities. In this paper, the fabrication and characterization of a next generation transparent microwell array is described. The transparent arrays, composed of SU-8 patterned on a glass coverslip, retain the ability to use parylene lift-off by integrating a low temperature atomic layer deposition of silicon dioxide into the fabrication process. This silicon dioxide layer prevents adhesion of the parylene material to the patterned SU-8, facilitating dry lift-off and maintaining the ability to easily assemble microbial communities within the microwells. These transparent microwell arrays can screen numerous community compositions using continuous, high-resolution imaging. Finally, the utility of the design was successfully demonstrated through the stochastic seeding and imaging of green fluorescent protein expressing Escherichia coli using both fluorescence and brightfield microscopies.

  5. The impact of Pleistocene glaciation across the range of a widespread European coastal species.

    PubMed

    Wilson, Anthony B; Eigenmann Veraguth, Iris

    2010-10-01

    There is a growing consensus that much of the contemporary phylogeography of northern hemisphere coastal taxa reflects the impact of Pleistocene glaciation, when glaciers covered much of the coastline at higher latitudes and sea levels dropped by as much as 150 m. The genetic signature of postglacial recolonization has been detected in many marine species, but the effects of coastal glaciation are not ubiquitous, leading to suggestions that species may intrinsically differ in their ability to respond to the environmental change associated with glacial cycles. Such variation may indeed have a biological basis, but apparent differences in population structure among taxa may also stem from our heavy reliance on individual mitochondrial loci, which are strongly influenced by stochasticity during coalescence. We investigated the contemporary population genetics of Syngnathus typhle, one of the most widespread European coastal fish species, using a multilocus data set to investigate the influence of Pleistocene glaciation and reduced sea levels on its phylogeography. A strong signal of postglacial recolonization was detected at both the northern and eastern ends of the species' distribution, while southern populations appear to have been relatively unaffected by the last glacial cycle. Patterns of population variation and differentiation at nuclear and mitochondrial loci differ significantly, but simulations indicate that these differences can be explained by the stochastic nature of the coalescent process. These results demonstrate the strength of a multilocus approach to phylogeography and suggest that an overdependence on mitochondrial loci may provide a misleading picture of population-level processes. © 2010 Blackwell Publishing Ltd.
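The coalescent stochasticity invoked above can be made concrete by simulating the time to the most recent common ancestor (TMRCA) independently at many loci: under the standard Kingman coalescent, the interval during which k lineages persist is exponential with rate k(k-1)/2 in coalescent units, and the resulting TMRCA scatters widely from locus to locus even under a single demographic history. A minimal sketch (sample size and locus count are illustrative):

```python
import random

def tmrca(n_samples, rng):
    """Time to the most recent common ancestor of n sampled lineages,
    in coalescent units: a sum of exponential waiting times with rate
    k*(k-1)/2 while k lineages remain (Kingman coalescent)."""
    t = 0.0
    for k in range(n_samples, 1, -1):
        t += rng.expovariate(k * (k - 1) / 2)
    return t

rng = random.Random(1)
# A single-locus (e.g. mitochondrial-style) estimate...
single_locus = tmrca(10, rng)
# ...versus the spread across 200 independent nuclear-style loci.
many_loci = [tmrca(10, rng) for _ in range(200)]
mean_t = sum(many_loci) / len(many_loci)
# Expectation is 2*(1 - 1/n) = 1.8 for n = 10, but single loci scatter widely.
```

The wide spread of `many_loci` around `mean_t` is exactly why inference from one mitochondrial locus can mislead, and why the multilocus comparison in the study has more power.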

  6. Cardiac Position Sensitivity Study in the Electrocardiographic Forward Problem Using Stochastic Collocation and Boundary Element Methods

    PubMed Central

    Swenson, Darrell J.; Geneser, Sarah E.; Stinstra, Jeroen G.; Kirby, Robert M.; MacLeod, Rob S.

    2012-01-01

    The electrocardiogram (ECG) is ubiquitously employed as a diagnostic and monitoring tool for patients experiencing cardiac distress and/or disease. It is widely known that changes in heart position resulting from, for example, posture of the patient (sitting, standing, lying) and respiration significantly affect the body-surface potentials; however, few studies have quantitatively and systematically evaluated the effects of heart displacement on the ECG. The goal of this study was to evaluate the impact of positional changes of the heart on the ECG in the specific clinical setting of myocardial ischemia. To carry out the necessary comprehensive sensitivity analysis, we applied a relatively novel and highly efficient statistical approach, the generalized polynomial chaos-stochastic collocation method, to a boundary element formulation of the electrocardiographic forward problem, and we drove these simulations with measured epicardial potentials from whole-heart experiments. Results of the analysis identified regions on the body-surface where the potentials were especially sensitive to realistic heart motion. The standard deviation (STD) of ST-segment voltage changes caused by the apex of a normal heart, swinging forward and backward or side-to-side was approximately 0.2 mV. Variations were even larger, 0.3 mV, for a heart exhibiting elevated ischemic potentials. These variations could be large enough to mask or to mimic signs of ischemia in the ECG. Our results suggest possible modifications to ECG protocols that could reduce the diagnostic error related to postural changes in patients possibly suffering from myocardial ischemia. PMID:21909818
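Stochastic collocation of the kind used above replaces brute-force random sampling with a few deterministic quadrature nodes in the uncertain parameter (here, heart displacement), running the forward model once per node and reconstructing output statistics. A toy sketch with a made-up scalar "forward model" standing in for the boundary element solver (the model and parameter values are entirely hypothetical):

```python
import numpy as np

def collocation_stats(model, mu, sigma, n_nodes=7):
    """Mean and standard deviation of model(x) for x ~ Normal(mu, sigma),
    via Gauss-Hermite quadrature (physicists' convention):
    E[f] = (1/sqrt(pi)) * sum_i w_i * f(mu + sqrt(2)*sigma*x_i)."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    vals = model(mu + np.sqrt(2.0) * sigma * nodes)
    mean = (weights @ vals) / np.sqrt(np.pi)
    second = (weights @ vals ** 2) / np.sqrt(np.pi)
    return mean, np.sqrt(max(second - mean ** 2, 0.0))

# Hypothetical forward model: ST-segment voltage as a smooth function
# of a normally distributed heart displacement x.
model = lambda x: 0.2 * x + 0.05 * x ** 2
mean, std = collocation_stats(model, mu=0.0, sigma=1.0)
# Exact values for this polynomial model: mean = 0.05, std = sqrt(0.045).
```

Because the quadrature is exact for low-order polynomials, a handful of solver runs per uncertain parameter recovers the output statistics that Monte Carlo would need thousands of runs to estimate.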

  7. Mode Analyses of Gyrokinetic Simulations of Plasma Microturbulence

    NASA Astrophysics Data System (ADS)

    Hatch, David R.

    This thesis presents analysis of the excitation and role of damped modes in gyrokinetic simulations of plasma microturbulence. In order to address this question, mode decompositions are used to analyze gyrokinetic simulation data. A mode decomposition can be constructed by projecting a nonlinearly evolved gyrokinetic distribution function onto a set of linear eigenmodes, or alternatively by constructing a proper orthogonal decomposition (POD) of the distribution function. POD decompositions are used to examine the role of damped modes in saturating ion temperature gradient (ITG) driven turbulence. In order to identify the contribution of different modes to the energy sources and sinks, numerical diagnostics for a gyrokinetic energy quantity were developed for the GENE code. The use of these energy diagnostics in conjunction with POD mode decompositions demonstrates that ITG turbulence saturates largely through dissipation by damped modes at the same perpendicular spatial scales as those of the driving instabilities. This defines a picture of turbulent saturation that is very different from both traditional hydrodynamic scenarios and also many common theories for the saturation of plasma turbulence. POD mode decompositions are also used to examine the role of subdominant modes in causing magnetic stochasticity in electromagnetic gyrokinetic simulations. It is shown that the magnetic stochasticity, which appears to be ubiquitous in electromagnetic microturbulence, is caused largely by subdominant modes with tearing parity. The application of higher-order singular value decomposition (HOSVD) to the full distribution function from gyrokinetic simulations is presented. This is an effort to demonstrate the ability to characterize and extract insight from a very large, complex, and high-dimensional data set: the 5-D (plus time) gyrokinetic distribution function.
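The proper orthogonal decomposition used throughout the thesis reduces, in practice, to a singular value decomposition of a snapshot matrix, with the squared singular values giving each mode's share of the energy. A minimal sketch on synthetic data (not GENE output):

```python
import numpy as np

rng = np.random.default_rng(0)
# Snapshot matrix: each column is one time snapshot of a field sampled
# at 64 spatial points, built from two coherent "modes" plus weak noise
# so that a genuine low-rank structure exists.
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 10, 200)
A = (np.outer(np.sin(x), np.cos(2 * t))
     + 0.3 * np.outer(np.sin(3 * x), np.sin(5 * t))
     + 0.01 * rng.standard_normal((64, 200)))

# POD = SVD of the snapshot matrix: columns of U are spatial POD modes,
# s**2 are the (unnormalized) modal energies, sorted in decreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
energy_frac = s ** 2 / np.sum(s ** 2)

# A rank-2 truncation captures nearly all of the energy in this example.
A2 = U[:, :2] * s[:2] @ Vt[:2, :]
rel_err = np.linalg.norm(A - A2) / np.linalg.norm(A)
```

Projecting a turbulence simulation onto the leading and subdominant columns of `U` is what allows the energy sources and sinks of individual modes, including damped ones, to be tracked separately.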

  8. Sight-Reading Expertise: Cross-Modality Integration Investigated Using Eye Tracking

    ERIC Educational Resources Information Center

    Drai-Zerbib, Veronique; Baccino, Thierry; Bigand, Emmanuel

    2012-01-01

    It is often said that experienced musicians are capable of hearing what they read (and vice versa). This suggests that they are able to process and to integrate multimodal information. The present study investigates this issue with an eye-tracking technique. Two groups of musicians chosen on the basis of their level of expertise (experts,…

  9. Scan Patterns Predict Sentence Production in the Cross-Modal Processing of Visual Scenes

    ERIC Educational Resources Information Center

    Coco, Moreno I.; Keller, Frank

    2012-01-01

    Most everyday tasks involve multiple modalities, which raises the question of how the processing of these modalities is coordinated by the cognitive system. In this paper, we focus on the coordination of visual attention and linguistic processing during speaking. Previous research has shown that objects in a visual scene are fixated before they…

  10. The Crossmodal Facilitation of Visual Object Representations by Sound: Evidence from the Backward Masking Paradigm

    ERIC Educational Resources Information Center

    Chen, Yi-Chuan; Spence, Charles

    2011-01-01

    We report a series of experiments designed to demonstrate that the presentation of a sound can facilitate the identification of a concomitantly presented visual target letter in the backward masking paradigm. Two visual letters, serving as the target and its mask, were presented successively at various interstimulus intervals (ISIs). The results…

  11. Phonological and Sensory Short-Term Memory Are Correlates and Both Affected in Developmental Dyslexia

    ERIC Educational Resources Information Center

    Laasonen, Marja; Virsu, Veijo; Oinonen, Suvi; Sandbacka, Mirja; Salakari, Anita; Service, Elisabet

    2012-01-01

    We investigated whether poor short-term memory (STM) in developmental dyslexia affects the processing of sensory stimulus sequences in addition to phonological material. STM for brief binary non-verbal stimuli (light flashes, tone bursts, finger touches, and their crossmodal combinations) was studied in 20 Finnish adults with dyslexia and 24…

  12. Music to My Eyes: Cross-Modal Interactions in the Perception of Emotions in Musical Performance

    ERIC Educational Resources Information Center

    Vines, Bradley W.; Krumhansl, Carol L.; Wanderley, Marcelo M.; Dalca, Ioana M.; Levitin, Daniel J.

    2011-01-01

    We investigate non-verbal communication through expressive body movement and musical sound, to reveal higher cognitive processes involved in the integration of emotion from multiple sensory modalities. Participants heard, saw, or both heard and saw recordings of a Stravinsky solo clarinet piece, performed with three distinct expressive styles:…

  13. Local and Global Cross-Modal Influences between Vision and Hearing, Tasting, Smelling, or Touching

    ERIC Educational Resources Information Center

    Forster, Jens

    2011-01-01

    It is suggested that the distinction between global versus local processing styles exists across sensory modalities. Activation of one way of processing in one modality should affect processing styles in a different modality. In 12 studies, auditory, haptic, gustatory or olfactory global versus local processing was induced, and participants were…

  14. Examining Lateralized Lexical Ambiguity Processing Using Dichotic and Cross-Modal Tasks

    ERIC Educational Resources Information Center

    Atchley, Ruth Ann; Grimshaw, Gina; Schuster, Jonathan; Gibson, Linzi

    2011-01-01

    The individual roles played by the cerebral hemispheres during the process of language comprehension have been extensively studied in tasks that require individuals to read text (for review see Jung-Beeman, 2005). However, it is not clear whether or not some aspects of the theorized laterality models of semantic comprehension are a result of the…

  15. Specific Patterns of Emotion Recognition from Faces in Children with ASD: Results of a Cross-Modal Matching Paradigm

    ERIC Educational Resources Information Center

    Golan, Ofer; Gordon, Ilanit; Fichman, Keren; Keinan, Giora

    2018-01-01

    Children with ASD show emotion recognition difficulties, as part of their social communication deficits. We examined facial emotion recognition (FER) in intellectually disabled children with ASD and in younger typically developing (TD) controls, matched on mental age. Our emotion-matching paradigm employed three different modalities: facial, vocal…

  16. The Time-Course of Auditory and Visual Distraction Effects in a New Crossmodal Paradigm

    ERIC Educational Resources Information Center

    Bendixen, Alexandra; Grimm, Sabine; Deouell, Leon Y.; Wetzel, Nicole; Madebach, Andreas; Schroger, Erich

    2010-01-01

    Vision often dominates audition when attentive processes are involved (e.g., the ventriloquist effect), yet little is known about the relative potential of the two modalities to initiate a "break through of the unattended". The present study was designed to systematically compare the capacity of task-irrelevant auditory and visual events to…

  17. Evidence for a Specific Cross-Modal Association Deficit in Dyslexia: An Electrophysiological Study of Letter-Speech Sound Processing

    ERIC Educational Resources Information Center

    Froyen, Dries; Willems, Gonny; Blomert, Leo

    2011-01-01

    The phonological deficit theory of dyslexia assumes that degraded speech sound representations might hamper the acquisition of stable letter-speech sound associations necessary for learning to read. However, there is only scarce and mainly indirect evidence for this assumed letter-speech sound association problem. The present study aimed at…

  18. The Real-Time Processing of Sluiced Sentences

    ERIC Educational Resources Information Center

    Poirier, Josee; Wolfinger, Katie; Spellman, Lisa; Shapiro, Lewis P.

    2010-01-01

    Ellipsis refers to an element that is absent from the input but whose meaning can nonetheless be recovered from context. In this cross-modal priming study, we examined the online processing of Sluicing, an ellipsis whose antecedent is an entire clause: "The handyman threw a book to the programmer but I don't know which book" the handyman threw to…

  19. A Quantitative Review of the Effect of Computerized Testing on the Measurement of Social Desirability.

    ERIC Educational Resources Information Center

    Dwight, Stephen A.; Feigelson, Melissa E.

    2000-01-01

    Conducted a meta-analysis to determine the extent to which the computer administration of a measure influences socially desirable responding. Discusses implications of the findings about impression management in terms of how they contribute to the explication of the construct of social desirability and cross-mode equivalence. (Author/SLD)

  20. Functionally Specific Oscillatory Activity Correlates between Visual and Auditory Cortex in the Blind

    ERIC Educational Resources Information Center

    Schepers, Inga M.; Hipp, Joerg F.; Schneider, Till R.; Roder, Brigitte; Engel, Andreas K.

    2012-01-01

    Many studies have shown that the visual cortex of blind humans is activated in non-visual tasks. However, the electrophysiological signals underlying this cross-modal plasticity are largely unknown. Here, we characterize the neuronal population activity in the visual and auditory cortex of congenitally blind humans and sighted controls in a…

  1. The Effects of Cross-Modality and Level of Self-Regulated Learning on Knowledge Acquisition with Smartpads

    ERIC Educational Resources Information Center

    Lee, Hye Yeon; Lee, Hyeon Woo

    2018-01-01

    Recently, there has been a transition from traditional paper or computer-based learning environments to smartpad-based learning environments, which are based on touch and involve various cognitive strategies such as touch operation and note taking. Accordingly, the use of smartpads can provide an effective learning environment through…

  2. ERP Evidence of Early Cross-Modal Links between Auditory Selective Attention and Visuo-Spatial Memory

    ERIC Educational Resources Information Center

    Bomba, Marie D.; Singhal, Anthony

    2010-01-01

    Previous dual-task research pairing complex visual tasks involving non-spatial cognitive processes during dichotic listening have shown effects on the late component (Ndl) of the negative difference selective attention waveform but no effects on the early (Nde) response suggesting that the Ndl, but not the Nde, is affected by non-spatial…

  3. Contributions of Response Set and Semantic Relatedness to Cross-Modal Stroop-Like Picture--Word Interference in Children and Adults

    ERIC Educational Resources Information Center

    Hanauer, John B.; Brooks, Patricia J.

    2005-01-01

    Resistance to interference from irrelevant auditory stimuli undergoes development throughout childhood. To test whether semantic processes account for age-related changes in a Stroop-like picture-word interference effect, children (3-to 12-year-olds) and adults named pictures while listening to words varying in terms of semantic relatedness to the…

  4. Phonological Priming with Nonwords in Children with and without Specific Language Impairment

    ERIC Educational Resources Information Center

    Brooks, Patricia J.; Seiger-Gardner, Liat; Obeid, Rita; MacWhinney, Brian

    2015-01-01

    Purpose: The cross-modal picture-word interference task is used to examine contextual effects on spoken-word production. Previous work has documented lexical-phonological interference in children with specific language impairment (SLI) when a related distractor (e.g., bell) occurs prior to a picture to be named (e.g., a bed). In the current study,…

  5. Predictions about Bisymmetry and Cross-Modal Matches from Global Theories of Subjective Intensities

    ERIC Educational Resources Information Center

    Luce, R. Duncan

    2012-01-01

    The article first summarizes the assumptions of Luce (2004, 2008) for inherently binary (2-D) stimuli (e.g., the ears and eyes) that lead to a "p-additive," order-preserving psychophysical representation. Next, a somewhat parallel theory for unary (1-D) signals is developed for intensity attributes such as linear extent, vibration to finger, and…

  6. Crossmodal and Incremental Perception of Audiovisual Cues to Emotional Speech

    ERIC Educational Resources Information Center

    Barkhuysen, Pashiera; Krahmer, Emiel; Swerts, Marc

    2010-01-01

    In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues from a speaker's face to emotion relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments reported below are based on tests…

  7. The Lexical Status of the Root in Processing Morphologically Complex Words in Arabic

    ERIC Educational Resources Information Center

    Shalhoub-Awwad, Yasmin; Leikin, Mark

    2016-01-01

    This study investigated the effects of the Arabic root in the visual word recognition process among young readers in order to explore its role in reading acquisition and its development within the structure of the Arabic mental lexicon. We examined cross-modal priming of words that were derived from the same root of the target…

  8. Hand Movement Deviations in a Visual Search Task with Cross Modal Cuing

    ERIC Educational Resources Information Center

    Aslan, Asli; Aslan, Hurol

    2007-01-01

    The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants'…

  9. Crossmodal Congruency Benefits of Tactile and Visual Signalling

    DTIC Science & Technology

    2013-11-12

    …modal information format seemed to produce faster and more accurate performance. The question of learning complex tactile communication signals… We conducted an experiment in which tactile messages were created based on five common military arm and hand signals. We compared response times and accuracy rates of novice individuals responding to visual and tactile representations of these messages, which were

  10. Tidal, daily, and lunar-day activity cycles in the marine polychaete Nereis virens.

    PubMed

    Last, Kim S; Bailhache, Thierry; Kramer, Cas; Kyriacou, Charalambos P; Rosato, Ezio; Olive, Peter J W

    2009-02-01

    The burrow emergence activity of the wild caught ragworm Nereis virens Sars associated with food prospecting was investigated under various photoperiodic (LD) and simulated tidal cycles (STC) using a laboratory based actograph. Just over half (57%) of the animals under LD with STC displayed significant tidal (approximately 12.4 h) and/or lunar-day (approximately 24.8 h) activity patterns. Under constant light (LL) plus a STC, 25% of all animals were tidal, while one animal responded with a circadian (24.2 h) activity rhythm suggestive of cross-modal entrainment where the environmental stimulus of one period entrains rhythmic behavior of a different period. All peaks of activity under a STC, apart from that of the individual cross-modal entrainment case, coincided with the period of tank flooding. Under only LD without a STC, 49% of the animals showed nocturnal (approximately 24 h) activity. When animals were maintained under free-running LL conditions, 15% displayed significant rhythmicity with circatidal and circadian/circalunidian periodicities. Although activity cycles in N. virens at the population level are robust, at the individual level they are particularly labile, suggesting complex biological clock-control with multiple clock output pathways.
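Rhythms like the ~12.4 h tidal period reported above are typically extracted from actograph records with a periodogram. A minimal FFT sketch on synthetic activity data (the signal amplitude and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(3)
hours = np.arange(0, 30 * 24)            # 30 days of hourly activity bins
tidal_period = 12.4                      # circatidal period, in hours
activity = (1.0 + np.cos(2 * np.pi * hours / tidal_period)
            + 0.5 * rng.standard_normal(hours.size))

# Periodogram: dominant nonzero frequency of the demeaned record.
spec = np.abs(np.fft.rfft(activity - activity.mean())) ** 2
freqs = np.fft.rfftfreq(hours.size, d=1.0)   # cycles per hour
peak = np.argmax(spec[1:]) + 1               # skip the DC bin
detected_period = 1.0 / freqs[peak]          # should land near 12.4 h
```

With both ~12.4 h and ~24 h components present, as in the mixed tidal/nocturnal records described above, the spectrum simply shows two peaks, which is how the tidal and daily patterns are separated at the individual level.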

  11. Modality distribution of sensory neurons in the feline caudate nucleus and the substantia nigra.

    PubMed

    Márkus, Zita; Eördegh, Gabriella; Paróczy, Zsuzsanna; Benedek, G; Nagy, A

    2008-09-01

    Despite extensive analysis of the motor functions of the basal ganglia and the fact that multisensory information processing appears critical for the execution of their behavioral action, little is known concerning the sensory functions of the caudate nucleus (CN) and the substantia nigra (SN). In the present study, we set out to describe the sensory modality distribution and to determine the proportions of multisensory units within the CN and the SN. The separate single sensory modality tests demonstrated that a majority of the neurons responded to only one modality, so that they seemed to be unimodal. In contrast with these findings, a large proportion of these neurons exhibited significant multisensory cross-modal interactions. Thus, these neurons should also be classified as multisensory. Our results suggest that a surprisingly high proportion of sensory neurons in the basal ganglia are multisensory, and demonstrate that an analysis without a consideration of multisensory cross-modal interactions may strongly underrepresent the number of multisensory units. We conclude that a majority of the sensory neurons in the CN and SN process multisensory information and only a minority of these units are clearly unimodal.

  12. Sounds activate visual cortex and improve visual discrimination.

    PubMed

    Feng, Wenfeng; Störmer, Viola S; Martinez, Antigona; McDonald, John J; Hillyard, Steven A

    2014-07-16

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. Copyright © 2014 the authors.

  13. Neural correlates of cross-modal affective priming by music in Williams syndrome.

    PubMed

    Lense, Miriam D; Gordon, Reyna L; Key, Alexandra P F; Dykens, Elisabeth M

    2014-04-01

    Emotional connection is the main reason people engage with music, and the emotional features of music can influence processing in other domains. Williams syndrome (WS) is a neurodevelopmental genetic disorder where musicality and sociability are prominent aspects of the phenotype. This study examined oscillatory brain activity during a musical affective priming paradigm. Participants with WS and age-matched typically developing controls heard brief emotional musical excerpts or emotionally neutral sounds and then reported the emotional valence (happy/sad) of subsequently presented faces. Participants with WS demonstrated greater evoked fronto-central alpha activity to the happy vs sad musical excerpts. The size of these alpha effects correlated with parent-reported emotional reactivity to music. Although participant groups did not differ in accuracy of identifying facial emotions, reaction time data revealed a music priming effect only in persons with WS, who responded faster when the face matched the emotional valence of the preceding musical excerpt vs when the valence differed. Matching emotional valence was also associated with greater evoked gamma activity thought to reflect cross-modal integration. This effect was not present in controls. The results suggest a specific connection between music and socioemotional processing and have implications for clinical and educational approaches for WS.

  14. Olfactory discrimination: when vision matters?

    PubMed

    Demattè, M Luisa; Sanabria, Daniel; Spence, Charles

    2009-02-01

    Many previous studies have attempted to investigate the effect of visual cues on olfactory perception in humans. The majority of this research has only looked at the modulatory effect of color, which has typically been explained in terms of multisensory perceptual interactions. However, such crossmodal effects may equally well relate to interactions taking place at a higher level of information processing. In fact, it is well-known that semantic knowledge can have a substantial effect on people's olfactory perception. In the present study, we therefore investigated the influence of visual cues, consisting of color patches and/or shapes, on people's olfactory discrimination performance. Participants had to make speeded odor discrimination responses (lemon vs. strawberry) while viewing a red or yellow color patch, an outline drawing of a strawberry or lemon, or a combination of these color and shape cues. Even though participants were instructed to ignore the visual stimuli, our results demonstrate that the accuracy of their odor discrimination responses was influenced by visual distractors. This result shows that both color and shape information are taken into account during speeded olfactory discrimination, even when such information is completely task irrelevant, hinting at the automaticity of such higher level visual-olfactory crossmodal interactions.

  15. Plasticity of attentional functions in older adults after non-action video game training: a randomized controlled trial.

    PubMed

    Mayas, Julia; Parmentier, Fabrice B R; Andrés, Pilar; Ballesteros, Soledad

    2014-01-01

    A major goal of recent research in aging has been to examine cognitive plasticity in older adults and its capacity to counteract cognitive decline. The aim of the present study was to investigate whether older adults could benefit from brain training with video games in a cross-modal oddball task designed to assess distraction and alertness. Twenty-seven healthy older adults participated in the study (15 in the experimental group, 12 in the control group). The experimental group received 20 1-hr video game training sessions using a commercially available brain-training package (Lumosity) involving problem solving, mental calculation, working memory and attention tasks. The control group did not practice this package and, instead, attended meetings with the other members of the study several times over the course of the study. Both groups were evaluated before and after the intervention using a cross-modal oddball task measuring alertness and distraction. The results showed a significant reduction of distraction and an increase of alertness in the experimental group and no variation in the control group. These results suggest neurocognitive plasticity in the old human brain as training enhanced cognitive performance on attentional functions. ClinicalTrials.gov NCT02007616.

  16. Global inhibition and stimulus competition in the owl optic tectum

    PubMed Central

    Mysore, Shreesh P.; Asadollahi, Ali; Knudsen, Eric I.

    2010-01-01

    Stimulus selection for gaze and spatial attention involves competition among stimuli across sensory modalities and across all of space. We demonstrate that such cross-modal, global competition takes place in the intermediate and deep layers of the optic tectum, a structure known to be involved in gaze control and attention. A variety of either visual or auditory stimuli located anywhere outside of a neuron's receptive field (RF) were shown to suppress or completely eliminate responses to a visual stimulus located inside the RF in nitrous oxide sedated owls. The essential mechanism underlying this stimulus competition is global, divisive inhibition. Unlike the effect of the classical inhibitory surround, which decreases with distance from the RF center and shapes neuronal responses to individual stimuli, global inhibition acts across the entirety of space and modulates responses primarily in the context of multiple stimuli. Whereas the source of this global inhibition is as yet unknown, our data indicate that different networks mediate the classical surround and global inhibition. We hypothesize that this global, cross-modal inhibition, which acts automatically in a bottom-up fashion even in sedated animals, is critical to the creation of a map of stimulus salience in the optic tectum. PMID:20130182
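The "global, divisive inhibition" described above corresponds to a divisive-normalization response model, in which a competitor anywhere in space, and in any modality, scales the driven response down multiplicatively rather than subtractively. A toy instance (the constants are illustrative, not fitted to the owl data):

```python
def normalized_response(drive_in_rf, competitor_drives,
                        r_max=100.0, sigma=10.0, k=1.0):
    """Divisive normalization: the response to the stimulus inside the
    receptive field is divided by a term that grows with the summed
    strength of all competing stimuli, wherever they are located."""
    inhibition = sigma + k * sum(competitor_drives)
    return r_max * drive_in_rf / (drive_in_rf + inhibition)

alone = normalized_response(20.0, [])
with_competitor = normalized_response(20.0, [40.0])
# The same in-RF stimulus evokes a weaker response once a strong
# competing stimulus (visual or auditory) appears outside the RF.
```

Because the competitor enters only the denominator, the suppression it produces does not depend on its distance from the RF, which is the signature distinguishing this global inhibition from the classical inhibitory surround.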

  17. Cross-modal interactions for custard desserts differ in obese and normal weight Italian women.

    PubMed

    Proserpio, Cristina; Laureati, Monica; Invitti, Cecilia; Pasqualinotto, Lucia; Bergamaschi, Valentina; Pagliarini, Ella

    2016-05-01

    The effects of variation in odors and thickening agents on sensory properties and acceptability of a model custard dessert were investigated in normal weight and obese women. Subjects rated their liking and the intensity of sensory properties (sweetness, vanilla and butter flavors, and creaminess) of 3 block samples (the first varied in vanilla aroma, the second varied in butter aroma and the third varied in xanthan gum). Significant differences were found in acceptability and intensity ratings in relation to body mass index. The addition of butter aroma in the custard was the most effective way to elicit odor-taste, odor-flavor and odor-texture interactions in obese women. In this group, butter aroma, signaling energy dense products, increased the perception of sweetness, vanilla flavor and creaminess, which are all desirable properties in a custard, while maintaining a high liking degree. Understanding cross-modal interactions in relation to nutritional status is interesting in order to develop new food products with reduced sugar and fat, that are still satisfying for the consumer. This could have important implications to reduce caloric intake and tackle the obesity epidemic. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Does Shape Discrimination by the Mouth Activate the Parietal and Occipital Lobes? – Near-Infrared Spectroscopy Study

    PubMed Central

    Kagawa, Tomonori; Narita, Noriyuki; Iwaki, Sunao; Kawasaki, Shingo; Kamiya, Kazunobu; Minakuchi, Shunsuke

    2014-01-01

    A cross-modal association between somatosensory tactile sensation and parietal and occipital activities during Braille reading was initially discovered in tests with blind subjects, with sighted and blindfolded healthy subjects used as controls. However, the neural background of oral stereognosis remains unclear. In the present study, we investigated whether the parietal and occipital cortices are activated during shape discrimination by the mouth using functional near-infrared spectroscopy (fNIRS). Following presentation of the test piece shape, a sham discrimination trial without the test pieces induced posterior parietal lobe (BA7), extrastriate cortex (BA18, BA19), and striate cortex (BA17) activation as compared with the rest session, while shape discrimination of the test pieces markedly activated those areas as compared with the rest session. Furthermore, shape discrimination of the test pieces specifically activated the posterior parietal cortex (precuneus/BA7), extrastriate cortex (BA18, 19), and striate cortex (BA17), as compared with sham sessions without a test piece. We concluded that oral tactile sensation is recognized through tactile/visual cross-modal substrates in the parietal and occipital cortices during shape discrimination by the mouth. PMID:25299397

  19. Does shape discrimination by the mouth activate the parietal and occipital lobes? - near-infrared spectroscopy study.

    PubMed

    Kagawa, Tomonori; Narita, Noriyuki; Iwaki, Sunao; Kawasaki, Shingo; Kamiya, Kazunobu; Minakuchi, Shunsuke

    2014-01-01

    A cross-modal association between somatosensory tactile sensation and parietal and occipital activities during Braille reading was initially discovered in tests with blind subjects, with sighted and blindfolded healthy subjects used as controls. However, the neural background of oral stereognosis remains unclear. In the present study, we investigated whether the parietal and occipital cortices are activated during shape discrimination by the mouth using functional near-infrared spectroscopy (fNIRS). Following presentation of the test piece shape, a sham discrimination trial without the test pieces induced posterior parietal lobe (BA7), extrastriate cortex (BA18, BA19), and striate cortex (BA17) activation as compared with the rest session, while shape discrimination of the test pieces markedly activated those areas as compared with the rest session. Furthermore, shape discrimination of the test pieces specifically activated the posterior parietal cortex (precuneus/BA7), extrastriate cortex (BA18, 19), and striate cortex (BA17), as compared with sham sessions without a test piece. We concluded that oral tactile sensation is recognized through tactile/visual cross-modal substrates in the parietal and occipital cortices during shape discrimination by the mouth.

  20. Automated cross-modal mapping in robotic eye/hand systems using plastic radial basis function networks

    NASA Astrophysics Data System (ADS)

    Meng, Qinggang; Lee, M. H.

    2007-03-01

    Advanced autonomous artificial systems will need incremental learning and adaptive abilities similar to those seen in humans. Knowledge from biology, psychology and neuroscience is now inspiring new approaches for systems that have sensory-motor capabilities and operate in complex environments. Eye/hand coordination is an important cross-modal cognitive function, and is also typical of many of the other coordinations that must be involved in the control and operation of embodied intelligent systems. This paper examines a biologically inspired approach for incrementally constructing compact mapping networks for eye/hand coordination. We present a simplified node-decoupled extended Kalman filter for radial basis function networks, and compare this with other learning algorithms. An experimental system consisting of a robot arm and a pan-and-tilt head with a colour camera is used to produce results and test the algorithms in this paper. We also present three approaches for adapting to structural changes during eye/hand coordination tasks, and the robustness of the algorithms under noise is investigated. The learning and adaptation approaches in this paper have similarities with current ideas about neural growth in the brains of humans and animals during tool-use, and infants during early cognitive development.

  1. Are 6-month-old human infants able to transfer emotional information (happy or angry) from voices to faces? An eye-tracking study.

    PubMed

    Palama, Amaya; Malsert, Jennifer; Gentaz, Edouard

    2018-01-01

    The present study examined whether 6-month-old infants could transfer amodal information (i.e. information independent of sensory modality) from emotional voices to emotional faces. Thus, sequences of successive emotional stimuli passing from one sensory modality (an auditory voice) to another (a visual face), corresponding to a cross-modal transfer, were displayed to 24 infants. Each sequence presented a single emotional (angry or happy) or neutral voice, followed by the simultaneous presentation of two static emotional faces (angry or happy, congruous or incongruous with the emotional voice). Eye movements in response to the visual stimuli were recorded with an eye-tracker. First, results suggested no difference in infants' looking time to the happy or angry face after listening to the neutral voice or the angry voice. Nevertheless, after listening to the happy voice, infants looked longer at the incongruent angry face (the mouth area in particular) than at the congruent happy face. These results reveal that a cross-modal transfer (from the auditory to the visual modality) is possible for 6-month-old infants only after the presentation of a happy voice, suggesting that they recognize this emotion amodally.

  2. Nutrient loads exported from managed catchments reveal emergent biogeochemical stationarity

    NASA Astrophysics Data System (ADS)

    Basu, Nandita B.; Destouni, Georgia; Jawitz, James W.; Thompson, Sally E.; Loukinova, Natalia V.; Darracq, Amélie; Zanardo, Stefano; Yaeger, Mary; Sivapalan, Murugesu; Rinaldo, Andrea; Rao, P. Suresh C.

    2010-12-01

    Complexity of heterogeneous catchments poses challenges in predicting biogeochemical responses to human alterations and stochastic hydro-climatic drivers. Human interferences and climate change may have contributed to the demise of hydrologic stationarity, but our synthesis of a large body of observational data suggests that anthropogenic impacts have also resulted in the emergence of effective biogeochemical stationarity in managed catchments. Long-term monitoring data from the Mississippi-Atchafalaya River Basin (MARB) and the Baltic Sea Drainage Basin (BSDB) reveal that inter-annual variations in loads ($L_T$) for total-N (TN) and total-P (TP) exported from a catchment are dominantly controlled by discharge ($Q_T$), leading inevitably to temporal invariance of the annual, flow-weighted concentration, $\overline{C_f} = L_T/Q_T$. Emergence of this consistent pattern across diverse managed catchments is attributed to the anthropogenic legacy of accumulated nutrient sources generating memory, similar to ubiquitously present sources for geogenic constituents that also exhibit a linear $L_T$-$Q_T$ relationship. These responses are characteristic of transport-limited systems. In contrast, in the absence of legacy sources in less-managed catchments, $\overline{C_f}$ values were highly variable and supply limited. We offer a theoretical explanation for the observed patterns at the event scale, and extend it to consider the stochastic nature of rainfall/flow patterns at annual scales. Our analysis suggests that: (1) expected inter-annual variations in $L_T$ can be robustly predicted given discharge variations arising from hydro-climatic or anthropogenic forcing, and (2) water-quality problems in receiving inland and coastal waters would persist until the accumulated storages of nutrients have been substantially depleted. These findings have notable implications for catchment management to mitigate adverse water-quality impacts, and for the acceleration of global biogeochemical cycles.
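
The flow-weighted concentration in this abstract is a simple ratio; a minimal sketch with made-up annual figures (not the MARB/BSDB data) shows why a linear load-discharge relationship implies a year-to-year invariant C_f:

```python
def flow_weighted_concentration(load, discharge):
    """Annual flow-weighted concentration C_f = L_T / Q_T."""
    return load / discharge

# Hypothetical annual discharges (km^3/yr). With legacy nutrient stores the
# exported load scales linearly with discharge (transport-limited regime),
# so C_f barely varies between wet and dry years.
discharges = [400.0, 550.0, 300.0, 620.0]
loads = [0.8 * q for q in discharges]  # linear L_T-Q_T relationship
cf = [flow_weighted_concentration(l, q) for l, q in zip(loads, discharges)]
print(cf)  # each year's C_f is ~0.8: effective biogeochemical stationarity
```

In a supply-limited catchment, by contrast, `loads` would not track `discharges`, and `cf` would vary strongly between years.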

  3. Pore‐Scale Hydrodynamics in a Progressively Bioclogged Three‐Dimensional Porous Medium: 3‐D Particle Tracking Experiments and Stochastic Transport Modeling

    PubMed Central

    Carrel, M.; Dentz, M.; Derlon, N.; Morgenroth, E.

    2018-01-01

    Abstract Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3‐D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean‐squared displacements, are found to be non‐Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered. PMID:29780184

  4. Pore-scale hydrodynamics in a progressively bio-clogged three-dimensional porous medium: 3D particle tracking experiments and stochastic transport modelling

    NASA Astrophysics Data System (ADS)

    Morales, V. L.; Carrel, M.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.

    2017-12-01

    Biofilms are ubiquitous bacterial communities growing in various porous media, including soils, trickling and sand filters, and are relevant for applications such as the degradation of pollutants for bioremediation, waste water treatment or drinking water production. By their development, biofilms dynamically change the structure of porous media, increasing the heterogeneity of the pore network and the non-Fickian or anomalous dispersion. In this work, we use an experimental approach to investigate the influence of biofilm growth on pore scale hydrodynamics and transport processes and propose a correlated continuous time random walk model capturing these observations. We perform three-dimensional particle tracking velocimetry at four different time points from 0 to 48 hours of biofilm growth. The biofilm growth notably impacts pore-scale hydrodynamics, as shown by a strong increase in the average velocity and in the tailing of the Lagrangian velocity probability density functions. Additionally, the spatial correlation length of the flow increases substantially. This points to the formation of preferential flow pathways and stagnation zones, which ultimately leads to an increase of anomalous transport in the porous media considered, characterized by non-Fickian scaling of the mean-squared displacements and non-Gaussian displacement probability density functions. A gamma distribution provides a remarkable approximation of the bulk and the high tail of the Lagrangian pore-scale velocity magnitude, indicating a transition from a parallel pore arrangement towards a more serial one. Finally, a correlated continuous time random walk based on a stochastic velocity relaxation model accurately reproduces the observations and could be used to predict transport beyond the time scales accessible to the experiment.

  5. Pore-Scale Hydrodynamics in a Progressively Bioclogged Three-Dimensional Porous Medium: 3-D Particle Tracking Experiments and Stochastic Transport Modeling

    NASA Astrophysics Data System (ADS)

    Carrel, M.; Morales, V. L.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.

    2018-03-01

    Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered.
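
The velocity statistics and superdiffusive spreading described above can be caricatured in a few lines. This is only a sketch with assumed parameters (`rho` and the gamma shape/scale are illustrative), not the authors' correlated CTRW parameterization: step velocities are gamma-distributed and persist from step to step with probability `rho`, which is enough to produce faster-than-linear growth of the mean-squared displacement at early times:

```python
import random
random.seed(1)

def correlated_walk_msd(n_particles=2000, n_steps=50, rho=0.8,
                        shape=2.0, scale=1.0):
    """Random walk whose step velocity has a gamma-distributed magnitude and
    is kept from one step to the next with probability rho (velocity
    correlation); returns the ensemble mean-squared displacement per step."""
    msd = [0.0] * n_steps
    for _ in range(n_particles):
        x = 0.0
        v = random.gammavariate(shape, scale) * random.choice([-1, 1])
        for s in range(n_steps):
            if random.random() > rho:  # relaxation event: redraw the velocity
                v = random.gammavariate(shape, scale) * random.choice([-1, 1])
            x += v
            msd[s] += x * x
    return [m / n_particles for m in msd]

msd = correlated_walk_msd()
# Persistent (correlated) velocities give superdiffusive early-time growth:
# the MSD rises roughly ballistically for ~1/(1-rho) steps, then linearly.
print(msd[0], msd[9], msd[-1])
```

Increasing `rho` plays the role of the growing velocity correlation length under bioclogging: the ballistic regime lengthens and spreading becomes more intensely superdiffusive.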

  6. An alternate protocol to achieve stochastic and deterministic resonances

    NASA Astrophysics Data System (ADS)

    Tiwari, Ishant; Dave, Darshil; Phogat, Richa; Khera, Neev; Parmananda, P.

    2017-10-01

    Periodic and aperiodic stochastic resonance (SR) and deterministic resonance (DR) are studied in this paper. To check for the ubiquitousness of the phenomena, two unrelated systems, namely, FitzHugh-Nagumo and a particle in a bistable potential well, are studied. Instead of the conventional scenario of noise amplitude (in the case of SR) or chaotic signal amplitude (in the case of DR) variation, a tunable system parameter ("a" in the case of the FitzHugh-Nagumo model and the damping coefficient "j" in the bistable model) is regulated. The operating values of these parameters are defined as the "setpoint" of the system throughout the present work. Our results indicate that there exists an optimal value of the setpoint for which maximum information transfer between the input and the output signals takes place. This information transfer from the input sub-threshold signal to the output dynamics is quantified by the normalised cross-correlation coefficient (|CCC|). |CCC| as a function of the setpoint exhibits a unimodal variation, which is characteristic of SR (or DR). Furthermore, |CCC| is computed for a grid of noise (or chaotic signal) amplitude and setpoint values. The heat map of |CCC| over this grid yields the presence of a resonance region in the noise-setpoint plane for which the maximum enhancement of the input sub-threshold signal is observed. This resonance region could possibly be used to explain how organisms maintain their signal detection efficacy with fluctuating amounts of noise present in their environment. Interestingly, the method of regulating the setpoint without changing the noise amplitude was not able to induce coherence resonance (CR). A possible, qualitative reasoning for this is provided.
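
The |CCC| quantifier used here is a normalized (Pearson-style) cross-correlation between the sub-threshold input and the output time series. A minimal sketch for the bistable-well case, with illustrative parameters rather than the paper's actual setup:

```python
import math, random
random.seed(7)

def ccc(x, y):
    """Normalized cross-correlation coefficient |CCC| of two signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return abs(sxy / (sx * sy))

def bistable_response(noise_amp, n=20000, dt=0.01):
    """Overdamped particle in the double well V(x) = -x^2/2 + x^4/4, driven
    by a sub-threshold sinusoid plus Gaussian noise (Euler-Maruyama steps)."""
    x, xs, us = 0.1, [], []
    for i in range(n):
        u = 0.2 * math.sin(2 * math.pi * 0.05 * i * dt)  # sub-threshold drive
        x += dt * (x - x ** 3 + u) + noise_amp * math.sqrt(dt) * random.gauss(0, 1)
        us.append(u)
        xs.append(x)
    return us, xs

results = {}
for d in (0.1, 0.4, 1.5):
    u, x = bistable_response(d)
    results[d] = ccc(u, x)
    print(d, round(results[d], 3))
```

Sweeping `noise_amp` finely (or, as in the paper, the setpoint at fixed noise) and plotting |CCC| against it would trace out the unimodal resonance curve.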

  7. Early Sign Language Experience Goes along with an Increased Cross-Modal Gain for Affective Prosodic Recognition in Congenitally Deaf CI Users

    ERIC Educational Resources Information Center

    Fengler, Ineke; Delfau, Pia-Céline; Röder, Brigitte

    2018-01-01

    It is yet unclear whether congenitally deaf cochlear implant (CD CI) users' visual and multisensory emotion perception is influenced by their history in sign language acquisition. We hypothesized that early-signing CD CI users, relative to late-signing CD CI users and hearing, non-signing controls, show better facial expression recognition and…

  8. Cortical plasticity and preserved function in early blindness

    PubMed Central

    Renier, Laurent; De Volder, Anne G.; Rauschecker, Josef P.

    2013-01-01

    The “neural Darwinism” theory predicts that when one sensory modality is lacking, as in congenital blindness, the target structures are taken over by the afferent inputs from other senses that will promote and control their functional maturation (Edelman, 1993). This view receives support from both cross-modal plasticity experiments in animal models and functional imaging studies in man, which are presented here. PMID:23453908

  9. Cross-Mode Comparability of Computer-Based Testing (CBT) versus Paper-Pencil Based Testing (PPT): An Investigation of Testing Administration Mode among Iranian Intermediate EFL Learners

    ERIC Educational Resources Information Center

    Khoshsima, Hooshang; Hosseini, Monirosadat; Toroujeni, Seyyed Morteza Hashemi

    2017-01-01

    The advent of technology has caused growing interest in using computers to convert conventional paper and pencil-based testing (henceforth PPT) into computer-based testing (henceforth CBT) in the field of education during the last decades. This constant promulgation of computers to reshape the conventional tests into computerized format permeated the…

  10. A Report on Army Science Planning and Strategy 2016

    DTIC Science & Technology

    2017-06-01

    Army Research Laboratory (ARL) hosted a series of meetings in fall 2016 to develop a strategic vision for Army Science. Meeting topics were vetted...reduce maturation time. • Support internal Army research efforts to enhance Army investments in multiscale modeling to accelerate the rate of...requirement are research needs including cross-modal approaches to enabling real-time human comprehension under constraints of bandwidth, information

  11. Working memory for braille is shaped by experience.

    PubMed

    Cohen, Henri; Scherzer, Peter; Viau, Robert; Voss, Patrice; Lepore, Franco

    2011-03-01

    Tactile working memory was found to be more developed in completely blind (congenital and acquired) than in semi-sighted subjects, indicating that experience plays a crucial role in shaping working memory. A model of working memory, adapted from the classical model proposed by Baddeley and Hitch [1] and Baddeley [2], is presented where the connection strengths of a highly cross-modal network are altered through experience.

  12. Brief Report: Which Came First? Exploring Crossmodal Temporal Order Judgements and Their Relationship with Sensory Reactivity in Autism and Neurotypicals

    ERIC Educational Resources Information Center

    Poole, Daniel; Gowen, Emma; Warren, Paul A.; Poliakoff, Ellen

    2017-01-01

    Previous studies have indicated that visual-auditory temporal acuity is reduced in children with autism spectrum conditions (ASC) in comparison to neurotypicals. In the present study we investigated temporal acuity for all possible bimodal pairings of visual, tactile and auditory information in adults with ASC (n = 18) and a matched control group…

  13. The Deterministic Information Bottleneck

    NASA Astrophysics Data System (ADS)

    Strouse, D. J.; Schwab, David

    2015-03-01

    A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
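
The deterministic encoder at the heart of the DIB can be sketched on a toy joint distribution. This is a hand-rolled illustration (toy numbers, two clusters) of the hard-assignment update f(x) = argmax_t [log q(t) - beta * KL(p(y|x) || q(y|t))] described by Strouse and Schwab, not their reference implementation:

```python
import math

def kl(p, q):
    """KL divergence between discrete distributions (no zero entries in p, q)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def dib(p_x, p_y_given_x, n_clusters, beta, iters=10):
    """Deterministic information bottleneck sketch: iterate hard assignments
    f(x) = argmax_t [log q(t) - beta * KL(p(y|x) || q(y|t))]."""
    n_x, n_y = len(p_x), len(p_y_given_x[0])
    f = [x % n_clusters for x in range(n_x)]  # arbitrary initial clustering
    for _ in range(iters):
        # update cluster marginals q(t) and cluster predictors q(y|t)
        q_t = [0.0] * n_clusters
        q_y_t = [[0.0] * n_y for _ in range(n_clusters)]
        for x in range(n_x):
            q_t[f[x]] += p_x[x]
            for y in range(n_y):
                q_y_t[f[x]][y] += p_x[x] * p_y_given_x[x][y]
        for t in range(n_clusters):
            if q_t[t] > 0:
                q_y_t[t] = [v / q_t[t] for v in q_y_t[t]]
        # deterministic (hard) assignment step over non-empty clusters
        f = [max((t for t in range(n_clusters) if q_t[t] > 0),
                 key=lambda t: math.log(q_t[t])
                 - beta * kl(p_y_given_x[x], q_y_t[t]))
             for x in range(n_x)]
    return f

# Toy problem: x0,x1 predict y=0; x2,x3 predict y=1. The DIB should compress
# the four x values into two clusters that preserve predictions about y.
p_x = [0.4, 0.1, 0.1, 0.4]
p_y_x = [[0.9, 0.1], [0.9, 0.1], [0.1, 0.9], [0.1, 0.9]]
f = dib(p_x, p_y_x, n_clusters=2, beta=5.0)
print(f)  # -> [0, 0, 1, 1]: x0,x1 share one cluster, x2,x3 the other
```

The hard argmax is exactly what distinguishes the DIB from the IB: the soft encoder p(t|x) of the original method collapses onto a deterministic mapping f(x).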

  14. How mutation alters the evolutionary dynamics of cooperation on networks

    NASA Astrophysics Data System (ADS)

    Ichinose, Genki; Satotani, Yoshiki; Sayama, Hiroki

    2018-05-01

    Cooperation is ubiquitous at every level of living organisms. It is known that spatial (network) structure is a viable mechanism for cooperation to evolve. A recently proposed numerical metric, average gradient of selection (AGoS), a useful tool for interpreting and visualizing evolutionary dynamics on networks, allows simulation results to be visualized on a one-dimensional phase space. However, stochastic mutation of strategies was not considered in the analysis of AGoS. Here we extend AGoS so that it can analyze the evolution of cooperation where mutation may alter strategies of individuals on networks. We show that our extended AGoS correctly visualizes the final states of cooperation with mutation in the individual-based simulations. Our analyses revealed that mutation always has a negative effect on the evolution of cooperation regardless of the payoff functions, fraction of cooperators, and network structures. Moreover, we found that scale-free networks are the most vulnerable to mutation and thus the dynamics of cooperation are altered from bistability to coexistence on those networks, undergoing an imperfect pitchfork bifurcation.

  15. Manipulation of long-term dynamics in a colloidal active matter system using speckle light fields

    NASA Astrophysics Data System (ADS)

    Pince, Ercag; Velu, Sabareesh K. P.; Callegari, Agnese; Elahi, Parviz; Gigan, Sylvain; Volpe, Giovanni; Volpe, Giorgio

    The stochastic motion of particles within a disordered medium is a ubiquitous physical and biological phenomenon, with examples ranging from organelles performing tasks in the cytoplasm to large animals moving through patchy environments. Here, we use speckle light fields to study anomalous diffusion in an active matter system consisting of micron-sized silica particles (diameter 5 μm) and motile bacterial cells (E. coli). The speckle light fields are generated by mode mixing inside a multimode optical fiber, where only a small amount of incident laser power is needed to obtain an effective disordered optical landscape for optical manipulation. We experimentally show how complex potentials contribute to the long-term dynamics of the active matter system and observe enhanced diffusion of particles interacting with the active bacterial bath in the speckle light fields. We show that this effect can be tuned and controlled by varying the intensity and the statistical properties of the speckle pattern. Potentially, these results could be of interest for many technological applications, such as the manipulation of microparticles inside optically disordered media of biological interest.

  16. Evolution of the magnetorotational instability on initially tangled magnetic fields

    NASA Astrophysics Data System (ADS)

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.; Subramanian, Kandaswamy

    2017-12-01

    The initial magnetic field of previous magnetorotational instability (MRI) simulations has always included a significant system-scale component, even if stochastic. However, it is of conceptual and practical interest to assess whether the MRI can grow when the initial field is turbulent. The ubiquitous presence of turbulent or random flows in astrophysical plasmas generically leads to a small-scale dynamo (SSD), which would provide initial seed turbulent velocity and magnetic fields in the plasma that becomes an accretion disc. Can the MRI grow from these more realistic initial conditions? To address this, we supply a standard shearing box with isotropically forced SSD generated magnetic and velocity fields as initial conditions and remove the forcing. We find that if the initially supplied fields are too weak or too incoherent, they decay from the initial turbulent cascade faster than they can grow via the MRI. When the initially supplied fields are sufficient to allow MRI growth and sustenance, the saturated stresses, large-scale fields and power spectra match those of the standard zero net flux MRI simulation with an initial large-scale vertical field.

  17. Coordinated phenotype switching with large-scale chromosome flip-flop inversion observed in bacteria.

    PubMed

    Cui, Longzhu; Neoh, Hui-min; Iwamoto, Akira; Hiramatsu, Keiichi

    2012-06-19

    Genome inversions are ubiquitous in organisms ranging from prokaryotes to eukaryotes. Typical examples can be identified by comparing the genomes of two or more closely related organisms, where genome inversion footprints are clearly visible. Although the evolutionary implications of this phenomenon are huge, little is known about the function and biological meaning of this process. Here, we report our findings on a bacterium that generates a reversible, large-scale inversion of its chromosome (about half of its total genome) at high frequencies of up to once every four generations. This inversion switches on or off bacterial phenotypes, including colony morphology, antibiotic susceptibility, hemolytic activity, and expression of dozens of genes. Quantitative measurements and mathematical analyses indicate that this reversible switching is stochastic but self-organized so as to maintain two forms of stable cell populations (i.e., small colony variant, normal colony variant) as a bet-hedging strategy. Thus, this heritable and reversible genome fluctuation seems to govern the bacterial life cycle; it has a profound impact on the course and outcomes of bacterial infections.
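
A flip rate of up to once every four generations suggests a simple two-state master equation for the population mix; a sketch with an assumed symmetric flip probability (the real rates are phenotype-dependent) shows how the two colony variants settle into a stable blend from any starting composition:

```python
def switch_dynamics(p0, a=0.25, b=0.25, generations=100):
    """Fraction p of cells in the 'normal' chromosome orientation when each
    cell flips normal -> inverted with probability a and back with
    probability b per generation (two-state master equation)."""
    p, history = p0, [p0]
    for _ in range(generations):
        p = p * (1 - a) + (1 - p) * b
        history.append(p)
    return history

# Starting from an almost-pure population, the mix relaxes to the stable
# blend b / (a + b) = 0.5: stochastic flip-flopping maintains both variants,
# a bet-hedging strategy against environmental change.
h = switch_dynamics(0.99)
print(h[0], h[-1])
```

Unequal rates a and b would shift the stationary mix toward the slower-switching variant, but both variants persist whenever both rates are nonzero.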

  18. Random walks and diffusion on networks

    NASA Astrophysics Data System (ADS)

    Masuda, Naoki; Porter, Mason A.; Lambiotte, Renaud

    2017-11-01

    Random walks are ubiquitous in the sciences, and they are interesting from both theoretical and practical perspectives. They are one of the most fundamental types of stochastic processes; can be used to model numerous phenomena, including diffusion, interactions, and opinions among humans and animals; and can be used to extract information about important entities or dense groups of entities in a network. Random walks have been studied for many decades on both regular lattices and (especially in the last couple of decades) on networks with a variety of structures. In the present article, we survey the theory and applications of random walks on networks, restricting ourselves to simple cases of single and non-adaptive random walkers. We distinguish three main types of random walks: discrete-time random walks, node-centric continuous-time random walks, and edge-centric continuous-time random walks. We first briefly survey random walks on a line, and then we consider random walks on various types of networks. We extensively discuss applications of random walks, including ranking of nodes (e.g., PageRank), community detection, respondent-driven sampling, and opinion models such as voter models.
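
For the discrete-time walker surveyed above, the stationary occupation on a connected, non-bipartite undirected network is proportional to node degree. A small power-iteration sketch on a toy graph (not an example from the article) makes this concrete:

```python
def stationary_by_power(adj, steps=200):
    """Discrete-time random walk on an undirected graph: evolve the
    occupation distribution via p <- p P, where P[i][j] = A[i][j]/degree(i)."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    p = [1.0 / n] * n
    for _ in range(steps):
        q = [0.0] * n
        for i in range(n):
            for j in range(n):
                if adj[i][j]:
                    q[j] += p[i] / deg[i]
        p = q
    return p

# Toy graph: a triangle (0-1-2) with a pendant node 3 attached to node 0.
# The walk is irreducible and aperiodic, so p converges to the stationary
# distribution, which is proportional to node degree: [3, 2, 2, 1] / 8.
adj = [[0, 1, 1, 1],
       [1, 0, 1, 0],
       [1, 1, 0, 0],
       [1, 0, 0, 0]]
print(stationary_by_power(adj))  # ~ [0.375, 0.25, 0.25, 0.125]
```

PageRank modifies this same iteration with teleportation, which guarantees convergence even on directed or periodic networks.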

  19. The emergence of mirror-like response properties from domain-general principles in vision and audition.

    PubMed

    Saygin, Ayse P; Dick, Frederic

    2014-04-01

    Like Cook et al., we suggest that mirror neurons are a fascinating product of cross-modal learning. As predicted by an associative account, responses in motor regions are observed for novel and/or abstract visual stimuli such as point-light and android movements. Domain-specific mirror responses also emerge as a function of audiomotor expertise that is slowly acquired over years of intensive training.

  20. Working memory for braille is shaped by experience

    PubMed Central

    Scherzer, Peter; Viau, Robert; Voss, Patrice; Lepore, Franco

    2011-01-01

    Tactile working memory was found to be more developed in completely blind (congenital and acquired) than in semi-sighted subjects, indicating that experience plays a crucial role in shaping working memory. A model of working memory, adapted from the classical model proposed by Baddeley and Hitch [1] and Baddeley [2], is presented where the connection strengths of a highly cross-modal network are altered through experience. PMID:21655448

  1. Defining Reward Value by Cross-Modal Scaling

    PubMed Central

    Casey, Anna H.; Silberberg, Alan; Paukner, Annika; Suomi, Stephen J.

    2013-01-01

    Researchers in comparative psychology often use different food rewards in their studies, with food values defined by a pre-experimental preference test. While this technique rank orders food values, it provides limited information about value differences because preferences may reflect not only value differences, but also the degree to which one good may "substitute" for another (e.g., one food may substitute well for another food, but neither substitutes well for water). We propose scaling the value of food pairs by a third food that is less substitutable for either food offered in preference tests (cross-modal scaling). Here, Cebus monkeys chose between four pairwise alternatives: fruits A vs. B; cereal amount X vs. fruit A and cereal amount Y vs. fruit B, where X and Y were adjusted to produce indifference between each cereal amount and each fruit; and cereal amounts X vs. Y. When choice was between perfect substitutes (different cereal amounts), preferences were nearly absolute; so too when choice was between close substitutes (fruits); however, when choice was between fruits and cereal amounts, preferences were more modest and less likely due to substitutability. These results suggest that scaling between-good value differences in terms of a third, less-substitutable good may be better than simple preference tests in defining between-good value differences. PMID:23771492

  2. Unimodal and crossmodal working memory representations of visual and kinesthetic movement trajectories.

    PubMed

    Seemüller, Anna; Fiehler, Katja; Rösler, Frank

    2011-01-01

    The present study investigated whether visual and kinesthetic stimuli are stored as multisensory or modality-specific representations in unimodal and crossmodal working memory tasks. To this end, angle-shaped movement trajectories were presented to 16 subjects in delayed matching-to-sample tasks either visually or kinesthetically during encoding and recognition. During the retention interval, a secondary visual or kinesthetic interference task was inserted either immediately or with a delay after encoding. The modality of the interference task interacted significantly with the encoding modality. After visual encoding, memory was more impaired by a visual than by a kinesthetic secondary task, while after kinesthetic encoding the pattern was reversed. The time when the secondary task had to be performed interacted with the encoding modality as well. For visual encoding, memory was more impaired, when the secondary task had to be performed at the beginning of the retention interval. In contrast, memory after kinesthetic encoding was more affected, when the secondary task was introduced later in the retention interval. The findings suggest that working memory traces are maintained in a modality-specific format characterized by distinct consolidation processes that take longer after kinesthetic than after visual encoding. Copyright © 2010 Elsevier B.V. All rights reserved.

  3. Processing of visual food cues during bitter taste perception in female patients with binge-eating symptoms: A cross-modal ERP study.

    PubMed

    Schienle, Anne; Scharmüller, Wilfried; Schwab, Daniela

    2017-11-01

In healthy individuals, the perception of an intense bitter taste decreased the reward value of visual food cues, as reflected by the reduction of a specific event-related brain potential (ERP), frontal late positivity. The current cross-modal ERP study investigated responses of female patients with binge-eating symptoms (BES) to this type of visual-gustatory stimulation. Women with BES (n=36) and female control participants (n=38) viewed food images after they rinsed their mouth with either bitter wormwood tea or water. Relative to controls, the patients showed elevated late positivity (LPP: 400-700 ms) to the food images in the bitter condition. The LPP source was located in the medial prefrontal cortex. The two groups did not differ in their ratings of the fluids (intensity, bitterness, disgust). This ERP study showed that a bitter taste did not decrease late positivity to visual food cues (reflecting food reward) in women with BES. The atypical bitter responding might be a biological marker of this condition and possibly contributes to overeating. Future studies should additionally record food intake behavior to further investigate this mechanism. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  4. Hot and Cold Smells: Odor-Temperature Associations across Cultures

    PubMed Central

    Wnuk, Ewelina; de Valk, Josje M.; Huisman, John L. A.; Majid, Asifa

    2017-01-01

It is often assumed that odors are associated with hot and cold temperatures, since odor processing may trigger thermal sensations, such as coolness in the case of mint. It is unknown, however, whether people make consistent temperature associations for a variety of everyday odors, and, if so, what determines them. Previous work investigating the bases of cross-modal associations suggests a number of possibilities, including universal forces (e.g., perception), as well as culture-specific forces (e.g., language and cultural beliefs). In this study, we examined odor-temperature associations in three cultures—Maniq (N = 11), Thai (N = 24), and Dutch (N = 24)—who differ with respect to their cultural preoccupation with odors, their odor lexicons, and their beliefs about the relationship of odors (and odor objects) to temperature. Participants matched 15 odors to temperature by touching cups filled with hot or cold water, and described the odors in their native language. The results showed no consistent associations among the Maniq, and only a handful of consistent associations between odor and temperature among the Thai and Dutch. The consistent associations differed across the two groups, arguing against their universality. Further analysis revealed that cross-modal associations could not be explained by language, but could be the result of cultural beliefs. PMID:28848482

  5. Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity.

    PubMed

    Berger, Christopher C; Gonzalez-Franco, Mar; Tajadura-Jiménez, Ana; Florencio, Dinei; Zhang, Zhengyou

    2018-01-01

Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus with a spatio-temporally aligned visual counterpart enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
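    The interaural time differences mentioned in this abstract can be approximated with Woodworth's spherical-head formula. The sketch below is purely illustrative: the head radius and speed-of-sound values are generic assumptions, not parameters reported in the study.

    ```python
    import math

    def woodworth_itd(azimuth_deg: float, head_radius_m: float = 0.0875, c: float = 343.0) -> float:
        """Woodworth's spherical-head approximation: ITD = (a / c) * (theta + sin(theta)).

        Valid for azimuths between 0 (straight ahead) and 90 degrees (directly lateral).
        Returns the interaural time difference in seconds.
        """
        theta = math.radians(azimuth_deg)
        return head_radius_m / c * (theta + math.sin(theta))

    # ITD grows from 0 s at the midline to roughly 0.65 ms at 90 degrees azimuth.
    ```

    Generic HRTFs bake in an average of such cues; the study's point is that cross-modal (audio-visual) exposure lets listeners adapt to those averaged cues.
    
    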

  6. Cross-modal and modality-specific expectancy effects between pain and disgust

    PubMed Central

    Sharvit, Gil; Vuilleumier, Patrik; Delplanque, Sylvain; Corradi-Dell’ Acqua, Corrado

    2015-01-01

Pain sensitivity increases when a noxious stimulus is preceded by cues predicting higher intensity. However, it is unclear whether the modulation of nociception by expectancy is sensory-specific (“modality based”) or reflects the aversive-affective consequence of the upcoming event (“unpleasantness”), potentially common with other negative events. Here we compared expectancy effects for pain and disgust by using different, but equally unpleasant, nociceptive (thermal) and olfactory stimulations. Indeed both pain and disgust are aversive, associated with threat to the organism, and processed in partly overlapping brain networks. Participants saw cues predicting the unpleasantness (high/low) and the modality (pain/disgust) of upcoming thermal or olfactory stimulations, and rated the associated unpleasantness after stimulus delivery. Results showed that identical thermal stimuli were perceived as more unpleasant when preceded by cues signaling high (as opposed to low) pain. A similar expectancy effect was found for olfactory disgust. Critically, cross-modal expectancy effects were observed on inconsistent trials when thermal stimuli were preceded by high-disgust cues or olfactory stimuli preceded by high-pain cues. However, these effects were stronger in consistent than inconsistent conditions. Taken together, our results suggest that expectation of an unpleasant event elicits representations of both its modality-specific properties and its aversive consequences. PMID:26631975

  7. Plasticity of Attentional Functions in Older Adults after Non-Action Video Game Training: A Randomized Controlled Trial

    PubMed Central

    Mayas, Julia; Parmentier, Fabrice B. R.; Andrés, Pilar; Ballesteros, Soledad

    2014-01-01

A major goal of recent research in aging has been to examine cognitive plasticity in older adults and its capacity to counteract cognitive decline. The aim of the present study was to investigate whether older adults could benefit from brain training with video games in a cross-modal oddball task designed to assess distraction and alertness. Twenty-seven healthy older adults participated in the study (15 in the experimental group, 12 in the control group). The experimental group received 20 1-hr video game training sessions using a commercially available brain-training package (Lumosity) involving problem solving, mental calculation, working memory and attention tasks. The control group did not practice this package and, instead, attended meetings with the other members of the study several times over the course of the study. Both groups were evaluated before and after the intervention using a cross-modal oddball task measuring alertness and distraction. The results showed a significant reduction of distraction and an increase of alertness in the experimental group and no variation in the control group. These results suggest neurocognitive plasticity in the old human brain as training enhanced cognitive performance on attentional functions. Trial Registration ClinicalTrials.gov NCT02007616 PMID:24647551

  8. Audio-Visual, Visuo-Tactile and Audio-Tactile Correspondences in Preschoolers.

    PubMed

    Nava, Elena; Grassi, Massimo; Turati, Chiara

    2016-01-01

Interest in crossmodal correspondences has recently seen a renaissance thanks to numerous studies in human adults. Yet, still very little is known about crossmodal correspondences in children, particularly in sensory pairings other than audition and vision. In the current study, we investigated whether 4- to 5-year-old children match auditory pitch to the spatial motion of visual objects (audio-visual condition). In addition, we investigated whether this correspondence extends to touch, i.e., whether children also match auditory pitch to the spatial motion of touch (audio-tactile condition) and the spatial motion of visual objects to touch (visuo-tactile condition). In two experiments, two different groups of children were asked to indicate which of two stimuli fitted best with a centrally located third stimulus (Experiment 1), or to report whether two presented stimuli fitted together well (Experiment 2). We found sensitivity to the congruency of all of the sensory pairings only in Experiment 2, suggesting that only under specific circumstances can these correspondences be observed. Our results suggest that pitch-height correspondences for audio-visual and audio-tactile combinations may still be weak in preschool children, and speculate that this could be due to immature linguistic and auditory cues that are still developing at age five.

  9. Early Sign Language Experience Goes Along with an Increased Cross-modal Gain for Affective Prosodic Recognition in Congenitally Deaf CI Users.

    PubMed

    Fengler, Ineke; Delfau, Pia-Céline; Röder, Brigitte

    2018-04-01

    It is yet unclear whether congenitally deaf cochlear implant (CD CI) users' visual and multisensory emotion perception is influenced by their history in sign language acquisition. We hypothesized that early-signing CD CI users, relative to late-signing CD CI users and hearing, non-signing controls, show better facial expression recognition and rely more on the facial cues of audio-visual emotional stimuli. Two groups of young adult CD CI users-early signers (ES CI users; n = 11) and late signers (LS CI users; n = 10)-and a group of hearing, non-signing, age-matched controls (n = 12) performed an emotion recognition task with auditory, visual, and cross-modal emotionally congruent and incongruent speech stimuli. On different trials, participants categorized either the facial or the vocal expressions. The ES CI users more accurately recognized affective prosody than the LS CI users in the presence of congruent facial information. Furthermore, the ES CI users, but not the LS CI users, gained more than the controls from congruent visual stimuli when recognizing affective prosody. Both CI groups performed overall worse than the controls in recognizing affective prosody. These results suggest that early sign language experience affects multisensory emotion perception in CD CI users.

  10. Cross-Modal Matching of Audio-Visual German and French Fluent Speech in Infancy

    PubMed Central

    Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun

    2014-01-01

The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants’ audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, thereby providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech, indicating that temporal synchrony cues facilitate the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life. PMID:24586651

  11. Unconscious presentation of fearful face modulates electrophysiological responses to emotional prosody.

    PubMed

    Doi, Hirokazu; Shinohara, Kazuyuki

    2015-03-01

    Cross-modal integration of visual and auditory emotional cues is supposed to be advantageous in the accurate recognition of emotional signals. However, the neural locus of cross-modal integration between affective prosody and unconsciously presented facial expression in the neurologically intact population is still elusive at this point. The present study examined the influences of unconsciously presented facial expressions on the event-related potentials (ERPs) in emotional prosody recognition. In the experiment, fearful, happy, and neutral faces were presented without awareness by continuous flash suppression simultaneously with voices containing laughter and a fearful shout. The conventional peak analysis revealed that the ERPs were modulated interactively by emotional prosody and facial expression at multiple latency ranges, indicating that audio-visual integration of emotional signals takes place automatically without conscious awareness. In addition, the global field power during the late-latency range was larger for shout than for laughter only when a fearful face was presented unconsciously. The neural locus of this effect was localized to the left posterior fusiform gyrus, giving support to the view that the cortical region, traditionally considered to be unisensory region for visual processing, functions as the locus of audiovisual integration of emotional signals. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. In defense of abstract conceptual representations.

    PubMed

    Binder, Jeffrey R

    2016-08-01

    An extensive program of research in the past 2 decades has focused on the role of modal sensory, motor, and affective brain systems in storing and retrieving concept knowledge. This focus has led in some circles to an underestimation of the need for more abstract, supramodal conceptual representations in semantic cognition. Evidence for supramodal processing comes from neuroimaging work documenting a large, well-defined cortical network that responds to meaningful stimuli regardless of modal content. The nodes in this network correspond to high-level "convergence zones" that receive broadly crossmodal input and presumably process crossmodal conjunctions. It is proposed that highly conjunctive representations are needed for several critical functions, including capturing conceptual similarity structure, enabling thematic associative relationships independent of conceptual similarity, and providing efficient "chunking" of concept representations for a range of higher order tasks that require concepts to be configured as situations. These hypothesized functions account for a wide range of neuroimaging results showing modulation of the supramodal convergence zone network by associative strength, lexicality, familiarity, imageability, frequency, and semantic compositionality. The evidence supports a hierarchical model of knowledge representation in which modal systems provide a mechanism for concept acquisition and serve to ground individual concepts in external reality, whereas broadly conjunctive, supramodal representations play an equally important role in concept association and situation knowledge.

  13. Impairments in multisensory processing are not universal to the autism spectrum: no evidence for crossmodal priming deficits in Asperger syndrome.

    PubMed

    David, Nicole; R Schneider, Till; Vogeley, Kai; Engel, Andreas K

    2011-10-01

Individuals suffering from autism spectrum disorders (ASD) often show a tendency for detail- or feature-based perception (also referred to as "local processing bias") instead of more holistic stimulus processing typical for unaffected people. This local processing bias has been demonstrated for the visual and auditory domains and there is evidence that multisensory processing may also be affected in ASD. Most multisensory processing paradigms used social-communicative stimuli, such as human speech or faces, probing the processing of simultaneously occurring sensory signals. Multisensory processing, however, is not limited to simultaneous stimulation. In this study, we investigated whether multisensory processing deficits in ASD persist when semantically complex but nonsocial stimuli are presented in succession. Fifteen adult individuals with Asperger syndrome and 15 control persons participated in a visual-audio priming task, which required the classification of sounds that were either primed by semantically congruent or incongruent preceding pictures of objects. As expected, performance on congruent trials was faster and more accurate compared with incongruent trials (crossmodal priming effect). The Asperger group, however, did not differ significantly from the control group. Our results do not support a general multisensory processing deficit, which is universal to the entire autism spectrum. Copyright © 2011, International Society for Autism Research, Wiley-Liss, Inc.
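    The crossmodal priming effect described in this record is conventionally quantified as the difference between mean reaction times on incongruent and congruent trials. A minimal sketch, using hypothetical reaction times (none of these values come from the study):

    ```python
    # Hypothetical reaction times in milliseconds for one participant.
    congruent = [612, 598, 630, 605, 590]    # sound primed by a matching picture
    incongruent = [648, 661, 639, 652, 670]  # sound primed by a mismatching picture

    def mean(xs):
        return sum(xs) / len(xs)

    # A positive value indicates faster responses on congruent trials,
    # i.e., a crossmodal priming effect.
    priming_effect_ms = mean(incongruent) - mean(congruent)
    ```

    The study's comparison of interest is then whether this per-participant effect differs between the Asperger and control groups, which it did not.
    
    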

  14. The role of semantic and phonological factors in word recognition: an ERP cross-modal priming study of derivational morphology.

    PubMed

    Kielar, Aneta; Joanisse, Marc F

    2011-01-01

    Theories of morphological processing differ on the issue of how lexical and grammatical information are stored and accessed. A key point of contention is whether complex forms are decomposed during recognition (e.g., establish+ment), compared to forms that cannot be analyzed into constituent morphemes (e.g., apartment). In the present study, we examined these issues with respect to English derivational morphology by measuring ERP responses during a cross-modal priming lexical decision task. ERP priming effects for semantically and phonologically transparent derived words (government-govern) were compared to those of semantically opaque derived words (apartment-apart) as well as "quasi-regular" items that represent intermediate cases of morphological transparency (dresser-dress). Additional conditions independently manipulated semantic and phonological relatedness in non-derived words (semantics: couch-sofa; phonology: panel-pan). The degree of N400 ERP priming to morphological forms varied depending on the amount of semantic and phonological overlap between word types, rather than respecting a bivariate distinction between derived and opaque forms. Moreover, these effects could not be accounted for by semantic or phonological relatedness alone. The findings support the theory that morphological relatedness is graded rather than absolute, and depend on the joint contribution of form and meaning overlap. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks

    PubMed Central

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention. PMID:25745395

  17. Neural correlates of cross-modal affective priming by music in Williams syndrome

    PubMed Central

    Lense, Miriam D.; Gordon, Reyna L.; Key, Alexandra P. F.; Dykens, Elisabeth M.

    2014-01-01

    Emotional connection is the main reason people engage with music, and the emotional features of music can influence processing in other domains. Williams syndrome (WS) is a neurodevelopmental genetic disorder where musicality and sociability are prominent aspects of the phenotype. This study examined oscillatory brain activity during a musical affective priming paradigm. Participants with WS and age-matched typically developing controls heard brief emotional musical excerpts or emotionally neutral sounds and then reported the emotional valence (happy/sad) of subsequently presented faces. Participants with WS demonstrated greater evoked fronto-central alpha activity to the happy vs sad musical excerpts. The size of these alpha effects correlated with parent-reported emotional reactivity to music. Although participant groups did not differ in accuracy of identifying facial emotions, reaction time data revealed a music priming effect only in persons with WS, who responded faster when the face matched the emotional valence of the preceding musical excerpt vs when the valence differed. Matching emotional valence was also associated with greater evoked gamma activity thought to reflect cross-modal integration. This effect was not present in controls. The results suggest a specific connection between music and socioemotional processing and have implications for clinical and educational approaches for WS. PMID:23386738

  18. Modality independence of order coding in working memory: Evidence from cross-modal order interference at recall.

    PubMed

    Vandierendonck, André

    2016-01-01

    Working memory researchers do not agree on whether order in serial recall is encoded by dedicated modality-specific systems or by a more general modality-independent system. Although previous research supports the existence of autonomous modality-specific systems, it has been shown that serial recognition memory is prone to cross-modal order interference by concurrent tasks. The present study used a serial recall task, which was performed in a single-task condition and in a dual-task condition with an embedded memory task in the retention interval. The modality of the serial task was either verbal or visuospatial, and the embedded tasks were in the other modality and required either serial or item recall. Care was taken to avoid modality overlaps during presentation and recall. In Experiment 1, visuospatial but not verbal serial recall was more impaired when the embedded task was an order than when it was an item task. Using a more difficult verbal serial recall task, verbal serial recall was also more impaired by another order recall task in Experiment 2. These findings are consistent with the hypothesis of modality-independent order coding. The implications for views on short-term recall and the multicomponent view of working memory are discussed.

  19. Involvement of Right STS in Audio-Visual Integration for Affective Speech Demonstrated Using MEG

    PubMed Central

    Hagan, Cindy C.; Woods, Will; Johnson, Sam; Green, Gary G. R.; Young, Andrew W.

    2013-01-01

    Speech and emotion perception are dynamic processes in which it may be optimal to integrate synchronous signals emitted from different sources. Studies of audio-visual (AV) perception of neutrally expressed speech demonstrate supra-additive (i.e., where AV>[unimodal auditory+unimodal visual]) responses in left STS to crossmodal speech stimuli. However, emotions are often conveyed simultaneously with speech; through the voice in the form of speech prosody and through the face in the form of facial expression. Previous studies of AV nonverbal emotion integration showed a role for right (rather than left) STS. The current study therefore examined whether the integration of facial and prosodic signals of emotional speech is associated with supra-additive responses in left (cf. results for speech integration) or right (due to emotional content) STS. As emotional displays are sometimes difficult to interpret, we also examined whether supra-additive responses were affected by emotional incongruence (i.e., ambiguity). Using magnetoencephalography, we continuously recorded eighteen participants as they viewed and heard AV congruent emotional and AV incongruent emotional speech stimuli. Significant supra-additive responses were observed in right STS within the first 250 ms for emotionally incongruent and emotionally congruent AV speech stimuli, which further underscores the role of right STS in processing crossmodal emotive signals. PMID:23950977
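    The supra-additivity criterion stated in this abstract (AV > unimodal auditory + unimodal visual) can be expressed directly as a comparison of response amplitudes. The values below are hypothetical illustrations, not measurements from the MEG study:

    ```python
    def is_supra_additive(av: float, a: float, v: float) -> bool:
        """Supra-additivity criterion for audio-visual integration: AV > (A + V).

        av, a, v are evoked-response amplitudes (arbitrary units) for the
        audio-visual, auditory-only, and visual-only conditions.
        """
        return av > a + v

    print(is_supra_additive(9.1, 3.2, 4.5))  # prints: True (9.1 exceeds 3.2 + 4.5)
    ```
    
    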

  20. Structural reorganization of the early visual cortex following Braille training in sighted adults.

    PubMed

    Bola, Łukasz; Siuda-Krzywicka, Katarzyna; Paplińska, Małgorzata; Sumera, Ewa; Zimmermann, Maria; Jednoróg, Katarzyna; Marchewka, Artur; Szwed, Marcin

    2017-12-12

    Training can induce cross-modal plasticity in the human cortex. A well-known example of this phenomenon is the recruitment of visual areas for tactile and auditory processing. It remains unclear to what extent such plasticity is associated with changes in anatomy. Here we enrolled 29 sighted adults into a nine-month tactile Braille-reading training, and used voxel-based morphometry and diffusion tensor imaging to describe the resulting anatomical changes. In addition, we collected resting-state fMRI data to relate these changes to functional connectivity between visual and somatosensory-motor cortices. Following Braille-training, we observed substantial grey and white matter reorganization in the anterior part of early visual cortex (peripheral visual field). Moreover, relative to its posterior, foveal part, the peripheral representation of early visual cortex had stronger functional connections to somatosensory and motor cortices even before the onset of training. Previous studies show that the early visual cortex can be functionally recruited for tactile discrimination, including recognition of Braille characters. Our results demonstrate that reorganization in this region induced by tactile training can also be anatomical. This change most likely reflects a strengthening of existing connectivity between the peripheral visual cortex and somatosensory cortices, which suggests a putative mechanism for cross-modal recruitment of visual areas.

  1. Origins of thalamic and cortical projections to the posterior auditory field in congenitally deaf cats.

    PubMed

    Butler, Blake E; Chabot, Nicole; Kral, Andrej; Lomber, Stephen G

    2017-01-01

    Crossmodal plasticity takes place following sensory loss, such that areas that normally process the missing modality are reorganized to provide compensatory function in the remaining sensory systems. For example, congenitally deaf cats outperform normal hearing animals on localization of visual stimuli presented in the periphery, and this advantage has been shown to be mediated by the posterior auditory field (PAF). In order to determine the nature of the anatomical differences that underlie this phenomenon, we injected a retrograde tracer into PAF of congenitally deaf animals and quantified the thalamic and cortical projections to this field. The pattern of projections from areas throughout the brain was determined to be qualitatively similar to that previously demonstrated in normal hearing animals, but with twice as many projections arising from non-auditory cortical areas. In addition, small ectopic projections were observed from a number of fields in visual cortex, including areas 19, 20a, 20b, and 21b, and area 7 of parietal cortex. These areas did not show projections to PAF in cats deafened ototoxically near the onset of hearing, and provide a possible mechanism for crossmodal reorganization of PAF. These, along with the possible contributions of other mechanisms, are considered. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. The Current Status of Somatostatin-Interneurons in Inhibitory Control of Brain Function and Plasticity

    PubMed Central

    2016-01-01

The mammalian neocortex contains many distinct inhibitory neuronal populations to balance excitatory neurotransmission. A correct excitation/inhibition equilibrium is crucial for normal brain development, functioning, and controlling lifelong cortical plasticity. Knowledge about how the inhibitory network contributes to brain plasticity however remains incomplete. Somatostatin- (SST-) interneurons constitute a large neocortical subpopulation of interneurons, next to parvalbumin- (PV-) and vasoactive intestinal peptide- (VIP-) interneurons. Unlike the extensively studied PV-interneurons, acknowledged as key components in guiding ocular dominance plasticity, the contribution of SST-interneurons is less understood. Nevertheless, SST-interneurons are ideally situated within cortical networks to integrate unimodal or cross-modal sensory information processing and therefore likely to be important mediators of experience-dependent plasticity. The lack of knowledge on SST-interneurons partially relates to the wide variety of distinct subpopulations present in the sensory neocortex. This review covers the SST-subpopulations described to date on the basis of anatomical, molecular, or electrophysiological characteristics, and whose functional roles can be inferred from specific cortical wiring patterns. A possible role for these subpopulations in experience-dependent plasticity is discussed, with emphasis on learning-induced plasticity and on unimodal and cross-modal plasticity upon sensory loss. This knowledge will ultimately contribute to guide brain plasticity into well-defined directions to restore sensory function and promote lifelong learning. PMID:27403348

  3. Cross-Cultural Color-Odor Associations

    PubMed Central

    Levitan, Carmel A.; Ren, Jiana; Woods, Andy T.; Boesveldt, Sanne; Chan, Jason S.; McKenzie, Kirsten J.; Dodson, Michael; Levin, Jai A.; Leong, Christine X. R.; van den Bosch, Jasper J. F.

    2014-01-01

    Colors and odors are associated; for instance, people typically match the smell of strawberries to the color pink or red. These associations are forms of crossmodal correspondences. Recently, there has been discussion about the extent to which these correspondences arise for structural reasons (i.e., an inherent mapping between color and odor), statistical reasons (i.e., covariance in experience), and/or semantically-mediated reasons (i.e., stemming from language). The present study probed this question by testing color-odor correspondences in 6 different cultural groups (Dutch, Netherlands-residing-Chinese, German, Malay, Malaysian-Chinese, and US residents), using the same set of 14 odors and asking participants to make congruent and incongruent color choices for each odor. We found consistent patterns in color choices for each odor within each culture, showing that participants were making non-random color-odor matches. We used representational dissimilarity analysis to probe for variations in the patterns of color-odor associations across cultures; we found that US and German participants had the most similar patterns of associations, followed by German and Malay participants. The largest group differences were between Malay and Netherlands-resident Chinese participants and between Dutch and Malaysian-Chinese participants. We conclude that culture plays a role in color-odor crossmodal associations, which likely arise, at least in part, through experience. PMID:25007343
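    The representational dissimilarity analysis used above can be sketched as follows: for each culture, build an odor-by-odor dissimilarity matrix from that culture's color-choice frequencies, then compare cultures by correlating the upper triangles of their matrices. A minimal numpy illustration; the function names and the correlation-distance choice are assumptions for exposition, not the study's exact pipeline.

```python
import numpy as np

def dissimilarity_matrix(choices):
    """Pairwise dissimilarity between odors, given an (odors x colors)
    matrix of color-choice frequencies for one culture."""
    n = choices.shape[0]
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # 1 - Pearson correlation between the two odors' color profiles
            d[i, j] = 1.0 - np.corrcoef(choices[i], choices[j])[0, 1]
    return d

def rdm_similarity(a, b):
    """Compare two dissimilarity matrices via their upper triangles."""
    iu = np.triu_indices_from(a, k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]
```

Two cultures with identical choice patterns yield an RDM similarity of 1; diverging patterns (as between the Malay and Netherlands-resident Chinese groups) push the value down.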

  4. Relation between brain activation and lexical performance.

    PubMed

    Booth, James R; Burman, Douglas D; Meyer, Joel R; Gitelman, Darren R; Parrish, Todd B; Mesulam, M Marsel

    2003-07-01

    Functional magnetic resonance imaging (fMRI) was used to determine whether performance on lexical tasks was correlated with cerebral activation patterns. We found that such relationships did exist and that their anatomical distribution reflected the neurocognitive processing routes required by the task. Better performance on intramodal tasks (determining if visual words were spelled the same or if auditory words rhymed) was correlated with more activation in unimodal regions corresponding to the modality of sensory input, namely the fusiform gyrus (BA 37) for written words and the superior temporal gyrus (BA 22) for spoken words. Better performance in tasks requiring cross-modal conversions (determining if auditory words were spelled the same or if visual words rhymed), on the other hand, was correlated with more activation in posterior heteromodal regions, including the supramarginal gyrus (BA 40) and the angular gyrus (BA 39). Better performance in these cross-modal tasks was also correlated with greater activation in unimodal regions corresponding to the target modality of the conversion process (i.e., fusiform gyrus for auditory spelling and superior temporal gyrus for visual rhyming). In contrast, performance on the auditory spelling task was inversely correlated with activation in the superior temporal gyrus, possibly reflecting a greater emphasis on the properties of the perceptual input rather than on the relevant transmodal conversions. Copyright 2003 Wiley-Liss, Inc.

  5. Neurons in the barrel cortex turn into processing whisker and odor signals: a cellular mechanism for the storage and retrieval of associative signals

    PubMed Central

    Wang, Dangui; Zhao, Jun; Gao, Zilong; Chen, Na; Wen, Bo; Lu, Wei; Lei, Zhuofan; Chen, Changfeng; Liu, Yahui; Feng, Jing; Wang, Jin-Hui

    2015-01-01

    Associative learning and memory are essential to logical thinking and cognition. How neurons are recruited as associative memory cells that encode multiple input signals for associated storage and distinguishable retrieval remains unclear. We studied this issue in the barrel cortex by in vivo two-photon calcium imaging, electrophysiology, and neural tracing in a mouse model in which simultaneous whisker and olfactory stimulation led to odorant-induced whisker motion. After this cross-modal reflex arose, the barrel and piriform cortices became interconnected. More than 40% of barrel cortical neurons came to encode the odor signal alongside the whisker signal. Some of these neurons expressed distinct activity patterns in response to the acquired odor signal versus the innate whisker signal, whereas others encoded similar patterns for both. Certain barrel cortical astrocytes likewise encoded the odorant and whisker signals. After associative learning, neurons and astrocytes in the sensory cortices are thus able to store the newly learnt signal (cross-modal memory) in addition to the innate signal (native-modal memory). Such associative memory cells distinguish these signals during retrieval by producing different codes, and signify their historical association by producing similar codes. PMID:26347609

  6. Single-unit analysis of somatosensory processing in the core auditory cortex of hearing ferrets.

    PubMed

    Meredith, M Alex; Allman, Brian L

    2015-03-01

    The recent findings in several species that the primary auditory cortex processes non-auditory information have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (anterior auditory field and primary auditory cortex) for tactile responsivity. Multiple single-unit recordings from anesthetised ferret cortex yielded histologically verified neurons (n = 311) tested with electronically controlled auditory, visual and tactile stimuli, and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect that was observed, which occurred in all neuron types, was that of suppression of the response to a concurrent auditory cue. The presence of tactile effects in the core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in the auditory cortex are not exclusively visual and that somatosensation plays a significant role in modulation of acoustic processing, and indicate that crossmodal plasticity following deafness may unmask these existing non-auditory functions. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  7. Involvement of right STS in audio-visual integration for affective speech demonstrated using MEG.

    PubMed

    Hagan, Cindy C; Woods, Will; Johnson, Sam; Green, Gary G R; Young, Andrew W

    2013-01-01

    Speech and emotion perception are dynamic processes in which it may be optimal to integrate synchronous signals emitted from different sources. Studies of audio-visual (AV) perception of neutrally expressed speech demonstrate supra-additive (i.e., where AV>[unimodal auditory+unimodal visual]) responses in left STS to crossmodal speech stimuli. However, emotions are often conveyed simultaneously with speech; through the voice in the form of speech prosody and through the face in the form of facial expression. Previous studies of AV nonverbal emotion integration showed a role for right (rather than left) STS. The current study therefore examined whether the integration of facial and prosodic signals of emotional speech is associated with supra-additive responses in left (cf. results for speech integration) or right (due to emotional content) STS. As emotional displays are sometimes difficult to interpret, we also examined whether supra-additive responses were affected by emotional incongruence (i.e., ambiguity). Using magnetoencephalography, we continuously recorded eighteen participants as they viewed and heard AV congruent emotional and AV incongruent emotional speech stimuli. Significant supra-additive responses were observed in right STS within the first 250 ms for emotionally incongruent and emotionally congruent AV speech stimuli, which further underscores the role of right STS in processing crossmodal emotive signals.

  8. Auditory enhancement of visual perception at threshold depends on visual abilities.

    PubMed

    Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène

    2011-06-17

    Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events, is a long-standing debate. Here we revisit this question, by testing the influence of auditory stimuli on visual detection threshold, in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped with flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most effective at improving perceptual performance when an isolated modality is deficient. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. A Collaborative Model for Ubiquitous Learning Environments

    ERIC Educational Resources Information Center

    Barbosa, Jorge; Barbosa, Debora; Rabello, Solon

    2016-01-01

    Use of mobile devices and widespread adoption of wireless networks have enabled the emergence of Ubiquitous Computing. Application of this technology to improving education strategies gave rise to Ubiquitous e-Learning, also known as Ubiquitous Learning. There are several approaches to organizing ubiquitous learning environments, but most of them…

  10. Methodological challenges to multivariate syndromic surveillance: a case study using Swiss animal health data.

    PubMed

    Vial, Flavie; Wei, Wei; Held, Leonhard

    2016-12-20

    In an era of ubiquitous electronic collection of animal health data, multivariate surveillance systems (which concurrently monitor several data streams) should have a greater probability of detecting disease events than univariate systems. However, despite their limitations, univariate aberration-detection algorithms are used in most active syndromic surveillance (SyS) systems because of their ease of application and interpretation. A stochastic modelling-based approach to multivariate surveillance, on the other hand, offers more flexibility, allowing for the retention of historical outbreaks, for overdispersion, and for non-stationarity. While such methods are not new, they have yet to be applied to animal health surveillance data. We applied an example of such a stochastic model, Held and colleagues' two-component model, to two multivariate animal health datasets from Switzerland. In our first application, multivariate time series of the number of laboratory test requests were derived from Swiss animal diagnostic laboratories. We compared the performance of the two-component model to parallel monitoring using an improved Farrington algorithm and found that both methods yielded a satisfactorily low false alarm rate. Moreover, the calibration test of the two-component model on the one-step-ahead predictions proved satisfactory, making such an approach suitable for outbreak prediction. In our second application, the two-component model was applied to the multivariate time series of the number of cattle abortions and the number of test requests for bovine viral diarrhea (a disease that often results in abortions). We found a two-day lagged effect from the number of abortions to the number of test requests. We further compared joint and univariate modelling of the laboratory test request time series; the joint modelling approach showed evidence of superiority in terms of forecasting ability.
    Stochastic modelling approaches offer the potential to address more realistic surveillance scenarios through, for example, the inclusion of time-series-specific parameters, or of covariates known to have an impact on syndrome counts. Nevertheless, many methodological challenges to multivariate surveillance of animal SyS data remain. Deciding on the amount of corroboration among data streams required to escalate into an alert is not a trivial task, given the sparse data on the events under consideration (e.g., disease outbreaks).
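    The univariate aberration-detection baseline discussed above can be sketched in a few lines: compare the current count in a data stream to an upper prediction bound derived from its history. This is a deliberately simplified stand-in (the function name and threshold rule are illustrative); the improved Farrington algorithm and Held and colleagues' two-component model add trend terms, down-weighting of past outbreaks, and explicit overdispersion modelling.

```python
import numpy as np

def aberration_alarm(history, current, z=2.58):
    """Flag the current count as aberrant if it exceeds an upper
    prediction bound derived from historical baseline counts.
    Simplified sketch of the univariate aberration-detection idea."""
    history = np.asarray(history, dtype=float)
    mu = history.mean()
    # Sample standard deviation allows variance > mean (overdispersion)
    sd = history.std(ddof=1)
    threshold = mu + z * sd
    return current > threshold, threshold
```

Parallel multivariate monitoring then amounts to running such a detector per stream and deciding how much corroboration among streams should escalate into an alert.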

  11. Adaptation of Physiological and Cognitive Workload via Interactive Multi-modal Displays

    DTIC Science & Technology

    2014-05-28

    Report documentation page, garbled in extraction; the recoverable content lists peer-reviewed papers, including one received 09/07/2013 by James Merlo, Joseph E. Mercado, Jan B. F. Van Erp, and Peter A. Hancock (title truncated: "Improving..."), and "Effects of Cross-Modal Sensory Cueing Automation Failure in a Target Detection Task" by Joseph Mercado, Timothy White, and Peter Hancock, with FTE support listed for Joseph Mercado (0.50) and Timothy White (0.50).

  12. Cross-modal associations between materic painting and classical Spanish music.

    PubMed

    Albertazzi, Liliana; Canal, Luisa; Micciolo, Rocco

    2015-01-01

    The study analyses the existence of cross-modal associations in the general population between a series of paintings and a series of clips of classical (guitar) music. Because of the complexity of the stimuli, the study differs from previous analyses conducted on the association between visual and auditory stimuli, which predominantly analyzed single tones and colors by means of psychophysical methods and forced choice responses. More recently, the relation between music and shape has been analyzed in terms of music visualization, or relative to the role played by emotion in the association, and free response paradigms have also been accepted. In our study, in order to investigate what attributes may be responsible for the phenomenon of the association between visual and auditory stimuli, the clip/painting association was tested in two experiments: the first used the semantic differential on a unidimensional rating scale of adjectives; the second employed a specific methodology based on subjective perceptual judgments in a first-person account. Because of the complexity of the stimuli, it was decided to have the maximum possible uniformity of style, composition and musical color. The results show that multisensory features expressed by adjectives such as "quick," "agitated," and "strong," and their antonyms "slow," "calm," and "weak" characterized both the visual and auditory stimuli, and that they may have had a role in the associations. The results also suggest that the main perceptual features responsible for the clip/painting associations were hue, lightness, timbre, and musical tempo. Contrary to what was expected, the musical mode usually related to feelings of happiness (major mode) or sadness (minor mode), and spatial orientation (vertical and horizontal), did not play a significant role in the association.
    The associations were consistent both when evaluated on the whole sample and after accounting for the subjects' different backgrounds and expertise. No substantial difference was found between expert and non-expert subjects. The methods used in the experiment (semantic differential and subjective judgements in a first-person account) corroborated the interpretation of the results as associations due to patterns of qualitative similarity present in stimuli of different sensory modalities and experienced as such by the subjects. The main result of the study consists in showing the existence of cross-modal associations between highly complex stimuli; furthermore, the second experiment employed a specific methodology based on subjective perceptual judgments.

  13. The Dynamic Multisensory Engram: Neural Circuitry Underlying Crossmodal Object Recognition in Rats Changes with the Nature of Object Experience.

    PubMed

    Jacklin, Derek L; Cloke, Jacob M; Potvin, Alphonse; Garrett, Inara; Winters, Boyer D

    2016-01-27

    Rats, humans, and monkeys demonstrate robust crossmodal object recognition (CMOR), identifying objects across sensory modalities. We have shown that rats' performance of a spontaneous tactile-to-visual CMOR task requires functional integration of perirhinal (PRh) and posterior parietal (PPC) cortices, which seemingly provide visual and tactile object feature processing, respectively. However, research with primates has suggested that PRh is sufficient for multisensory object representation. We tested this hypothesis in rats using a modification of the CMOR task in which multimodal preexposure to the to-be-remembered objects significantly facilitates performance. In the original CMOR task, with no preexposure, reversible lesions of PRh or PPC produced patterns of impairment consistent with modality-specific contributions. Conversely, in the CMOR task with preexposure, PPC lesions had no effect, whereas PRh involvement was robust, proving necessary even for phases of the task that had not required PRh activity when rats lacked preexposure; this pattern was supported by results from c-fos imaging. We suggest that multimodal preexposure alters the circuitry responsible for object recognition, in this case obviating the need for PPC contributions and expanding PRh involvement, consistent with the polymodal nature of PRh connections and results from primates indicating a key role for PRh in multisensory object representation. These findings have significant implications for our understanding of multisensory information processing, suggesting that the nature of an individual's past experience with an object strongly determines the brain circuitry involved in representing that object's multisensory features in memory. The ability to integrate information from multiple sensory modalities is crucial to the survival of organisms living in complex environments.
Appropriate responses to behaviorally relevant objects are informed by integration of multisensory object features. We used crossmodal object recognition tasks in rats to study the neurobiological basis of multisensory object representation. When rats had no prior exposure to the to-be-remembered objects, the spontaneous ability to recognize objects across sensory modalities relied on functional interaction between multiple cortical regions. However, prior multisensory exploration of the task-relevant objects remapped cortical contributions, negating the involvement of one region and significantly expanding the role of another. This finding emphasizes the dynamic nature of cortical representation of objects in relation to past experience. Copyright © 2016 the authors 0270-6474/16/361273-17$15.00/0.

  14. Convergent and invariant object representations for sight, sound, and touch.

    PubMed

    Man, Kingson; Damasio, Antonio; Meyer, Kaspar; Kaplan, Jonas T

    2015-09-01

    We continuously perceive objects in the world through multiple sensory channels. In this study, we investigated the convergence of information from different sensory streams within the cerebral cortex. We presented volunteers with three common objects via three different modalities-sight, sound, and touch-and used multivariate pattern analysis of functional magnetic resonance imaging data to map the cortical regions containing information about the identity of the objects. We could reliably predict which of the three stimuli a subject had seen, heard, or touched from the pattern of neural activity in the corresponding early sensory cortices. Intramodal classification was also successful in large portions of the cerebral cortex beyond the primary areas, with multiple regions showing convergence of information from two or all three modalities. Using crossmodal classification, we also searched for brain regions that would represent objects in a similar fashion across different modalities of presentation. We trained a classifier to distinguish objects presented in one modality and then tested it on the same objects presented in a different modality. We detected audiovisual invariance in the right temporo-occipital junction, audiotactile invariance in the left postcentral gyrus and parietal operculum, and visuotactile invariance in the right postcentral and supramarginal gyri. Our maps of multisensory convergence and crossmodal generalization reveal the underlying organization of the association cortices, and may be related to the neural basis for mental concepts. © 2015 Wiley Periodicals, Inc.
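    The crossmodal classification logic described above — train a classifier on activity patterns evoked by one modality, then test it on patterns evoked by another — can be sketched with a toy nearest-centroid classifier on synthetic data. The study itself used multivariate pattern analysis of fMRI data with more powerful classifiers; all names and numbers here are illustrative.

```python
import numpy as np

def crossmodal_classify(train_patterns, train_labels, test_patterns):
    """Train a nearest-centroid classifier on patterns from one modality
    and apply it to patterns from another modality. A minimal stand-in
    for the train-on-one-modality, test-on-another logic."""
    labels = np.unique(train_labels)
    centroids = np.array([train_patterns[train_labels == c].mean(axis=0)
                          for c in labels])
    # Assign each test pattern to the class with the nearest centroid
    dists = np.linalg.norm(test_patterns[:, None, :] - centroids[None], axis=2)
    return labels[np.argmin(dists, axis=1)]
```

Above-chance crossmodal accuracy in a region (e.g. training on "seen" patterns, testing on "heard" ones) is the signature of a modality-invariant object representation.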

  15. Audiovisual semantic interactions between linguistic and nonlinguistic stimuli: The time-courses and categorical specificity.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2018-04-30

    We examined the time-courses and categorical specificity of the crossmodal semantic congruency effects elicited by naturalistic sounds and spoken words on the processing of visual pictures (Experiment 1) and printed words (Experiment 2). Auditory cues were presented at 7 different stimulus onset asynchronies (SOAs) with respect to the visual targets, and participants made speeded categorization judgments (living vs. nonliving). Three common effects were observed across 2 experiments: Both naturalistic sounds and spoken words induced a slowly emerging congruency effect when leading by 250 ms or more in the congruent compared with the incongruent condition, and a rapidly emerging inhibitory effect when leading by 250 ms or less in the incongruent condition as opposed to the noise condition. Only spoken words that did not match the visual targets elicited an additional inhibitory effect when leading by 100 ms or when presented simultaneously. Compared with nonlinguistic stimuli, the crossmodal congruency effects associated with linguistic stimuli occurred over a wider range of SOAs and occurred at a more specific level of the category hierarchy (i.e., the basic level) than was required by the task. A comprehensive framework is proposed to provide a dynamic view regarding how meaning is extracted during the processing of visual or auditory linguistic and nonlinguistic stimuli, therefore contributing to our understanding of multisensory semantic processing in humans. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  16. Cross-Modal Correspondences Enhance Performance on a Colour-to-Sound Sensory Substitution Device.

    PubMed

    Hamilton-Fletcher, Giles; Wright, Thomas D; Ward, Jamie

    Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have largely avoided colour, and those that do encode it have assigned sounds to colours in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users who had their device coded either in line with or opposite to sound–colour correspondences. Improved colour memory and fewer colour errors were observed in users who had the correspondence-based mappings. Interestingly, the colour–sound mappings that provided the highest improvements during the associative memory task also saw the greatest gains for recognising realistic objects that featured these colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance for both colour and correspondences for sensory substitution use.
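    A correspondence-based colour-to-sound mapping of the kind described might look like the sketch below, e.g. lighter colours mapped to higher pitch and more saturated colours to louder sound. This is an illustrative mapping built on Python's standard colorsys module, not the Creole's published algorithm; the ranges and parameter names are assumptions.

```python
import colorsys

def colour_to_sound(r, g, b):
    """Map an RGB colour (0-1 floats) to sound parameters following
    common cross-modal correspondences: lightness drives pitch
    (lighter -> higher), saturation drives loudness (muted -> quieter).
    Illustrative only."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    pitch_hz = 200.0 + l * 1800.0   # 200 Hz (black) .. 2000 Hz (white)
    loudness = 0.2 + 0.8 * s        # greys stay quiet, saturated hues loud
    return pitch_hz, loudness
```

Coding the device "opposite to correspondences" would amount to inverting such mappings (e.g. lighter colours to lower pitch), which is the contrast the study tested.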

  17. Crossmodal Statistical Binding of Temporal Information and Stimuli Properties Recalibrates Perception of Visual Apparent Motion

    PubMed Central

    Zhang, Yi; Chen, Lihan

    2016-01-01

    Recent studies of brain plasticity that pertain to time perception have shown that fast training of temporal discrimination in one modality, for example, the auditory modality, can improve performance of temporal discrimination in another modality, such as the visual modality. Here we examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimulus properties. We conducted two experiments, composed of three sessions each: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either “element motion” or “group motion.” For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were dominantly (80%) flanked by one leading and one lagging visual Ternus frame (VAAV) or dominantly inserted between two Ternus visual frames (AVVA). Participants were required to respond which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory–visual pair but with temporal configurations similar to those of Experiment 1, and asked participants to perform an audio–visual temporal order judgment. The results of these two experiments support the conclusion that statistical binding of temporal information and stimulus properties can quickly and selectively recalibrate the sensitivity of perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910

  18. Sound iconicity of abstract concepts: Place of articulation is implicitly associated with abstract concepts of size and social dominance.

    PubMed

    Auracher, Jan

    2017-01-01

    The concept of sound iconicity implies that phonemes are intrinsically associated with non-acoustic phenomena, such as emotional expression, object size or shape, or other perceptual features. In this respect, sound iconicity is related to other forms of cross-modal associations in which stimuli from different sensory modalities are associated with each other due to the implicitly perceived correspondence of their primal features. One prominent example is the association between vowels, categorized according to their place of articulation, and size, with back vowels being associated with bigness and front vowels with smallness. However, to date the relative influence of perceptual and conceptual cognitive processing on this association is not clear. To bridge this gap, three experiments were conducted in which associations between nonsense words and pictures of animals or emotional body postures were tested. In these experiments participants had to infer the relation between visual stimuli and the notion of size from the content of the pictures, while directly perceivable features did not support, or even contradicted, the predicted association. Results show that implicit associations between articulatory-acoustic characteristics of phonemes and pictures are mainly influenced by semantic features, i.e., the content of a picture, whereas the influence of perceivable features, i.e., size or shape, is overridden. This suggests that abstract semantic concepts can function as an interface between different sensory modalities, facilitating cross-modal associations.

  19. Cross-Modality Information Transfer: A Hypothesis about the Relationship among Prehistoric Cave Paintings, Symbolic Thinking, and the Emergence of Language.

    PubMed

    Miyagawa, Shigeru; Lesure, Cora; Nóbrega, Vitor A

    2018-01-01

    Early modern humans developed mental capabilities that were immeasurably greater than those of non-human primates. We see this in the rapid innovation in tool making, the development of complex language, and the creation of sophisticated art forms, none of which we find in our closest relatives. While we can readily observe the results of this high-order cognitive capacity, it is difficult to see how it could have developed. We take up the topic of cave art and archeoacoustics, particularly the discovery that cave art is often closely connected to the acoustic properties of the cave chambers in which it is found. Apparently, early modern humans were able to detect the way sound reverberated in these chambers, and they painted artwork on surfaces that were acoustic "hot spots," i.e., suitable for generating echoes. We argue that cave art is a form of cross-modality information transfer, in which acoustic signals are transformed into symbolic visual representations. This form of information transfer across modalities is an instance of how the symbolic mind of early modern humans was taking shape into concrete, externalized language. We also suggest that the earliest rock art found in Africa may constitute one of the first fossilized proxies for the expression of full-fledged human linguistic behavior.

  20. Temporal Ventriloquism Reveals Intact Audiovisual Temporal Integration in Amblyopia.

    PubMed

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-02-01

    We have shown previously that amblyopia involves impaired detection of asynchrony between auditory and visual events. To distinguish whether this impairment represents a defect in temporal integration or nonintegrative multisensory processing (e.g., cross-modal matching), we used the temporal ventriloquism effect in which visual temporal order judgment (TOJ) is normally enhanced by a lagging auditory click. Participants with amblyopia (n = 9) and normally sighted controls (n = 9) performed a visual TOJ task. Pairs of clicks accompanied the two lights such that the first click preceded the first light, or second click lagged the second light by 100, 200, or 450 ms. Baseline audiovisual synchrony and visual-only conditions also were tested. Within both groups, just noticeable differences for the visual TOJ task were significantly reduced compared with baseline in the 100- and 200-ms click lag conditions. Within the amblyopia group, poorer stereo acuity and poorer visual acuity in the amblyopic eye were significantly associated with greater enhancement in visual TOJ performance in the 200-ms click lag condition. Audiovisual temporal integration is intact in amblyopia, as indicated by perceptual enhancement in the temporal ventriloquism effect. Furthermore, poorer stereo acuity and poorer visual acuity in the amblyopic eye are associated with a widened temporal binding window for the effect. These findings suggest that previously reported abnormalities in audiovisual multisensory processing may result from impaired cross-modal matching rather than a diminished capacity for temporal audiovisual integration.
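    The just noticeable difference (JND) in a temporal order judgment task is conventionally read off the psychometric function; one common convention takes half the SOA interval between the 25% and 75% "second stimulus first" response points. A minimal sketch under that assumption, using linear interpolation in place of a fitted cumulative Gaussian:

```python
import numpy as np

def jnd_from_toj(soas, p_second_first, lo=0.25, hi=0.75):
    """Estimate the JND from temporal order judgment data as half the
    SOA interval between the lo and hi points of the psychometric
    function. Assumes p_second_first increases monotonically with SOA."""
    soas = np.asarray(soas, dtype=float)
    p = np.asarray(p_second_first, dtype=float)
    # np.interp interpolates soa as a function of response proportion
    soa_lo = np.interp(lo, p, soas)
    soa_hi = np.interp(hi, p, soas)
    return (soa_hi - soa_lo) / 2.0
```

A reduced JND in the click-lag conditions relative to baseline, computed this way, is the perceptual enhancement that indexes temporal ventriloquism.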

  1. A model of the temporal dynamics of multisensory enhancement

    PubMed Central

    Rowland, Benjamin A.; Stein, Barry E.

    2014-01-01

    The senses transduce different forms of environmental energy, and the brain synthesizes information across them to enhance responses to salient biological events. We hypothesize that the potency of multisensory integration is attributable to the convergence of independent and temporally aligned signals derived from cross-modal stimulus configurations onto multisensory neurons. The temporal profile of multisensory integration in neurons of the deep superior colliculus (SC) is consistent with this hypothesis. The responses of these neurons to visual, auditory, and combinations of visual–auditory stimuli reveal that multisensory integration takes place in real-time; that is, the input signals are integrated as soon as they arrive at the target neuron. Interactions between cross-modal signals may appear to reflect linear or nonlinear computations on a moment-by-moment basis, the aggregate of which determines the net product of multisensory integration. Modeling observations presented here suggest that the early nonlinear components of the temporal profile of multisensory integration can be explained with a simple spiking neuron model, and do not require more sophisticated assumptions about the underlying biology. A transition from nonlinear “super-additive” computation to linear, additive computation can be accomplished via scaled inhibition. The findings provide a set of design constraints for artificial implementations seeking to exploit the basic principles and potency of biological multisensory integration in contexts of sensory substitution or augmentation. PMID:24374382
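
    The transition the authors describe, from superadditive to additive combination, can be illustrated with a toy spiking model. The sketch below is a minimal leaky integrate-and-fire unit with illustrative parameters of our own choosing (it is not the paper's SC model): each unisensory input alone is subthreshold and evokes no spikes, while the combined cross-modal input drives vigorous firing.

```python
# Minimal leaky integrate-and-fire (LIF) sketch of superadditive
# multisensory enhancement.  All parameter values here are
# illustrative assumptions, not taken from the paper's SC model.

def lif_spike_count(inputs, dt=0.001, t_max=0.5, tau=0.02, v_thresh=1.0):
    """Spike count of a LIF unit driven by converging constant inputs."""
    drive = sum(inputs)      # cross-modal signals converge additively
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt * (drive - v) / tau   # leaky integration toward 'drive'
        if v >= v_thresh:             # threshold crossing -> spike, reset
            spikes += 1
            v = 0.0
    return spikes

visual_only = lif_spike_count([0.9])     # subthreshold alone: no spikes
auditory_only = lif_spike_count([0.9])   # subthreshold alone: no spikes
combined = lif_spike_count([0.9, 0.9])   # together: vigorous firing
print(visual_only, auditory_only, combined)
```

    With both unisensory drives subthreshold, the combined response fires dozens of spikes while each input alone evokes none, the superadditive regime; at much stronger drives the LIF rate grows almost linearly with input, so combination becomes nearly additive, mirroring the transition the model attributes to scaled inhibition.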

  2. Blindness alters the microstructure of the ventral but not the dorsal visual stream.

    PubMed

    Reislev, Nina L; Kupers, Ron; Siebner, Hartwig R; Ptito, Maurice; Dyrby, Tim B

    2016-07-01

    Visual deprivation from birth leads to reorganisation of the brain through cross-modal plasticity. Although there is a general agreement that the primary afferent visual pathways are altered in congenitally blind individuals, our knowledge about microstructural changes within the higher-order visual streams, and how this is affected by onset of blindness, remains scant. We used diffusion tensor imaging and tractography to investigate microstructural features in the dorsal (superior longitudinal fasciculus) and ventral (inferior longitudinal and inferior fronto-occipital fasciculi) visual pathways in 12 congenitally blind, 15 late blind and 15 normal sighted controls. We also studied six prematurely born individuals with normal vision to control for the effects of prematurity on brain connectivity. Our data revealed a reduction in fractional anisotropy in the ventral but not the dorsal visual stream for both congenitally and late blind individuals. Prematurely born individuals, with normal vision, did not differ from normal sighted controls, born at term. Our data suggest that although the visual streams are structurally developing without normal visual input from the eyes, blindness selectively affects the microstructure of the ventral visual stream regardless of the time of onset. We suggest that the decreased fractional anisotropy of the ventral stream in the two groups of blind subjects is the combined result of both degenerative and cross-modal compensatory processes, affecting normal white matter development.

  3. Functional dissociations in top-down control dependent neural repetition priming.

    PubMed

    Klaver, Peter; Schnaidt, Malte; Fell, Jürgen; Ruhlmann, Jürgen; Elger, Christian E; Fernández, Guillén

    2007-02-15

Little is known about the neural mechanisms underlying top-down control of repetition priming. Here, we use functional brain imaging to investigate these mechanisms. The study and repetition phases both used a natural/man-made forced-choice task. In the study phase, subjects were required to respond to either pictures or words that were presented superimposed on each other. In the repetition phase, only words were presented; these were new words, previously attended or ignored words, or picture names derived from previously attended or ignored pictures. Relative to new words, we found repetition priming for previously attended words. Previously ignored words showed a reduced priming effect, and there was no significant priming for pictures repeated as picture names. Brain imaging data showed that neural priming of words in the left prefrontal cortex (LIPFC) and left fusiform gyrus (LOTC) was affected by attention, by the semantic compatibility of the superimposed stimuli during study, and by cross-modal priming. Neural priming was reduced for words in the LIPFC, and for words and pictures in the LOTC, if stimuli were previously ignored. Previously ignored words that were semantically incompatible with the superimposed picture during study induced increased neural priming compared to semantically compatible ignored words (LIPFC) and decreased neural priming of previously attended pictures (LOTC). In summary, top-down control induces dissociable effects on neural priming through attention, cross-modal priming, and semantic compatibility in a way that was not evident from the behavioral results.

  4. Cooperative processing in primary somatosensory cortex and posterior parietal cortex during tactile working memory.

    PubMed

    Ku, Yixuan; Zhao, Di; Bodner, Mark; Zhou, Yong-Di

    2015-08-01

    In the present study, causal roles of both the primary somatosensory cortex (SI) and the posterior parietal cortex (PPC) were investigated in a tactile unimodal working memory (WM) task. Individual magnetic resonance imaging-based single-pulse transcranial magnetic stimulation (spTMS) was applied, respectively, to the left SI (ipsilateral to tactile stimuli), right SI (contralateral to tactile stimuli) and right PPC (contralateral to tactile stimuli), while human participants were performing a tactile-tactile unimodal delayed matching-to-sample task. The time points of spTMS were 300, 600 and 900 ms after the onset of the tactile sample stimulus (duration: 200 ms). Compared with ipsilateral SI, application of spTMS over either contralateral SI or contralateral PPC at those time points significantly impaired the accuracy of task performance. Meanwhile, the deterioration in accuracy did not vary with the stimulating time points. Together, these results indicate that the tactile information is processed cooperatively by SI and PPC in the same hemisphere, starting from the early delay of the tactile unimodal WM task. This pattern of processing of tactile information is different from the pattern in tactile-visual cross-modal WM. In a tactile-visual cross-modal WM task, SI and PPC contribute to the processing sequentially, suggesting a process of sensory information transfer during the early delay between modalities. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  5. Auditory to Visual Cross-Modal Adaptation for Emotion: Psychophysical and Neural Correlates.

    PubMed

    Wang, Xiaodong; Guo, Xiaotao; Chen, Lin; Liu, Yijun; Goldberg, Michael E; Xu, Hong

    2017-02-01

    Adaptation is fundamental in sensory processing and has been studied extensively within the same sensory modality. However, little is known about adaptation across sensory modalities, especially in the context of high-level processing, such as the perception of emotion. Previous studies have shown that prolonged exposure to a face exhibiting one emotion, such as happiness, leads to contrastive biases in the perception of subsequently presented faces toward the opposite emotion, such as sadness. Such work has shown the importance of adaptation in calibrating face perception based on prior visual exposure. In the present study, we showed for the first time that emotion-laden sounds, like laughter, adapt the visual perception of emotional faces, that is, subjects more frequently perceived faces as sad after listening to a happy sound. Furthermore, via electroencephalography recordings and event-related potential analysis, we showed that there was a neural correlate underlying the perceptual bias: There was an attenuated response occurring at ∼ 400 ms to happy test faces and a quickened response to sad test faces, after exposure to a happy sound. Our results provide the first direct evidence for a behavioral cross-modal adaptation effect on the perception of facial emotion, and its neural correlate. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. Neurochemical changes in the pericalcarine cortex in congenital blindness attributable to bilateral anophthalmia

    PubMed Central

    Coullon, Gaelle S. L.; Emir, Uzay E.; Fine, Ione; Watkins, Kate E.

    2015-01-01

    Congenital blindness leads to large-scale functional and structural reorganization in the occipital cortex, but relatively little is known about the neurochemical changes underlying this cross-modal plasticity. To investigate the effect of complete and early visual deafferentation on the concentration of metabolites in the pericalcarine cortex, 1H magnetic resonance spectroscopy was performed in 14 sighted subjects and 5 subjects with bilateral anophthalmia, a condition in which both eyes fail to develop. In the pericalcarine cortex, where primary visual cortex is normally located, the proportion of gray matter was significantly greater, and levels of choline, glutamate, glutamine, myo-inositol, and total creatine were elevated in anophthalmic relative to sighted subjects. Anophthalmia had no effect on the structure or neurochemistry of a sensorimotor cortex control region. More gray matter, combined with high levels of choline and myo-inositol, resembles the profile of the cortex at birth and suggests that the lack of visual input from the eyes might have delayed or arrested the maturation of this cortical region. High levels of choline and glutamate/glutamine are consistent with enhanced excitatory circuits in the anophthalmic occipital cortex, which could reflect a shift toward enhanced plasticity or sensitivity that could in turn mediate or unmask cross-modal responses. Finally, it is possible that the change in function of the occipital cortex results in biochemical profiles that resemble those of auditory, language, or somatosensory cortex. PMID:26180125

  7. A crossmodal role for audition in taste perception.

    PubMed

    Yan, Kimberly S; Dando, Robin

    2015-06-01

Our sense of taste can be influenced by our other senses, with several groups having explored the effects of olfactory, visual, or tactile stimulation on what we perceive as taste. Research into multisensory, or crossmodal, perception has rarely linked our sense of taste with that of audition. In our study, 48 participants in a crossover experiment sampled multiple concentrations of solutions of 5 prototypic tastants, during conditions with or without broad spectrum auditory stimulation, simulating that of airline cabin noise. Airline cabins are an unusual environment, in which food is consumed routinely under extreme noise conditions, often over 85 dB, and in which the perceived quality of food is often criticized. Participants rated the intensity of solutions representing varying concentrations of the 5 basic tastes on the general Labeled Magnitude Scale. No difference in intensity ratings was evident between the control and sound condition for salty, sour, or bitter tastes. Likewise, panelists did not perform differently during sound conditions when rating tactile, visual, or auditory stimulation, or in reaction time tests. Interestingly, sweet taste intensity was rated progressively lower, whereas the perception of umami taste was augmented during the experimental sound condition, to a progressively greater degree with increasing concentration. We postulate that this effect arises from mechanostimulation of the chorda tympani nerve, which transits directly across the tympanic membrane of the middle ear. (c) 2015 APA, all rights reserved.

  8. Cross-Modality Information Transfer: A Hypothesis about the Relationship among Prehistoric Cave Paintings, Symbolic Thinking, and the Emergence of Language

    PubMed Central

    Miyagawa, Shigeru; Lesure, Cora; Nóbrega, Vitor A.

    2018-01-01

    Early modern humans developed mental capabilities that were immeasurably greater than those of non-human primates. We see this in the rapid innovation in tool making, the development of complex language, and the creation of sophisticated art forms, none of which we find in our closest relatives. While we can readily observe the results of this high-order cognitive capacity, it is difficult to see how it could have developed. We take up the topic of cave art and archeoacoustics, particularly the discovery that cave art is often closely connected to the acoustic properties of the cave chambers in which it is found. Apparently, early modern humans were able to detect the way sound reverberated in these chambers, and they painted artwork on surfaces that were acoustic “hot spots,” i.e., suitable for generating echoes. We argue that cave art is a form of cross-modality information transfer, in which acoustic signals are transformed into symbolic visual representations. This form of information transfer across modalities is an instance of how the symbolic mind of early modern humans was taking shape into concrete, externalized language. We also suggest that the earliest rock art found in Africa may constitute one of the first fossilized proxies for the expression of full-fledged human linguistic behavior. PMID:29515474

  9. Ubiquitous Learning Environments in Higher Education: A Scoping Literature Review

    ERIC Educational Resources Information Center

    Virtanen, Mari Aulikki; Haavisto, Elina; Liikanen, Eeva; Kääriäinen, Maria

    2018-01-01

    Ubiquitous learning and the use of ubiquitous learning environments heralds a new era in higher education. Ubiquitous learning environments enhance context-aware and seamless learning experiences available from any location at any time. They support smooth interaction between authentic and digital learning resources and provide personalized…

  10. Integrating Collaborative and Decentralized Models to Support Ubiquitous Learning

    ERIC Educational Resources Information Center

    Barbosa, Jorge Luis Victória; Barbosa, Débora Nice Ferrari; Rigo, Sandro José; de Oliveira, Jezer Machado; Rabello, Solon Andrade, Jr.

    2014-01-01

    The application of ubiquitous technologies in the improvement of education strategies is called Ubiquitous Learning. This article proposes the integration between two models dedicated to support ubiquitous learning environments, called Global and CoolEdu. CoolEdu is a generic collaboration model for decentralized environments. Global is an…

  11. The Construction of an Ontology-Based Ubiquitous Learning Grid

    ERIC Educational Resources Information Center

    Liao, Ching-Jung; Chou, Chien-Chih; Yang, Jin-Tan David

    2009-01-01

    The purpose of this study is to incorporate adaptive ontology into ubiquitous learning grid to achieve seamless learning environment. Ubiquitous learning grid uses ubiquitous computing environment to infer and determine the most adaptive learning contents and procedures in anytime, any place and with any device. To achieve the goal, an…

  12. Dynamically orthogonal field equations for stochastic flows and particle dynamics

    DTIC Science & Technology

    2011-02-01

where uncertainty 'lives' as well as a system of Stochastic Differential Equations that defines how the uncertainty evolves in the time varying stochastic ... stochastic dynamical component that are both time and space dependent, we derive a system of field equations consisting of a Partial Differential Equation... a system of Stochastic Differential Equations that defines how the stochasticity evolves in the time varying stochastic subspace. These new

  13. Cross-correlation spin noise spectroscopy of heterogeneous interacting spin systems

    DOE PAGES

    Roy, Dibyendu; Yang, Luyi; Crooker, Scott A.; ...

    2015-04-30

Interacting multi-component spin systems are ubiquitous in nature and in the laboratory. As such, investigations of inter-species spin interactions are of vital importance. Traditionally, they are studied by experimental methods that are necessarily perturbative: e.g., by intentionally polarizing or depolarizing one spin species while detecting the response of the other(s). Here, we describe and demonstrate an alternative approach based on multi-probe spin noise spectroscopy, which can reveal inter-species spin interactions - under conditions of strict thermal equilibrium - by detecting and cross-correlating the stochastic fluctuation signals exhibited by each of the constituent spin species. Specifically, we consider a two-component spin ensemble that interacts via exchange coupling, and we determine cross-correlations between their intrinsic spin fluctuations. The model is experimentally confirmed using “two-color” optical spin noise spectroscopy on a mixture of interacting Rb and Cs vapors. Noise correlations directly reveal the presence of inter-species spin exchange, without ever perturbing the system away from thermal equilibrium. These non-invasive and noise-based techniques should be generally applicable to any heterogeneous spin system in which the fluctuations of the constituent components are detectable.

  14. Cellular Signaling Networks Function as Generalized Wiener-Kolmogorov Filters to Suppress Noise

    NASA Astrophysics Data System (ADS)

    Hinczewski, Michael; Thirumalai, D.

    2014-10-01

    Cellular signaling involves the transmission of environmental information through cascades of stochastic biochemical reactions, inevitably introducing noise that compromises signal fidelity. Each stage of the cascade often takes the form of a kinase-phosphatase push-pull network, a basic unit of signaling pathways whose malfunction is linked with a host of cancers. We show that this ubiquitous enzymatic network motif effectively behaves as a Wiener-Kolmogorov optimal noise filter. Using concepts from umbral calculus, we generalize the linear Wiener-Kolmogorov theory, originally introduced in the context of communication and control engineering, to take nonlinear signal transduction and discrete molecule populations into account. This allows us to derive rigorous constraints for efficient noise reduction in this biochemical system. Our mathematical formalism yields bounds on filter performance in cases important to cellular function—such as ultrasensitive response to stimuli. We highlight features of the system relevant for optimizing filter efficiency, encoded in a single, measurable, dimensionless parameter. Our theory, which describes noise control in a large class of signal transduction networks, is also useful both for the design of synthetic biochemical signaling pathways and the manipulation of pathways through experimental probes such as oscillatory input.
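
    The Wiener-Kolmogorov principle the authors generalize can already be seen in the scalar, static case: the mean-square-optimal linear estimate weights a noisy readout by the ratio of signal variance to total variance. The sketch below is a self-contained numerical illustration with arbitrary variances of our own choosing, not the paper's push-pull network model.

```python
# Scalar Wiener-Kolmogorov sketch: the optimal linear estimate of a
# signal s from a noisy readout y = s + n uses the gain
# sig_var / (sig_var + noise_var).  Variances are illustrative.
import random

random.seed(0)
sig_var, noise_var = 4.0, 1.0
gain = sig_var / (sig_var + noise_var)   # optimal linear (Wiener) gain

signals = [random.gauss(0, sig_var ** 0.5) for _ in range(20000)]
obs = [s + random.gauss(0, noise_var ** 0.5) for s in signals]

# mean-squared error of the raw readout vs. the Wiener-weighted estimate
mse_raw = sum((y - s) ** 2 for s, y in zip(signals, obs)) / len(obs)
mse_wk = sum((gain * y - s) ** 2 for s, y in zip(signals, obs)) / len(obs)
print(round(mse_raw, 2), round(mse_wk, 2))
```

    The raw observation has mean-squared error equal to the noise variance (1.0 here), while the Wiener-weighted estimate approaches the optimum sig_var*noise_var/(sig_var+noise_var) = 0.8; the paper's contribution is extending this linear theory to nonlinear transduction and discrete molecule counts.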

  15. Approximation of optimal filter for Ornstein-Uhlenbeck process with quantised discrete-time observation

    NASA Astrophysics Data System (ADS)

    Bania, Piotr; Baranowski, Jerzy

    2018-02-01

    Quantisation of signals is a ubiquitous property of digital processing. In many cases, it introduces significant difficulties in state estimation and in consequence control. Popular approaches either do not address properly the problem of system disturbances or lead to biased estimates. Our intention was to find a method for state estimation for stochastic systems with quantised and discrete observation, that is free of the mentioned drawbacks. We have formulated a general form of the optimal filter derived by a solution of Fokker-Planck equation. We then propose the approximation method based on Galerkin projections. We illustrate the approach for the Ornstein-Uhlenbeck process, and derive analytic formulae for the approximated optimal filter, also extending the results for the variant with control. Operation is illustrated with numerical experiments and compared with classical discrete-continuous Kalman filter. Results of comparison are substantially in favour of our approach, with over 20 times lower mean squared error. The proposed filter is especially effective for signal amplitudes comparable to the quantisation thresholds. Additionally, it was observed that for high order of approximation, state estimate is very close to the true process value. The results open the possibilities of further analysis, especially for more complex processes.
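
    For orientation, the classical baseline the authors compare against can be sketched in a few lines: simulate an Ornstein-Uhlenbeck process, quantise its noisy readout, and run a scalar discrete Kalman filter that lumps quantisation into the measurement noise as q²/12. All parameter values below are assumptions for illustration; the paper's Galerkin-projected optimal filter is not reproduced here.

```python
# Ornstein-Uhlenbeck process observed through a quantiser, estimated
# with a classical discrete Kalman filter that treats quantisation as
# extra measurement noise (variance q^2 / 12).  Parameters illustrative.
import math
import random

random.seed(1)
theta, sigma, dt, steps = 1.0, 1.0, 0.01, 5000
q, meas_std = 0.5, 0.5                        # quantisation step, sensor noise

a = math.exp(-theta * dt)                     # exact OU transition factor
Q = sigma ** 2 * (1 - a ** 2) / (2 * theta)   # exact process-noise variance
R = meas_std ** 2 + q ** 2 / 12               # sensor + quantisation noise

x, xhat, P = 0.0, 0.0, 1.0
se_kf = se_raw = 0.0
for _ in range(steps):
    x = a * x + random.gauss(0, math.sqrt(Q))           # true OU step
    y = round((x + random.gauss(0, meas_std)) / q) * q  # quantised readout
    xhat, P = a * xhat, a * a * P + Q                   # Kalman predict
    K = P / (P + R)
    xhat, P = xhat + K * (y - xhat), (1 - K) * P        # Kalman update
    se_kf += (xhat - x) ** 2
    se_raw += (y - x) ** 2
print(round(se_kf / steps, 3), round(se_raw / steps, 3))
```

    The filtered error is well below that of the raw quantised readout; this discrete-continuous Kalman baseline is the one the abstract reports its Fokker-Planck/Galerkin filter beats by a large margin when amplitudes are comparable to the quantisation thresholds.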

  16. A symplectic integration method for elastic filaments

    NASA Astrophysics Data System (ADS)

    Ladd, Tony; Misra, Gaurav

    2009-03-01

    Elastic rods are a ubiquitous coarse-grained model of semi-flexible biopolymers such as DNA, actin, and microtubules. The Worm-Like Chain (WLC) is the standard numerical model for semi-flexible polymers, but it is only a linearized approximation to the dynamics of an elastic rod, valid for small deflections; typically the torsional motion is neglected as well. In the standard finite-difference and finite-element formulations of an elastic rod, the continuum equations of motion are discretized in space and time, but it is then difficult to ensure that the Hamiltonian structure of the exact equations is preserved. Here we discretize the Hamiltonian itself, expressed as a line integral over the contour of the filament. This discrete representation of the continuum filament can then be integrated by one of the explicit symplectic integrators frequently used in molecular dynamics. The model systematically approximates the continuum partial differential equations, but has the same level of computational complexity as molecular dynamics and is constraint free. Numerical tests show that the algorithm is much more stable than a finite-difference formulation and can be used for high aspect ratio filaments, such as actin. We present numerical results for the deterministic and stochastic motion of single filaments.
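
    The core idea, discretize the Hamiltonian and integrate it with an explicit symplectic scheme, can be sketched with a one-dimensional bead-spring chain and velocity Verlet. Bending and torsional terms, and the paper's line-integral discretization, are omitted; all parameters are illustrative assumptions. The hallmark of the symplectic update is that energy stays bounded over long runs.

```python
# Bead-spring chain integrated with velocity Verlet (a symplectic
# scheme).  A toy stand-in for the filament Hamiltonian: stretching
# springs only, unit masses, illustrative stiffness and timestep.

def spring_forces(pos, k=100.0, rest=1.0):
    """Nearest-neighbour Hookean forces on a 1-D bead chain."""
    f = [0.0] * len(pos)
    for i in range(len(pos) - 1):
        stretch = (pos[i + 1] - pos[i]) - rest
        f[i] += k * stretch          # pulled toward the next bead
        f[i + 1] -= k * stretch
    return f

def energy(pos, vel, k=100.0, rest=1.0):
    kin = 0.5 * sum(v * v for v in vel)
    pot = 0.5 * k * sum(((pos[i + 1] - pos[i]) - rest) ** 2
                        for i in range(len(pos) - 1))
    return kin + pot

n, dt = 8, 0.001
pos = [float(i) for i in range(n)]
pos[0] -= 0.1                        # pluck one end of the filament
vel = [0.0] * n
e0 = energy(pos, vel)

f = spring_forces(pos)
for _ in range(20000):               # velocity-Verlet (kick-drift-kick)
    vel = [v + 0.5 * dt * fi for v, fi in zip(vel, f)]
    pos = [x + dt * v for x, v in zip(pos, vel)]
    f = spring_forces(pos)
    vel = [v + 0.5 * dt * fi for v, fi in zip(vel, f)]

drift = abs(energy(pos, vel) - e0) / e0
print(drift)
```

    Relative energy drift after 20,000 steps stays far below 1%, whereas a naive forward-Euler update of the same chain would gain energy steadily; this bounded-energy behaviour is what makes explicit symplectic integrators attractive for long stochastic filament simulations.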

  17. A Ubiquitous English Vocabulary Learning System: Evidence of Active/Passive Attitudes vs. Usefulness/Ease-of-Use

    ERIC Educational Resources Information Center

    Huang, Yueh-Min; Huang, Yong-Ming; Huang, Shu-Hsien; Lin, Yen-Ting

    2012-01-01

    English vocabulary learning and ubiquitous learning have separately received considerable attention in recent years. However, research on English vocabulary learning in ubiquitous learning contexts has been less studied. In this study, we develop a ubiquitous English vocabulary learning (UEVL) system to assist students in experiencing a systematic…

  18. Ubiquitous Versus One-to-One

    ERIC Educational Resources Information Center

    McAnear, Anita

    2006-01-01

    When we planned the editorial calendar with the topic ubiquitous computing, we were thinking of ubiquitous computing as the one-to-one ratio of computers to students and teachers and 24/7 access to electronic resources. At the time, we were aware that ubiquitous computing in the computer science field had more to do with wearable computers. Our…

  19. A Dynamic Ubiquitous Learning Resource Model with Context and Its Effects on Ubiquitous Learning

    ERIC Educational Resources Information Center

    Chen, Min; Yu, Sheng Quan; Chiang, Feng Kuang

    2017-01-01

    Most ubiquitous learning researchers use resource recommendation and retrieving based on context to provide contextualized learning resources, but it is the kind of one-way context matching. Learners always obtain fixed digital learning resources, which present all learning contents in any context. This study proposed a dynamic ubiquitous learning…

  20. Using Ubiquitous Games in an English Listening and Speaking Course: Impact on Learning Outcomes and Motivation

    ERIC Educational Resources Information Center

    Liu, Tsung-Yu; Chu, Yu-Ling

    2010-01-01

    This paper reports the results of a study which aimed to investigate how ubiquitous games influence English learning achievement and motivation through a context-aware ubiquitous learning environment. An English curriculum was conducted on a school campus by using a context-aware ubiquitous learning environment called the Handheld English Language…

  1. From Many-to-One to One-to-Many: The Evolution of Ubiquitous Computing in Education

    ERIC Educational Resources Information Center

    Chen, Wenli; Lim, Carolyn; Tan, Ashley

    2011-01-01

    Personal, Internet-connected technologies are becoming ubiquitous in the lives of students, and ubiquitous computing initiatives are already expanding in educational contexts. Historically in the field of education, the terms one-to-one (1:1) computing and ubiquitous computing have been interpreted in a number of ways and have at times been used…

  2. After Installation: Ubiquitous Computing and High School Science in Three Experienced, High-Technology Schools

    ERIC Educational Resources Information Center

    Drayton, Brian; Falk, Joni K.; Stroud, Rena; Hobbs, Kathryn; Hammerman, James

    2010-01-01

    There are few studies of the impact of ubiquitous computing on high school science, and the majority of studies of ubiquitous computing report only on the early stages of implementation. The present study presents data on 3 high schools with carefully elaborated ubiquitous computing systems that have gone through at least one "obsolescence cycle"…

  3. A system for ubiquitous health monitoring in the bedroom via a Bluetooth network and wireless LAN.

    PubMed

    Choi, J M; Choi, B H; Seo, J W; Sohn, R H; Ryu, M S; Yi, W; Park, K S

    2004-01-01

Advances in information technology have enabled ubiquitous health monitoring at home, which is particularly useful for patients who have to live alone. We have focused on the automatic and unobtrusive measurement of biomedical signals and activities of patients. We have constructed wireless communication networks in order to transfer data. The networks consist of Bluetooth and Wireless Local Area Network (WLAN). In this paper, we present the concept of a ubiquitous bedroom (u-Bedroom), a part of a ubiquitous house (u-House), together with our systems for ubiquitous health monitoring.

  4. Quantum stochastic calculus associated with quadratic quantum noises

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Un Cig, E-mail: uncigji@chungbuk.ac.kr; Sinha, Kalyan B., E-mail: kbs-jaya@yahoo.co.in

    2016-02-15

We first study a class of fundamental quantum stochastic processes induced by the generators of a six dimensional non-solvable Lie †-algebra consisting of all linear combinations of the generalized Gross Laplacian and its adjoint, annihilation operator, creation operator, conservation, and time, and then we study the quantum stochastic integrals associated with the class of fundamental quantum stochastic processes, and the quantum Itô formula is revisited. The existence and uniqueness of solution of a quantum stochastic differential equation is proved. The unitarity conditions of solutions of quantum stochastic differential equations associated with the fundamental processes are examined. The quantum stochastic calculus extends the Hudson-Parthasarathy quantum stochastic calculus.

  5. Stochastic models for inferring genetic regulation from microarray gene expression data.

    PubMed

    Tian, Tianhai

    2010-03-01

Microarray expression profiles are inherently noisy and many different sources of variation exist in microarray experiments. It is still a significant challenge to develop stochastic models to realize noise in microarray expression profiles, which has profound influence on the reverse engineering of genetic regulation. Using the target genes of the tumour suppressor gene p53 as the test problem, we developed stochastic differential equation models and established the relationship between the noise strength of stochastic models and the parameters of an error model for describing the distribution of the microarray measurements. Numerical results indicate that the simulated variance from stochastic models with a stochastic degradation process can be represented by a monomial in terms of the hybridization intensity, and the order of the monomial depends on the type of stochastic process. The developed stochastic models with multiple stochastic processes generated simulations whose variance is consistent with the prediction of the error model. This work also established a general method to develop stochastic models from experimental information. (c) 2009 Elsevier Ireland Ltd. All rights reserved.
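
    A minimal Euler-Maruyama sketch can make the kind of model described above concrete: an expression level X with synthesis rate k and a stochastically perturbed degradation term, dX = (k − d·X)dt + c·X·dW, whose simulated variance grows with the mean intensity. All parameter values are illustrative assumptions; this is not the paper's fitted p53 model.

```python
# Euler-Maruyama simulation of dX = (k - d*X) dt + c*X dW, a toy
# expression model with stochastic degradation.  Parameters are
# illustrative, not fitted to any microarray data.
import math
import random

def simulate_var(k, d=1.0, c=0.2, dt=0.01, steps=2000, n_runs=200, seed=7):
    """Mean and variance of final expression levels over n_runs paths."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_runs):
        x = k / d                        # start at deterministic steady state
        for _ in range(steps):
            x += (k - d * x) * dt + c * x * rng.gauss(0, math.sqrt(dt))
            x = max(x, 0.0)              # expression level stays non-negative
        finals.append(x)
    m = sum(finals) / n_runs
    return m, sum((v - m) ** 2 for v in finals) / n_runs

mean_lo, var_lo = simulate_var(k=1.0)    # low hybridization intensity
mean_hi, var_hi = simulate_var(k=10.0)   # high hybridization intensity
print(round(var_lo, 3), round(var_hi, 3))
```

    Consistent with a monomial dependence on intensity, the stationary variance of this linear multiplicative-noise model scales with the square of the mean (var ≈ c²m²/(2d − c²)), so the high-intensity condition is far noisier in absolute terms.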

  6. Selective Attention Modulates the Direction of Audio-Visual Temporal Recalibration

    PubMed Central

    Ikumi, Nara; Soto-Faraco, Salvador

    2014-01-01

Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday sensory environments, most studies to date have examined recalibration with isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase in which two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore attention to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention on audio-then-flash or on flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes. PMID:25004132

  7. Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval.

    PubMed

    Wang, Di; Gao, Xinbo; Wang, Xiumei; He, Lihuo; Yuan, Bo

    2016-10-01

Multimodal hashing, which conducts effective and efficient nearest neighbor search across heterogeneous data on large-scale multimedia databases, has been attracting increasing interest, given the explosive growth of multimedia content on the Internet. Recent multimodal hashing research mainly aims at learning compact binary codes that preserve semantic information given by labels. The overwhelming majority of these methods are similarity-preserving approaches, which approximate a pairwise similarity matrix with Hamming distances between the to-be-learnt binary hash codes. However, these methods ignore the discriminative property in the hash learning process, which leaves hash codes from different classes undistinguished and therefore reduces the accuracy and robustness of nearest neighbor search. To this end, we present a novel multimodal hashing method, named multimodal discriminative binary embedding (MDBE), which focuses on learning discriminative hash codes. First, the proposed method formulates hash function learning in terms of classification, where the binary codes generated by the learned hash functions are expected to be discriminative. It then exploits the label information to discover the shared structures inside heterogeneous data. Finally, the learned structures are preserved in the hash codes, so that items of the same class receive similar binary codes. Hence, the proposed MDBE can preserve both discriminability and similarity for hash codes, and will enhance retrieval accuracy. Thorough experiments on benchmark data sets demonstrate that the proposed method achieves excellent accuracy and competitive computational efficiency compared with the state-of-the-art methods for the large-scale cross-modal retrieval task.
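
    The retrieval step that all such multimodal hashing methods share can be sketched simply: once images and texts are mapped to binary codes, cross-modal search is nearest-neighbour lookup under Hamming distance. The codes below are hand-made toy examples, not MDBE output.

```python
# Cross-modal retrieval by Hamming distance over binary hash codes.
# Toy database: hand-picked 8-bit codes standing in for learned
# image hashes; the query code stands in for a learned text hash.

def hamming(a, b):
    """Hamming distance between equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

image_codes = {
    "img_cat": "11110000",
    "img_dog": "11000011",
    "img_car": "00001111",
}

def query(text_code, db):
    """Cross-modal query: text code in, image ids ranked by distance out."""
    return sorted(db, key=lambda k: hamming(db[k], text_code))

# a text query whose (hypothetical) learned code lands near the cat image
ranked = query("11110001", image_codes)
print(ranked)
```

    Real systems pack the codes into machine words and use XOR plus popcount, so the lookup costs only a few instructions per database item; discriminative learning in methods like MDBE changes how the codes are produced, not this lookup.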

  8. The Renner-Teller effect in HCCCl(+)(X̃(2)Π) studied by zero-kinetic energy photoelectron spectroscopy and ab initio calculations.

    PubMed

    Sun, Wei; Dai, Zuyang; Wang, Jia; Mo, Yuxiang

    2015-05-21

    The spin-vibronic energy levels of the chloroacetylene cation up to 4000 cm(-1) above the ground state have been measured using the one-photon zero-kinetic energy photoelectron spectroscopic method. The spin-vibronic energy levels have also been calculated using a diabatic model, in which the potential energy surfaces are expressed by expansions of internal coordinates, and the Hamiltonian matrix equation is solved using a variational method with harmonic basis functions. The calculated spin-vibronic energy levels are in good agreement with the experimental data. The Renner-Teller (RT) parameters describing the vibronic coupling for the H-C≡C bending mode (ε4), Cl-C≡C bending mode (ε5), the cross-mode vibronic coupling (ε45) of the two bending vibrations, and their vibrational frequencies (ω4 and ω5) have also been determined using an effective Hamiltonian matrix treatment. In comparison with the spin-orbit interaction, the RT effect in the H-C≡C bending (ε4) mode is strong, while the RT effect in the Cl-C≡C bending mode is weak. There is a strong cross-mode vibronic coupling of the two bending vibrations, which may be due to a vibronic resonance between the two bending vibrations. The spin-orbit energy splitting of the ground state has been determined for the first time and is found to be 209 ± 2 cm(-1).

  9. Automatic selective attention as a function of sensory modality in aging.

    PubMed

    Guerreiro, Maria J S; Adam, Jos J; Van Gerven, Pascal W M

    2012-03-01

    It was recently hypothesized that age-related differences in selective attention depend on sensory modality (Guerreiro, M. J. S., Murphy, D. R., & Van Gerven, P. W. M. (2010). The role of sensory modality in age-related distraction: A critical review and a renewed view. Psychological Bulletin, 136, 975-1022. doi:10.1037/a0020731). So far, this hypothesis has not been tested in automatic selective attention. The current study addressed this issue by investigating age-related differences in automatic spatial cueing effects (i.e., facilitation and inhibition of return [IOR]) across sensory modalities. Thirty younger (mean age = 22.4 years) and 25 older adults (mean age = 68.8 years) performed 4 left-right target localization tasks, involving all combinations of visual and auditory cues and targets. We used stimulus onset asynchronies (SOAs) of 100, 500, 1,000, and 1,500 ms between cue and target. The results showed facilitation (shorter reaction times with valid relative to invalid cues at shorter SOAs) in the unimodal auditory and in both cross-modal tasks but not in the unimodal visual task. In contrast, there was IOR (longer reaction times with valid relative to invalid cues at longer SOAs) in both unimodal tasks but not in either of the cross-modal tasks. Most important, these spatial cueing effects were independent of age. The results suggest that the modality hypothesis of age-related differences in selective attention does not extend into the realm of automatic selective attention.
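
    The facilitation and IOR effects measured in such cueing tasks are simple reaction-time differences between invalid and valid cue trials at each SOA. A minimal sketch, with invented reaction times rather than the study's data:

```python
# Mean reaction times (ms) by cue validity and SOA; the numbers below
# are made up for illustration, not data from the study.
mean_rt = {
    ("valid", 100): 305, ("invalid", 100): 330,    # short SOA
    ("valid", 1000): 352, ("invalid", 1000): 335,  # long SOA
}

def cueing_effect(rts, soa):
    # Invalid minus valid RT: positive values indicate facilitation,
    # negative values indicate inhibition of return (IOR).
    return rts[("invalid", soa)] - rts[("valid", soa)]

early = cueing_effect(mean_rt, 100)   # positive -> facilitation at short SOA
late = cueing_effect(mean_rt, 1000)  # negative -> IOR at long SOA
```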

  10. Selective attention modulates the direction of audio-visual temporal recalibration.

    PubMed

    Ikumi, Nara; Soto-Faraco, Salvador

    2014-01-01

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time but, stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.

  11. Neurochemical changes in the pericalcarine cortex in congenital blindness attributable to bilateral anophthalmia.

    PubMed

    Coullon, Gaelle S L; Emir, Uzay E; Fine, Ione; Watkins, Kate E; Bridge, Holly

    2015-09-01

    Congenital blindness leads to large-scale functional and structural reorganization in the occipital cortex, but relatively little is known about the neurochemical changes underlying this cross-modal plasticity. To investigate the effect of complete and early visual deafferentation on the concentration of metabolites in the pericalcarine cortex, (1)H magnetic resonance spectroscopy was performed in 14 sighted subjects and 5 subjects with bilateral anophthalmia, a condition in which both eyes fail to develop. In the pericalcarine cortex, where primary visual cortex is normally located, the proportion of gray matter was significantly greater, and levels of choline, glutamate, glutamine, myo-inositol, and total creatine were elevated in anophthalmic relative to sighted subjects. Anophthalmia had no effect on the structure or neurochemistry of a sensorimotor cortex control region. More gray matter, combined with high levels of choline and myo-inositol, resembles the profile of the cortex at birth and suggests that the lack of visual input from the eyes might have delayed or arrested the maturation of this cortical region. High levels of choline and glutamate/glutamine are consistent with enhanced excitatory circuits in the anophthalmic occipital cortex, which could reflect a shift toward enhanced plasticity or sensitivity that could in turn mediate or unmask cross-modal responses. Finally, it is possible that the change in function of the occipital cortex results in biochemical profiles that resemble those of auditory, language, or somatosensory cortex. Copyright © 2015 the American Physiological Society.

  12. Learning of Multimodal Representations With Random Walks on the Click Graph.

    PubMed

    Wu, Fei; Lu, Xinyan; Song, Jun; Yan, Shuicheng; Zhang, Zhongfei Mark; Rui, Yong; Zhuang, Yueting

    2016-02-01

    In multimedia information retrieval, most classic approaches tend to represent different modalities of media in the same feature space. With the click data collected from users' searching behavior, existing approaches take either one-to-one paired data (text-image pairs) or ranking examples (text-query-image and/or image-query-text ranking lists) as training examples, which do not make full use of the click data, particularly the implicit connections among the data objects. In this paper, we treat the click data as a large click graph, in which vertices are images/text queries and edges indicate the clicks between an image and a query. We consider learning a multimodal representation from the perspective of encoding the explicit/implicit relevance relationships between the vertices in the click graph. By minimizing both the truncated random walk loss and the distance between the learned representations of vertices and their corresponding deep neural network outputs, the proposed model, named multimodal random walk neural network (MRW-NN), can not only learn robust representations of the existing multimodal data in the click graph but also handle unseen queries and images to support cross-modal retrieval. We evaluate the latent representation learned by MRW-NN on a public large-scale click log data set, Clickture, and further show that MRW-NN achieves much better cross-modal retrieval performance on unseen queries/images than other state-of-the-art methods.
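
    The truncated random walks over the click graph can be sketched as below. The graph, vertex names, and walk length are invented; the actual MRW-NN couples such walks with a neural-network training loss rather than using them directly.

```python
import random

# Toy bipartite click graph: vertices are queries and images, and an edge
# records that an image was clicked for a query (all names are invented).
click_graph = {
    "q:red shoes": ["img:001", "img:002"],
    "q:sneakers": ["img:002", "img:003"],
    "img:001": ["q:red shoes"],
    "img:002": ["q:red shoes", "q:sneakers"],
    "img:003": ["q:sneakers"],
}

def truncated_random_walk(graph, start, length, rng):
    # A short (truncated) walk alternating between queries and images.
    # Co-occurrence of vertices within such walks defines the implicit
    # relevance neighborhoods a representation can be trained to encode.
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

rng = random.Random(0)
walk = truncated_random_walk(click_graph, "q:red shoes", 5, rng)
```

Because the graph is bipartite, every walk alternates query-image-query-..., so a query can reach images it was never directly clicked with, which is how implicit relevance emerges.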

  13. Salient, Irrelevant Sounds Reflexively Induce Alpha Rhythm Desynchronization in Parallel with Slow Potential Shifts in Visual Cortex.

    PubMed

    Störmer, Viola; Feng, Wenfeng; Martinez, Antigona; McDonald, John; Hillyard, Steven

    2016-03-01

    Recent findings suggest that a salient, irrelevant sound attracts attention to its location involuntarily and facilitates processing of a colocalized visual event [McDonald, J. J., Störmer, V. S., Martinez, A., Feng, W. F., & Hillyard, S. A. Salient sounds activate human visual cortex automatically. Journal of Neuroscience, 33, 9194-9201, 2013]. Associated with this cross-modal facilitation is a sound-evoked slow potential over the contralateral visual cortex termed the auditory-evoked contralateral occipital positivity (ACOP). Here, we further tested the hypothesis that a salient sound captures visual attention involuntarily by examining sound-evoked modulations of the occipital alpha rhythm, which has been strongly associated with visual attention. In two purely auditory experiments, lateralized irrelevant sounds triggered a bilateral desynchronization of occipital alpha-band activity (10-14 Hz) that was more pronounced in the hemisphere contralateral to the sound's location. The timing of the contralateral alpha-band desynchronization overlapped with that of the ACOP (∼240-400 msec), and both measures of neural activity were estimated to arise from neural generators in the ventral-occipital cortex. The magnitude of the lateralized alpha desynchronization was correlated with ACOP amplitude on a trial-by-trial basis and between participants, suggesting that they arise from or are dependent on a common neural mechanism. These results support the hypothesis that the sound-induced alpha desynchronization and ACOP both reflect the involuntary cross-modal orienting of spatial attention to the sound's location.
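
    Alpha desynchronization is conventionally quantified as a percent power change from a pre-stimulus baseline (the classic ERD measure of Pfurtscheller); whether this exact formula matches the study's analysis is an assumption, and the power values below are invented.

```python
def erd_percent(power_event, power_baseline):
    # Event-related desynchronization: percent change in band power
    # relative to a pre-stimulus baseline. Negative values indicate
    # desynchronization (a power decrease after the stimulus).
    return 100.0 * (power_event - power_baseline) / power_baseline

# Hypothetical alpha-band power values (arbitrary units): the hemisphere
# contralateral to the sound shows a larger power drop than the
# ipsilateral hemisphere, as the abstract describes.
contra = erd_percent(power_event=6.0, power_baseline=10.0)  # -> -40.0
ipsi = erd_percent(power_event=8.0, power_baseline=10.0)    # -> -20.0
```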

  14. The contribution of perceptual factors and training on varying audiovisual integration capacity.

    PubMed

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2018-06-01

    The suggestion that the capacity of audiovisual integration has an upper limit of 1 was challenged in 4 experiments using perceptual factors and training to enhance the binding of auditory and visual information. Participants were required to note a number of specific visual dot locations that changed in polarity when a critical auditory stimulus was presented, under relatively fast (200-ms stimulus onset asynchrony [SOA]) and slow (700-ms SOA) rates of presentation. In Experiment 1, transient cross-modal congruency between the brightness of the polarity change and the pitch of the auditory tone was manipulated. In Experiment 2, sustained chunking was enabled on certain trials by connecting varying dot locations with vertices. In Experiment 3, training was employed to determine whether capacity would increase through repeated experience with an intermediate presentation rate (450 ms). Estimates of audiovisual integration capacity (K) were larger than 1 during cross-modal congruency at slow presentation rates (Experiment 1), during perceptual chunking at slow and fast presentation rates (Experiment 2), and during an intermediate presentation rate after training (Experiment 3). Finally, Experiment 4 showed a linear increase in K using SOAs ranging from 100 to 600 ms, suggestive of quantitative rather than qualitative changes in the mechanisms of audiovisual integration as a function of presentation rate. The data compromise the suggestion that the capacity of audiovisual integration is limited to 1 and suggest that the ability to bind sounds to sights is contingent on individual and environmental factors. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
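
    Capacity estimates of this kind typically take a Cowan's-K form: the number of monitored items scaled by hit rate corrected for false alarms. The study's exact estimator may differ, and the numbers below are purely illustrative.

```python
def capacity_k(n_items, hit_rate, false_alarm_rate):
    # Cowan's K-style capacity estimate: the number of items effectively
    # tracked, corrected for guessing via the false alarm rate.
    return n_items * (hit_rate - false_alarm_rate)

# With 4 candidate dot locations, 75% hits, and 25% false alarms,
# the observer is credited with binding about 2 items per trial.
k = capacity_k(n_items=4, hit_rate=0.75, false_alarm_rate=0.25)  # -> 2.0
```

On this scale, the debate in the abstract is whether K can exceed 1, i.e., whether more than one visual item can be bound to a single sound.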

  15. Crossmodal binding rivalry: A "race" for integration between unequal sensory inputs.

    PubMed

    Kostaki, Maria; Vatakis, Argiro

    2016-10-01

    Exposure to multiple but unequal (in number) sensory inputs often leads to illusory percepts, which may be the product of a conflict between those inputs. To test this conflict, we utilized the classic sound-induced visual fission and fusion illusions under various temporal configurations and timing presentations. This conflict between unequal numbers of sensory inputs (i.e., crossmodal binding rivalry) depends on the binding of the first audiovisual pair and its temporal proximity to the upcoming unisensory stimulus. We therefore expected that tight coupling of the first audiovisual pair would lead to higher rivalry with the upcoming unisensory stimulus and, thus, weaker illusory percepts. Loose coupling, on the other hand, would lead to lower rivalry and stronger illusory percepts. Our data showed the emergence of two different participant groups, those with low discrimination performance and strong illusion reports (particularly for fusion) and those with the exact opposite pattern, thus extending previous findings on the effect of visual acuity on the strength of the illusion. Most importantly, our data revealed differential illusory strength across different temporal configurations for the fission illusion, whereas for the fusion illusion these effects were only noted for the largest stimulus onset asynchronies tested. These findings support the view that the optimal integration theory for the double-flash illusion should be expanded to take into account the multisensory temporal interactions of the stimuli presented (i.e., temporal sequence and configuration). Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Long-range dismount activity classification: LODAC

    NASA Astrophysics Data System (ADS)

    Garagic, Denis; Peskoe, Jacob; Liu, Fang; Cuevas, Manuel; Freeman, Andrew M.; Rhodes, Bradley J.

    2014-06-01

    Continuous classification of dismount types (including gender, age, and ethnicity) and their activities (such as walking or running) evolving over space and time is challenging. Limited sensor resolution (often exacerbated as a function of platform standoff distance), clutter from shadows in dense target environments, unfavorable environmental conditions, and the normal properties of real data all contribute to the challenge. The unique and innovative aspect of our approach is a synthesis of multimodal signal processing with incremental non-parametric, hierarchical Bayesian machine learning methods to create a new kind of target classification architecture. This architecture is designed from the ground up to optimally exploit correlations among the multiple sensing modalities (multimodal data fusion), and it rapidly and continuously learns (online self-tuning) patterns of distinct classes of dismounts given little a priori information. This increases classification performance in the presence of challenges posed by anti-access/area denial (A2/AD) sensing. To fuse multimodal features, Long-range Dismount Activity Classification (LODAC) develops a novel statistical information-theoretic approach for multimodal data fusion that jointly models multimodal data (i.e., a probabilistic model for cross-modal signal generation) and discovers the critical cross-modal correlations by identifying components (features) with maximal mutual information (MI), which is efficiently estimated using non-parametric entropy models. LODAC develops a generic probabilistic pattern learning and classification framework based on a new class of hierarchical Bayesian learning algorithms for efficiently discovering recurring patterns (classes of dismounts) in multiple simultaneous time series (sensor modalities) at multiple levels of feature granularity.
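
    The mutual-information feature-selection step can be illustrated with a simple histogram (plug-in) estimator; LODAC's actual non-parametric entropy models are more sophisticated, and the data here are synthetic stand-ins for two sensor channels.

```python
import numpy as np

def histogram_mutual_information(x, y, bins=8):
    # Plug-in (histogram) estimate of I(X;Y) in nats: bin the joint
    # distribution, then sum p(x,y) * log(p(x,y) / (p(x) p(y))).
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y, shape (1, bins)
    nz = pxy > 0                           # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
a = rng.standard_normal(5000)                        # channel from modality 1
b_dependent = a + 0.1 * rng.standard_normal(5000)    # strongly coupled channel
b_independent = rng.standard_normal(5000)            # unrelated channel

mi_dep = histogram_mutual_information(a, b_dependent)
mi_ind = histogram_mutual_information(a, b_independent)
```

A fusion scheme of the kind described would retain feature pairs like `(a, b_dependent)`, whose estimated MI is far above that of the unrelated pair.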

  17. Cross-Modal Recruitment of Auditory and Orofacial Areas During Sign Language in a Deaf Subject.

    PubMed

    Martino, Juan; Velasquez, Carlos; Vázquez-Bourgon, Javier; de Lucas, Enrique Marco; Gomez, Elsa

    2017-09-01

    Modern sign languages used by deaf people are fully expressive, natural human languages that are perceived visually and produced manually. The literature contains little data concerning human brain organization in conditions of deficient sensory information such as deafness. A deaf-mute patient underwent surgery of a left temporoinsular low-grade glioma. The patient underwent awake surgery with intraoperative electrical stimulation mapping, allowing direct study of the cortical and subcortical organization of sign language. We found a similar distribution of language sites to what has been reported in mapping studies of patients with oral language, including 1) speech perception areas inducing anomias and alexias close to the auditory cortex (at the posterior portion of the superior temporal gyrus and supramarginal gyrus); 2) speech production areas inducing speech arrest (anarthria) at the ventral premotor cortex, close to the lip motor area and away from the hand motor area; and 3) subcortical stimulation-induced semantic paraphasias at the inferior fronto-occipital fasciculus at the temporal isthmus. The intraoperative setup for sign language mapping with intraoperative electrical stimulation in deaf-mute patients is similar to the setup described in patients with oral language. To elucidate the type of language errors, a sign language interpreter in close interaction with the neuropsychologist is necessary. Sign language is perceived visually and produced manually; however, this case revealed a cross-modal recruitment of auditory and orofacial motor areas. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Seeing music: The perception of melodic 'ups and downs' modulates the spatial processing of visual stimuli.

    PubMed

    Romero-Rivas, Carlos; Vera-Constán, Fátima; Rodríguez-Cuadrado, Sara; Puigcerver, Laura; Fernández-Prieto, Irune; Navarra, Jordi

    2018-05-10

    Musical melodies have "peaks" and "valleys". Although the vertical component of pitch and music is well-known, the mechanisms underlying its mental representation still remain elusive. We show evidence regarding the importance of previous experience with melodies for crossmodal interactions to emerge. The impact of these crossmodal interactions on other perceptual and attentional processes was also studied. Melodies including two tones with different frequency (e.g., E4 and D3) were repeatedly presented during the study. These melodies could either generate strong predictions (e.g., E4-D3-E4-D3-E4-[D3]) or not (e.g., E4-D3-E4-E4-D3-[?]). After the presentation of each melody, the participants had to judge the colour of a visual stimulus that appeared in a position that was, according to the traditional vertical connotations of pitch, either congruent (e.g., high-low-high-low-[up]), incongruent (high-low-high-low-[down]) or unpredicted with respect to the melody. Behavioural and electroencephalographic responses to the visual stimuli were obtained. Congruent visual stimuli elicited faster responses at the end of the experiment than at the beginning. Additionally, incongruent visual stimuli that broke the spatial prediction generated by the melody elicited larger P3b amplitudes (reflecting 'surprise' responses). Our results suggest that the passive (but repeated) exposure to melodies elicits spatial predictions that modulate the processing of other sensory events. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Pathways to Seeing Music: Enhanced Structural Connectivity in Colored-Music Synesthesia

    PubMed Central

    Zamm, Anna; Schlaug, Gottfried; Eagleman, David M.; Loui, Psyche

    2013-01-01

    Synesthesia, a condition in which a stimulus in one sensory modality consistently and automatically triggers concurrent percepts in another modality, provides a window into the neural correlates of cross-modal associations. While research on grapheme-color synesthesia has provided evidence for both hyperconnectivity/hyperbinding and disinhibited feedback as possible underlying mechanisms, less research has explored the neuroanatomical basis of other forms of synesthesia. In the current study we investigated the white matter correlates of colored-music synesthesia. As these synesthetes report seeing colors upon hearing musical sounds, we hypothesized they might show different patterns of connectivity between visual and auditory association areas. We used diffusion tensor imaging to trace the white matter tracts in temporal and occipital lobe regions in 10 synesthetes and 10 matched non-synesthete controls. Results showed that synesthetes possessed different hemispheric patterns of fractional anisotropy, an index of white matter integrity, in the inferior fronto-occipital fasciculus (IFOF), a major white matter pathway that connects visual and auditory association areas to frontal regions. Specifically, white matter integrity within the right IFOF was significantly greater in synesthetes than controls. Furthermore, white matter integrity in synesthetes was correlated with scores on audiovisual tests of the Synesthesia Battery, especially in white matter underlying the right fusiform gyrus. Our findings provide the first evidence of a white matter substrate of colored-music synesthesia, and suggest that enhanced white matter connectivity is involved in enhanced cross-modal associations. PMID:23454047
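
    Fractional anisotropy, the white matter integrity index used here, is computed from the three eigenvalues of the diffusion tensor by a standard formula; the eigenvalues below are invented for illustration.

```python
import math

def fractional_anisotropy(l1, l2, l3):
    # Standard FA formula: sqrt(3/2) times the norm of the eigenvalue
    # deviations from their mean, divided by the norm of the eigenvalues.
    # FA ranges from 0 (isotropic diffusion) to 1 (diffusion along a
    # single axis, as in a tightly packed fiber tract).
    m = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - m) ** 2 + (l2 - m) ** 2 + (l3 - m) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(1.5) * num / den

fa_isotropic = fractional_anisotropy(1.0, 1.0, 1.0)    # free diffusion -> 0.0
fa_anisotropic = fractional_anisotropy(1.7, 0.3, 0.3)  # fiber-like diffusion
```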

  20. Dynamics of cortico-subcortical cross-modal operations involved in audio-visual object detection in humans.

    PubMed

    Fort, Alexandra; Delpuech, Claude; Pernier, Jacques; Giard, Marie-Hélène

    2002-10-01

    Very recently, a number of neuroimaging studies in humans have begun to investigate the question of how the brain integrates information from different sensory modalities to form unified percepts. Already, intermodal neural processing appears to depend on the modalities of the inputs or the nature (speech/non-speech) of the information to be combined. Yet, the variety of paradigms, stimuli, and techniques used makes it difficult to understand the relationships between the factors operating at the perceptual level and the underlying physiological processes. In a previous experiment, we used event-related potentials to describe the spatio-temporal organization of audio-visual interactions during a bimodal object recognition task. Here we examined the network of cross-modal interactions involved in simple detection of the same objects. The objects were defined either by unimodal auditory or visual features alone, or by the combination of the two features. As expected, subjects detected bimodal stimuli more rapidly than either unimodal stimulus. Combined analysis of potentials, scalp current densities, and dipole modeling revealed several interaction patterns within the first 200 ms post-stimulus: in occipito-parietal visual areas (45-85 ms), in deep brain structures, possibly the superior colliculus (105-140 ms), and in right temporo-frontal regions (170-185 ms). These interactions differed from those found during object identification in sensory-specific areas and possibly in the superior colliculus, indicating that the neural operations governing multisensory integration depend crucially on the nature of the perceptual processes involved.
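
    Cross-modal interactions in such ERP designs are conventionally tested with the additive model: any deviation of the bimodal response from the sum of the unimodal responses, AV - (A + V), counts as interaction at that latency. The waveforms below are invented, and attributing this exact model to the study is an assumption based on standard ERP practice.

```python
import numpy as np

# Hypothetical ERP averages (arbitrary µV) at one electrode, one sample
# per millisecond over the first 200 ms post-stimulus.
t = np.arange(200)
erp_a = 0.5 * np.sin(2 * np.pi * t / 200)   # auditory-alone response
erp_v = 0.3 * np.sin(2 * np.pi * t / 180)   # visual-alone response
# Bimodal response built with a deliberate non-additive component.
erp_av = erp_a + erp_v - 0.2 * np.sin(2 * np.pi * t / 190)

# Additive model: the residual AV - (A + V), sample by sample.
# A nonzero residual at some latency indicates cross-modal interaction.
interaction = erp_av - (erp_a + erp_v)
```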
