Sample records for temporal visual association

  1. Visual paired-associate learning: in search of material-specific effects in adult patients who have undergone temporal lobectomy.

    PubMed

    Smith, Mary Lou; Bigel, Marla; Miller, Laurie A

    2011-02-01

    The mesial temporal lobes are important for learning arbitrary associations. It has previously been demonstrated that left mesial temporal structures are involved in learning word pairs, but it is not yet known whether comparable lesions in the right temporal lobe impair visually mediated associative learning. Patients who had undergone left (n=16) or right (n=18) temporal lobectomy for relief of intractable epilepsy and healthy controls (n=13) were administered two paired-associate learning tasks assessing their learning and memory of pairs of abstract designs or pairs of symbols in unique locations. Both patient groups had deficits in learning the designs, but only the right temporal group was impaired in recognition. For the symbol location task, differences were not found in learning, but again a recognition deficit was found for the right temporal group. The findings implicate the mesial temporal structures in relational learning. They support a material-specific effect for recognition but not for learning and recall of arbitrary visual and visual-spatial associative information. Copyright © 2010 Elsevier Inc. All rights reserved.

  2. A Spatial and Temporal Frequency Based Figure-Ground Processor

    NASA Astrophysics Data System (ADS)

    Weisstein, Naomi; Wong, Eva

    1990-03-01

    Recent findings in visual psychophysics have shown that figure-ground perception can be specified by the spatial and temporal response characteristics of the visual system. Higher spatial frequency regions of the visual field are perceived as figure and lower spatial frequency regions are perceived as background (Klymenko and Weisstein, 1986; Wong and Weisstein, 1989). Higher temporal frequency regions are seen as background and lower temporal frequency regions are seen as figure (Wong and Weisstein, 1987; Klymenko, Weisstein, Topolski, and Hsieh, 1988). Thus, high spatial and low temporal frequencies appear to be associated with figure, and low spatial and high temporal frequencies appear to be associated with background.
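The psychophysical rule summarized in this abstract can be written down as a simple decision sketch. The function below is only an illustration of the high/low-frequency pairing described above; the reference thresholds `sf_ref` and `tf_ref` are arbitrary assumptions for demonstration, not values taken from the cited papers.

```python
def figure_ground_label(spatial_freq_cpd, temporal_freq_hz, sf_ref=4.0, tf_ref=6.0):
    """Label a region as figure or ground from its dominant frequencies.

    High spatial + low temporal frequency -> figure;
    low spatial + high temporal frequency -> ground.
    sf_ref (cycles/deg) and tf_ref (Hz) are illustrative cutoffs only.
    """
    votes = int(spatial_freq_cpd > sf_ref) + int(temporal_freq_hz < tf_ref)
    if votes == 2:
        return "figure"
    if votes == 0:
        return "ground"
    return "ambiguous"  # mixed cues: the rule makes no clear prediction

# a fine-grained, slowly flickering region reads as figure:
print(figure_ground_label(8.0, 2.0))   # figure
# a coarse, rapidly flickering region reads as ground:
print(figure_ground_label(1.0, 12.0))  # ground
```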

  3. Temporal Ventriloquism Reveals Intact Audiovisual Temporal Integration in Amblyopia.

    PubMed

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-02-01

    We have shown previously that amblyopia involves impaired detection of asynchrony between auditory and visual events. To distinguish whether this impairment represents a defect in temporal integration or in nonintegrative multisensory processing (e.g., cross-modal matching), we used the temporal ventriloquism effect, in which visual temporal order judgment (TOJ) is normally enhanced by a lagging auditory click. Participants with amblyopia (n = 9) and normally sighted controls (n = 9) performed a visual TOJ task. Pairs of clicks accompanied the two lights such that the first click preceded the first light, or the second click lagged the second light, by 100, 200, or 450 ms. Baseline audiovisual synchrony and visual-only conditions were also tested. Within both groups, just noticeable differences for the visual TOJ task were significantly reduced compared with baseline in the 100- and 200-ms click lag conditions. Within the amblyopia group, poorer stereo acuity and poorer visual acuity in the amblyopic eye were significantly associated with greater enhancement in visual TOJ performance in the 200-ms click lag condition. Audiovisual temporal integration is intact in amblyopia, as indicated by perceptual enhancement in the temporal ventriloquism effect. Furthermore, poorer stereo acuity and poorer visual acuity in the amblyopic eye are associated with a widened temporal binding window for the effect. These findings suggest that previously reported abnormalities in audiovisual multisensory processing may result from impaired cross-modal matching rather than a diminished capacity for temporal audiovisual integration.

  4. Quantitative Evaluation of Medial Temporal Lobe Morphology in Children with Febrile Status Epilepticus: Results of the FEBSTAT Study.

    PubMed

    McClelland, A C; Gomes, W A; Shinnar, S; Hesdorffer, D C; Bagiella, E; Lewis, D V; Bello, J A; Chan, S; MacFall, J; Chen, M; Pellock, J M; Nordli, D R; Frank, L M; Moshé, S L; Shinnar, R C; Sun, S

    2016-12-01

    The pathogenesis of febrile status epilepticus is poorly understood, but prior studies have suggested an association with temporal lobe abnormalities, including hippocampal malrotation. We used a quantitative morphometric method to assess the association between temporal lobe morphology and febrile status epilepticus. Brain MR imaging was performed in children presenting with febrile status epilepticus and control subjects as part of the Consequences of Prolonged Febrile Seizures in Childhood study. Medial temporal lobe morphologic parameters were measured manually, including the distance of the hippocampus from the midline, hippocampal height:width ratio, hippocampal angle, collateral sulcus angle, and width of the temporal horn. Temporal lobe morphologic parameters were correlated with the presence of visual hippocampal malrotation; the strongest association was with left temporal horn width (P < .001; adjusted OR, 10.59). Multiple morphologic parameters correlated with febrile status epilepticus, encompassing both the right and left sides. This association was statistically strongest in the right temporal lobe, whereas hippocampal malrotation was almost exclusively left-sided in this cohort. The association between temporal lobe measurements and febrile status epilepticus persisted when the analysis was restricted to cases with visually normal imaging findings without hippocampal malrotation or other visually apparent abnormalities. Several component morphologic features of hippocampal malrotation are independently associated with febrile status epilepticus, even when complete hippocampal malrotation is absent. Unexpectedly, this association predominantly involves the right temporal lobe. These findings suggest that a spectrum of bilateral temporal lobe anomalies are associated with febrile status epilepticus in children. Hippocampal malrotation may represent a visually apparent subset of this spectrum. © 2016 by American Journal of Neuroradiology.

  5. Learning and disrupting invariance in visual recognition with a temporal association rule

    PubMed Central

    Isik, Leyla; Leibo, Joel Z.; Poggio, Tomaso

    2012-01-01

    Learning by temporal association rules such as Foldiak's trace rule is an attractive hypothesis that explains the development of invariance in visual recognition. Consistent with these rules, several recent experiments have shown that invariance can be broken at both the psychophysical and single cell levels. We show (1) that temporal association learning provides appropriate invariance in models of object recognition inspired by the visual cortex, (2) that we can replicate the “invariance disruption” experiments using these models with a temporal association learning rule to develop and maintain invariance, and (3) that despite dramatic single cell effects, a population of cells is very robust to these disruptions. We argue that these models account for the stability of perceptual invariance despite the underlying plasticity of the system, the variability of the visual world and expected noise in the biological mechanisms. PMID:22754523
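The temporal association learning the abstract refers to can be sketched numerically. Below is a minimal Foldiak-style trace rule on a generic linear network, not the authors' model: a slowly decaying trace of output activity lets temporally adjacent inputs (successive views of the same object) strengthen onto the same unit, which is how such rules produce invariance. The learning rate `eta` and trace constant `delta` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 64, 8
W = rng.normal(scale=0.01, size=(n_out, n_in))  # random initial weights

def trace_rule_step(W, x, trace, eta=0.05, delta=0.2):
    """One step of a trace rule: Hebbian update gated by a temporal trace of output activity."""
    y = W @ x
    trace = (1.0 - delta) * trace + delta * y        # decaying trace links frames in time
    W = W + eta * np.outer(trace, x)                 # associate current input with recent activity
    W = W / np.linalg.norm(W, axis=1, keepdims=True)  # normalize rows to keep weights bounded
    return W, trace

trace = np.zeros(n_out)
base = rng.normal(size=n_in)          # one "object"
for t in range(100):
    view = base + 0.1 * rng.normal(size=n_in)  # temporally adjacent, slightly transformed views
    W, trace = trace_rule_step(W, view, trace)
```

Breaking the temporal pairing (e.g., swapping in a different object mid-sequence) would drive the same units toward the intruding stimulus, which is the "invariance disruption" manipulation the paper models.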

  6. Visual temporal processing in dyslexia and the magnocellular deficit theory: the need for speed?

    PubMed

    McLean, Gregor M T; Stuart, Geoffrey W; Coltheart, Veronika; Castles, Anne

    2011-12-01

    A controversial question in reading research is whether dyslexia is associated with impairments in the magnocellular system and, if so, how these low-level visual impairments might affect reading acquisition. This study used a novel chromatic flicker perception task to specifically explore temporal aspects of magnocellular functioning in 40 children with dyslexia and 42 age-matched controls (aged 7-11). The relationship between magnocellular temporal resolution and higher-level aspects of visual temporal processing including inspection time, single and dual-target (attentional blink) RSVP performance, go/no-go reaction time, and rapid naming was also assessed. The Dyslexia group exhibited significant deficits in magnocellular temporal resolution compared with controls, but the two groups did not differ in parvocellular temporal resolution. Despite the significant group differences, associations between magnocellular temporal resolution and reading ability were relatively weak, and links between low-level temporal resolution and reading ability did not appear specific to the magnocellular system. Factor analyses revealed that a collective Perceptual Speed factor, involving both low-level and higher-level visual temporal processing measures, accounted for unique variance in reading ability independently of phonological processing, rapid naming, and general ability.

  7. Audio-visual temporal perception in children with restored hearing.

    PubMed

    Gori, Monica; Chilosi, Anna; Forli, Francesca; Burr, David

    2017-05-01

    It is not clear how audio-visual temporal perception develops in children with restored hearing. In this study we measured temporal discrimination thresholds with an audio-visual temporal bisection task in 9 deaf children with restored audition and 22 typically hearing children. In typically hearing children, audition was more precise than vision, with no gain in multisensory conditions (as previously reported in Gori et al., 2012b). However, deaf children with restored audition showed similar auditory and visual thresholds and some evidence of gain in multisensory audio-visual temporal conditions. Interestingly, we found a strong correlation between the auditory weighting of multisensory signals and quality of language: patients who gave more weight to audition had better language skills. Similarly, auditory thresholds for the temporal bisection task were also a good predictor of language skills. This result supports the idea that temporal auditory processing is associated with language development. Copyright © 2017. Published by Elsevier Ltd.

  8. Maturation of Visual and Auditory Temporal Processing in School-Aged Children

    ERIC Educational Resources Information Center

    Dawes, Piers; Bishop, Dorothy V. M.

    2008-01-01

    Purpose: To examine the development of sensitivity to auditory and visual temporal processes in children and its association with standardized measures of auditory processing and communication. Methods: Normative data on tests of visual and auditory processing were collected from 18 adults and 98 children aged 6-10 years. Auditory processes…

  9. Dynamic spatial organization of the occipito-temporal word form area for second language processing.

    PubMed

    Gao, Yue; Sun, Yafeng; Lu, Chunming; Ding, Guosheng; Guo, Taomei; Malins, Jeffrey G; Booth, James R; Peng, Danling; Liu, Li

    2017-08-01

    Despite the left occipito-temporal region having shown consistent activation in visual word form processing across numerous studies in different languages, the mechanisms by which word forms of second languages are processed in this region remain unclear. To examine this more closely, 16 Chinese-English and 14 English-Chinese late bilinguals were recruited to perform lexical decision tasks to visually presented words in both their native and second languages (L1 and L2) during functional magnetic resonance imaging scanning. Here we demonstrate that visual word form processing for L1 versus L2 engaged different spatial areas of the left occipito-temporal region. Namely, the spatial organization of the visual word form processing in the left occipito-temporal region is more medial and posterior for L2 than L1 processing in Chinese-English bilinguals, whereas activation is more lateral and anterior for L2 in English-Chinese bilinguals. In addition, for Chinese-English bilinguals, more lateral recruitment of the occipito-temporal region was correlated with higher L2 proficiency, suggesting higher L2 proficiency is associated with greater involvement of L1-preferred mechanisms. For English-Chinese bilinguals, higher L2 proficiency was correlated with more lateral and anterior activation of the occipito-temporal region, suggesting higher L2 proficiency is associated with greater involvement of L2-preferred mechanisms. Taken together, our results indicate that L1 and L2 recruit spatially different areas of the occipito-temporal region in visual word processing when the two scripts belong to different writing systems, and that the spatial organization of this region for L2 visual word processing is dynamically modulated by L2 proficiency. Specifically, proficiency in L2 in Chinese-English is associated with assimilation to the native language mechanisms, whereas L2 in English-Chinese is associated with accommodation to second language mechanisms. Copyright © 2017. Published by Elsevier Ltd.

  10. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding

    PubMed Central

    Desantis, Andrea; Haggard, Patrick

    2016-01-01

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063

  11. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-12-16

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events.

  12. Consolidation of visual associative long-term memory in the temporal cortex of primates.

    PubMed

    Miyashita, Y; Kameyama, M; Hasegawa, I; Fukushima, T

    1998-01-01

    Neuropsychological theories have proposed a critical role for the interaction between the medial temporal lobe and the neocortex in the formation of long-term memory for facts and events, which has often been tested by learning of a series of paired words or figures in humans. We have examined neural mechanisms underlying the memory "consolidation" process by single-unit recording and molecular biological methods in an animal model of a visual pair-association task in monkeys. In our previous studies, we found that long-term associative representations of visual objects are acquired through learning in the neural network of the anterior inferior temporal (IT) cortex. In this article, we propose the hypothesis that limbic neurons undergo rapid modification of synaptic connectivity and provide backward signals that guide the reorganization of neocortical neural circuits. Two experiments tested this hypothesis: (1) we examined the role of the backward connections from the medial temporal lobe to the IT cortex by injecting ibotenic acid into the entorhinal and perirhinal cortices, which provide massive backward projections ipsilaterally to the IT cortex. We found that the limbic lesion disrupted the associative code of the IT neurons between the paired associates, without impairing the visual response to each stimulus. (2) We then tested the first half of this hypothesis by detecting the expression of immediate-early genes in the monkey temporal cortex. We found specific expression of zif268 during the learning of a new set of paired associates in the pair-association task, most intensively in area 36 of the perirhinal cortex. All these results with the visual pair-association task support our hypothesis and demonstrate that the consolidation process, which was first proposed on the basis of clinico-psychological evidence, can now be examined in primates using neurophysiological and molecular biological approaches. Copyright 1998 Academic Press.

  13. Multi-voxel patterns of visual category representation during episodic encoding are predictive of subsequent memory

    PubMed Central

    Kuhl, Brice A.; Rissman, Jesse; Wagner, Anthony D.

    2012-01-01

    Successful encoding of episodic memories is thought to depend on contributions from prefrontal and temporal lobe structures. Neural processes that contribute to successful encoding have been extensively explored through univariate analyses of neuroimaging data that compare mean activity levels elicited during the encoding of events that are subsequently remembered vs. those subsequently forgotten. Here, we applied pattern classification to fMRI data to assess the degree to which distributed patterns of activity within prefrontal and temporal lobe structures elicited during the encoding of word-image pairs were diagnostic of the visual category (Face or Scene) of the encoded image. We then assessed whether representation of category information was predictive of subsequent memory. Classification analyses indicated that temporal lobe structures contained information robustly diagnostic of visual category. Information in prefrontal cortex was less diagnostic of visual category, but was nonetheless associated with highly reliable classifier-based evidence for category representation. Critically, trials associated with greater classifier-based estimates of category representation in temporal and prefrontal regions were associated with a higher probability of subsequent remembering. Finally, consideration of trial-by-trial variance in classifier-based measures of category representation revealed positive correlations between prefrontal and temporal lobe representations, with the strength of these correlations varying as a function of the category of image being encoded. Together, these results indicate that multi-voxel representations of encoded information can provide unique insights into how visual experiences are transformed into episodic memories. PMID:21925190
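The analysis logic of this record (decode visual category from multi-voxel patterns, then relate classifier evidence to behavior) can be sketched on synthetic data. This is a toy nearest-centroid decoder with made-up "voxel" patterns, not the authors' fMRI pipeline or their classifier; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_vox = 200, 50

# Synthetic stand-ins for category-specific activity patterns (face vs. scene).
face_mu = rng.normal(size=n_vox)
scene_mu = rng.normal(size=n_vox)
labels = rng.integers(0, 2, n_trials)  # 0 = face, 1 = scene
X = np.where(labels[:, None] == 0, face_mu, scene_mu) \
    + rng.normal(scale=1.0, size=(n_trials, n_vox))   # trial-wise noise

def nearest_centroid_evidence(X_train, y_train, X_test):
    """Signed classifier evidence per test trial: positive favors class 1."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    return np.linalg.norm(X_test - c0, axis=1) - np.linalg.norm(X_test - c1, axis=1)

# Split-half cross-validation: train on the first half, test on the second.
half = n_trials // 2
evidence = nearest_centroid_evidence(X[:half], labels[:half], X[half:])
pred = (evidence > 0).astype(int)
acc = (pred == labels[half:]).mean()
```

In the paper's framework, the continuous `evidence` values (not just accuracy) are the quantity of interest: trials with stronger category evidence at encoding predicted a higher probability of subsequent remembering.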

  14. Neuronal correlate of visual associative long-term memory in the primate temporal cortex

    NASA Astrophysics Data System (ADS)

    Miyashita, Yasushi

    1988-10-01

    In human long-term memory, ideas and concepts become associated in the learning process [1]. No neuronal correlate for this cognitive function has so far been described, except that memory traces are thought to be localized in the cerebral cortex; the temporal lobe has been assigned as the site for visual experience because electric stimulation of this area results in imagery recall [2], and lesions produce deficits in visual recognition of objects [3-9]. We previously reported that in the anterior ventral temporal cortex of monkeys, individual neurons have a sustained activity that is highly selective for a few of the 100 coloured fractal patterns used in a visual working-memory task [10]. Here I report the development of this selectivity through repeated trials involving the working memory. The few patterns for which a neuron was conjointly selective were frequently related to each other through stimulus-stimulus association imposed during training. The results indicate that the selectivity acquired by these cells represents a neuronal correlate of the associative long-term memory of pictures.

  15. Differences in visual vs. verbal memory impairments as a result of focal temporal lobe damage in patients with traumatic brain injury.

    PubMed

    Ariza, Mar; Pueyo, Roser; Junqué, Carme; Mataró, María; Poca, María Antonia; Mena, Maria Pau; Sahuquillo, Juan

    2006-09-01

    The aim of the present study was to determine whether the type of lesion in a sample of moderate and severe traumatic brain injury (TBI) was related to material-specific memory impairment. Fifty-nine patients with TBI were classified into three groups according to whether the site of the lesion was right temporal, left temporal or diffuse. Six-months post-injury, visual (Warrington's Facial Recognition Memory Test and Rey's Complex Figure Test) and verbal (Rey's Auditory Verbal Learning Test) memories were assessed. Visual memory deficits assessed by facial memory were associated with right temporal lobe lesion, whereas verbal memory performance assessed with a list of words was related to left temporal lobe lesion. The group with diffuse injury showed both verbal and visual memory impairment. These results suggest a material-specific memory impairment in moderate and severe TBI after focal temporal lesions and a non-specific memory impairment after diffuse damage.

  16. The role of temporal synchrony as a binding cue for visual persistence in early visual areas: an fMRI study.

    PubMed

    Wong, Yvonne J; Aldcroft, Adrian J; Large, Mary-Ellen; Culham, Jody C; Vilis, Tutis

    2009-12-01

    We examined the role of temporal synchrony-the simultaneous appearance of visual features-in the perceptual and neural processes underlying object persistence. When a binding cue (such as color or motion) momentarily exposes an object from a background of similar elements, viewers remain aware of the object for several seconds before it perceptually fades into the background, a phenomenon known as object persistence. We showed that persistence from temporal stimulus synchrony, like that arising from motion and color, is associated with activation in the lateral occipital (LO) area, as measured by functional magnetic resonance imaging. We also compared the distribution of occipital cortex activity related to persistence to that of iconic visual memory. Although activation related to iconic memory was largely confined to LO, activation related to object persistence was present across V1 to LO, peaking in V3 and V4, regardless of the binding cue (temporal synchrony, motion, or color). Although persistence from motion cues was not associated with higher activation in the MT+ motion complex, persistence from color cues was associated with increased activation in V4. Taken together, these results demonstrate that although persistence is a form of visual memory, it relies on neural mechanisms different from those of iconic memory. That is, persistence not only activates LO in a cue-independent manner, it also recruits visual areas that may be necessary to maintain binding between object elements.

  17. Visuocortical Changes During Delay and Trace Aversive Conditioning: Evidence From Steady-State Visual Evoked Potentials

    PubMed Central

    Miskovic, Vladimir; Keil, Andreas

    2015-01-01

    The visual system is biased towards sensory cues that have been associated with danger or harm through temporal co-occurrence. An outstanding question about conditioning-induced changes in visuocortical processing is the extent to which they are driven primarily by top-down factors such as expectancy or by low-level factors such as the temporal proximity between conditioned stimuli and aversive outcomes. Here, we examined this question using two different differential aversive conditioning experiments: participants learned to associate a particular grating stimulus with an aversive noise that was presented either in close temporal proximity (delay conditioning experiment) or after a prolonged stimulus-free interval (trace conditioning experiment). In both experiments we probed cue-related cortical responses by recording steady-state visual evoked potentials (ssVEPs). Although behavioral ratings indicated that all participants successfully learned to discriminate between the grating patterns that predicted the presence versus absence of the aversive noise, selective amplification of population-level responses in visual cortex for the conditioned danger signal was observed only when the grating and the noise were temporally contiguous. Our findings are in line with notions purporting that changes in the electrocortical response of visual neurons induced by aversive conditioning are a product of Hebbian associations among sensory cell assemblies rather than being driven entirely by expectancy-based, declarative processes. PMID:23398582

  18. Visual Temporal Processing in Dyslexia and the Magnocellular Deficit Theory: The Need for Speed?

    ERIC Educational Resources Information Center

    McLean, Gregor M. T.; Stuart, Geoffrey W.; Coltheart, Veronika; Castles, Anne

    2011-01-01

    A controversial question in reading research is whether dyslexia is associated with impairments in the magnocellular system and, if so, how these low-level visual impairments might affect reading acquisition. This study used a novel chromatic flicker perception task to specifically explore "temporal" aspects of magnocellular functioning…

  19. Visual agnosia and prosopagnosia secondary to melanoma metastases: case report

    PubMed Central

    Frota, Norberto Anízio Ferreira; Pinto, Lécio Figueira; Porto, Claudia Sellitto; de Aguiar, Paulo Henrique Pires; Castro, Luiz Henrique Martins; Caramelli, Paulo

    2007-01-01

    The association of visual agnosia and prosopagnosia with cerebral metastasis is very rare. The presence of symmetric and bilateral cerebral metastases of melanoma is also uncommon. We report the case of a 34-year-old man who was admitted to hospital with seizures and a three-month history of headache, with blurred vision during the past month. A previous history of melanoma resection was obtained. CT of the skull showed bilateral heterogeneous hypodense lesions in the occipito-temporal regions, with a ring pattern of contrast enhancement. Surgical resection of both metastatic lesions was performed, after which the patient developed visual agnosia and prosopagnosia. On follow-up, he showed partial recovery of visual agnosia, while prosopagnosia was still evident. The relevance of this case is the rare presentation of metastatic malignant melanoma affecting homologous occipito-temporal areas associated with prosopagnosia and associative visual agnosia. PMID:29213375

  20. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    PubMed

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  21. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA

    PubMed Central

    Wilbiks, Jonathan M. P.; Dyson, Benjamin J.

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus. PMID:27977790

  22. Temporal visual field defects are associated with monocular inattention in chiasmal pathology.

    PubMed

    Fledelius, Hans C

    2009-11-01

    Chiasmal lesions have been shown to give rise occasionally to uni-ocular temporal inattention, which cannot be compensated for by volitional eye movement. This article describes the assessments of 46 such patients with chiasmal pathology. It aims to determine the clinical spectrum of this disorder, including interference with reading. The study was a retrospective consecutive observational clinical case study over a 7-year period comprising 46 patients with chiasmal field loss of varying degrees. Reading behaviour during monocular visual acuity testing was observed in consecutive patients who appeared unable to read optotypes on the temporal side of the chart. Visual fields were evaluated by kinetic (Goldmann) and static (Octopus) techniques. Five patients who clearly manifested this condition are presented in more detail. The results of visual field testing were related to the absence or presence of uni-ocular visual inattentive behaviour during distance visual acuity testing and/or reading of printed text. Despite normal eye movements, the 46 patients making up the clinical series perceived only optotypes in the nasal part of the chart, in one eye or in both, when tested for each eye in turn. The temporal optotypes were ignored, and this behaviour persisted despite instruction to search for any additional letters temporal to those which had been seen. This phenomenon of visual inattention affected both eyes in 18 patients and one eye in the remaining 28. Partial or full reversibility after treatment was recorded in 21 of the 39 patients for whom reliable follow-up data were available. Reading of text was affected in 24 individuals, and permanently so in six. A neglect-like spatial unawareness and a lack of cognitive compensation for varying degrees of temporal visual field loss were present in all the patients observed. Not only is visual field loss a feature of chiasmal pathology, but the higher visual function of affording attention within the temporal visual field by means of using conscious thought to invoke appropriate compensatory eye movement was also absent. This suggests the possibility of 'trans-synaptic dysfunction' caused by loss of visual input to higher visual centres. When inattention to the temporal side is manifest on monocular visual testing it should raise the suspicion of chiasmal pathology.

  3. Visual Object Detection, Categorization, and Identification Tasks Are Associated with Different Time Courses and Sensitivities

    ERIC Educational Resources Information Center

    de la Rosa, Stephan; Choudhery, Rabia N.; Chatziastros, Astros

    2011-01-01

    Recent evidence suggests that the recognition of an object's presence and its explicit recognition are temporally closely related. Here we re-examined the time course (using a fine and a coarse temporal resolution) and the sensitivity of three possible component processes of visual object recognition. In particular, participants saw briefly…

  4. The associations between multisensory temporal processing and symptoms of schizophrenia.

    PubMed

    Stevenson, Ryan A; Park, Sohee; Cochran, Channing; McIntosh, Lindsey G; Noel, Jean-Paul; Barense, Morgan D; Ferber, Susanne; Wallace, Mark T

    2017-01-01

    Recent neurobiological accounts of schizophrenia have included an emphasis on changes in sensory processing. These sensory and perceptual deficits can have a cascading effect onto higher-level cognitive processes and clinical symptoms. One form of sensory dysfunction that has been consistently observed in schizophrenia is altered temporal processing. In this study, we investigated temporal processing within and across the auditory and visual modalities in individuals with schizophrenia (SCZ) and age-matched healthy controls. Individuals with SCZ showed auditory and visual temporal processing abnormalities, as well as multisensory temporal processing dysfunction that extended beyond that attributable to unisensory processing dysfunction. Most importantly, these multisensory temporal deficits were associated with the severity of hallucinations. This link between atypical multisensory temporal perception and clinical symptomatology suggests that clinical symptoms of schizophrenia may be at least partly a result of cascading effects from (multi)sensory disturbances. These results are discussed in terms of underlying neural bases and the possible implications for remediation. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Gender-specific effects of emotional modulation on visual temporal order thresholds.

    PubMed

    Liang, Wei; Zhang, Jiyuan; Bao, Yan

    2015-09-01

    Emotions affect temporal information processing in the low-frequency time window of a few seconds, but little is known about their effect in the high-frequency domain of some tens of milliseconds. The present study aims to investigate whether negative and positive emotional states influence the ability to discriminate the temporal order of visual stimuli, and whether gender plays a role in temporal processing. Due to the hemispheric lateralization of emotion, a hemispheric asymmetry between the left and the right visual field might be expected. Using a block design, subjects were primed with neutral, negative and positive emotional pictures before performing temporal order judgment tasks. Results showed that male subjects exhibited similarly reduced order thresholds under negative and positive emotional states, while female subjects demonstrated increased threshold under positive emotional state and reduced threshold under negative emotional state. Moreover, emotions influenced female subjects more intensely than male subjects, and no hemispheric lateralization was observed. These observations indicate an influence of emotional states on temporal order processing of visual stimuli, and they suggest a gender difference, possibly associated with differences in emotional stability.
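
    Order thresholds of the kind reported here are typically estimated by fitting a psychometric function to temporal order judgments collected across stimulus-onset asynchronies (SOAs). As a minimal sketch (not the authors' analysis code; the observer parameters and SOAs below are hypothetical), a cumulative Gaussian can be fitted by grid search, with the 50% point giving the point of subjective simultaneity (PSS) and the fitted sigma setting the order threshold:

```python
import math

def cum_gauss(soa, pss, sigma):
    """P('probe judged first') as a function of SOA (ms)."""
    return 0.5 * (1.0 + math.erf((soa - pss) / (sigma * math.sqrt(2.0))))

def fit_toj(soas, p_first):
    """Least-squares grid search for PSS (bias, ms) and sigma (ms)."""
    best_err, best_pss, best_sigma = float("inf"), 0.0, 1.0
    for i in range(-80, 81):                  # PSS candidates: -40..40 ms
        pss = i * 0.5
        for j in range(2, 201):               # sigma candidates: 1..100 ms
            sigma = j * 0.5
            err = sum((cum_gauss(s, pss, sigma) - p) ** 2
                      for s, p in zip(soas, p_first))
            if err < best_err:
                best_err, best_pss, best_sigma = err, pss, sigma
    return best_pss, best_sigma

# Hypothetical observer with a 10 ms bias and sigma = 30 ms.
soas = [-90, -60, -30, 0, 30, 60, 90]
p_first = [cum_gauss(s, 10.0, 30.0) for s in soas]
pss, sigma = fit_toj(soas, p_first)
```

Under this convention, the SOA separating the 75% from the 50% point (about 0.674 * sigma) is one common definition of the order threshold; a smaller sigma corresponds to finer temporal resolution.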

  6. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences

    PubMed Central

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns. PMID:26147887

  7. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences.

    PubMed

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns.
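
    The temporal constraint on local neural activities that this model applies is commonly realized with leaky-integrator units whose time constants differ across levels, as in multiple-timescale recurrent networks. A minimal sketch of that mechanism follows, with hypothetical parameters rather than the paper's actual architecture:

```python
import math

class LeakyUnit:
    """Continuous-time RNN unit: a larger tau gives slower dynamics,
    i.e. a longer temporal integration window (a 'higher' level)."""
    def __init__(self, tau):
        self.tau = tau
        self.u = 0.0                      # internal potential

    def step(self, drive):
        # Euler update of du/dt = (-u + drive) / tau
        self.u += (1.0 / self.tau) * (-self.u + drive)
        return math.tanh(self.u)          # unit activation

fast, slow = LeakyUnit(tau=2.0), LeakyUnit(tau=50.0)

# Feed a brief pulse, then silence; the fast unit responds strongly and
# forgets quickly, while the slow unit responds weakly but retains a trace.
trace_fast, trace_slow = [], []
for t in range(60):
    drive = 1.0 if t < 5 else 0.0
    trace_fast.append(fast.step(drive))
    trace_slow.append(slow.step(drive))
```

Stacking slow units above fast ones lets the higher level track longer-range structure in action sequences, which is the kind of self-organized functional hierarchy the authors describe.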

  8. Change of temporal-order judgment of sounds during long-lasting exposure to large-field visual motion.

    PubMed

    Teramoto, Wataru; Watanabe, Hiroshi; Umemura, Hiroyuki

    2008-01-01

    The perceived temporal order of external successive events does not always follow their physical temporal order. We examined the contribution of self-motion mechanisms in the perception of temporal order in the auditory modality. We measured perceptual biases in the judgment of the temporal order of two short sounds presented successively, while participants experienced visually induced self-motion (yaw-axis circular vection) elicited by viewing long-lasting large-field visual motion. In experiment 1, a pair of white-noise patterns was presented to participants at various stimulus-onset asynchronies through headphones, while they experienced visually induced self-motion. Perceived temporal order of auditory events was modulated by the direction of the visual motion (or self-motion). Specifically, the sound presented to the ear in the direction opposite to the visual motion (ie heading direction) was perceived prior to the sound presented to the ear in the same direction. Experiments 2A and 2B were designed to reduce the contributions of decisional and/or response processes. In experiment 2A, the directional cueing of the background (left or right) and the response dimension (high pitch or low pitch) were not spatially associated. In experiment 2B, participants were additionally asked to report which of the two sounds was perceived 'second'. Almost the same results as in experiment 1 were observed, suggesting that the change in temporal order of auditory events during large-field visual motion reflects a change in perceptual processing. Experiment 3 showed that the biases in the temporal-order judgments of auditory events were caused by concurrent actual self-motion with a rotatory chair. In experiment 4, using a small display, we showed that 'pure' long exposure to visual motion without the sensation of self-motion was not responsible for this phenomenon. 
These results are consistent with previous studies reporting a change in the perceived temporal order of visual or tactile events depending on the direction of self-motion. Hence, large-field induced (ie optic flow) self-motion can affect the temporal order of successive external events across various modalities.

  9. Assessing the effect of physical differences in the articulation of consonants and vowels on audiovisual temporal perception

    PubMed Central

    Vatakis, Argiro; Maragos, Petros; Rodomagoulakis, Isidoros; Spence, Charles

    2012-01-01

    We investigated how the physical differences associated with the articulation of speech affect the temporal aspects of audiovisual speech perception. Video clips of consonants and vowels uttered by three different speakers were presented. The video clips were analyzed using an auditory-visual signal saliency model in order to compare signal saliency and behavioral data. Participants made temporal order judgments (TOJs) regarding which speech-stream (auditory or visual) had been presented first. The sensitivity of participants' TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. We expected that in the case of the place of articulation and roundedness, where the visual-speech signal is more salient, temporal perception of speech would be modulated by the visual-speech signal. No such effect was expected for the manner of articulation or height. The results demonstrate that for place and manner of articulation, participants' temporal percept was affected (although not always significantly) by highly-salient speech-signals with the visual-signals requiring smaller visual-leads at the PSS. This was not the case when height was evaluated. These findings suggest that in the case of audiovisual speech perception, a highly salient visual-speech signal may lead to higher probabilities regarding the identity of the auditory-signal that modulate the temporal window of multisensory integration of the speech-stimulus. PMID:23060756

  10. Functionally defined white matter reveals segregated pathways in human ventral temporal cortex associated with category-specific processing

    PubMed Central

    Gomez, Jesse; Pestilli, Franco; Witthoft, Nathan; Golarai, Golijeh; Liberman, Alina; Poltoratski, Sonia; Yoon, Jennifer; Grill-Spector, Kalanit

    2014-01-01

    It is unknown if the white matter properties associated with specific visual networks selectively affect category-specific processing. In a novel protocol we combined measurements of white matter structure, functional selectivity, and behavior in the same subjects. We find two parallel white matter pathways along the ventral temporal lobe connecting to either face-selective or place-selective regions. Diffusion properties of portions of these tracts adjacent to face- and place-selective regions of ventral temporal cortex correlate with behavioral performance for face or place processing, respectively. Strikingly, adults with developmental prosopagnosia (face blindness) express an atypical structure-behavior relationship near face-selective cortex, suggesting that white matter atypicalities in this region may have behavioral consequences. These data suggest that examining the interplay between cortical function, anatomical connectivity, and visual behavior is integral to understanding functional networks and their role in producing visual abilities and deficits. PMID:25569351

  11. Visual Temporal Filtering and Intermittent Visual Displays.

    DTIC Science & Technology

    1986-08-08

    support Ehud Kaplan, Associate Professor, 20% time and effort Michelangelo Rossetto, Research Associate, 20% time and support Margo Greene, Research...reached and are described as follows. The variable raster rate display was designed and built by Michelangelo Rossetto and Norman Milkman, Research

  12. Contribution of Temporal Processing Skills to Reading Comprehension in 8-Year-Olds: Evidence for a Mediation Effect of Phonological Awareness

    ERIC Educational Resources Information Center

    Malenfant, Nathalie; Grondin, Simon; Boivin, Michel; Forget-Dubois, Nadine; Robaey, Philippe; Dionne, Ginette

    2012-01-01

    This study tested whether the association between temporal processing (TP) and reading is mediated by phonological awareness (PA) in a normative sample of 615 eight-year-olds. TP was measured with auditory and bimodal (visual-auditory) temporal order judgment tasks and PA with a phoneme deletion task. PA partially mediated the association between…

  13. Neural signatures of lexical tone reading.

    PubMed

    Kwok, Veronica P Y; Wang, Tianfu; Chen, Siping; Yakpo, Kofi; Zhu, Linlin; Fox, Peter T; Tan, Li Hai

    2015-01-01

    Research on how lexical tone is neuroanatomically represented in the human brain is central to our understanding of cortical regions subserving language. Past studies have exclusively focused on tone perception of the spoken language, and little is known as to the lexical tone processing in reading visual words and its associated brain mechanisms. In this study, we performed two experiments to identify neural substrates in Chinese tone reading. First, we used a tone judgment paradigm to investigate tone processing of visually presented Chinese characters. We found that, relative to baseline, tone perception of printed Chinese characters were mediated by strong brain activation in bilateral frontal regions, left inferior parietal lobule, left posterior middle/medial temporal gyrus, left inferior temporal region, bilateral visual systems, and cerebellum. Surprisingly, no activation was found in superior temporal regions, brain sites well known for speech tone processing. In activation likelihood estimation (ALE) meta-analysis to combine results of relevant published studies, we attempted to elucidate whether the left temporal cortex activities identified in Experiment one is consistent with those found in previous studies of auditory lexical tone perception. ALE results showed that only the left superior temporal gyrus and putamen were critical in auditory lexical tone processing. These findings suggest that activation in the superior temporal cortex associated with lexical tone perception is modality-dependent. © 2014 Wiley Periodicals, Inc.

  14. Temporal Order Judgment in Dyslexia--Task Difficulty or Temporal Processing Deficiency?

    ERIC Educational Resources Information Center

    Skottun, Bernt C.; Skoyles, John R.

    2010-01-01

    Dyslexia has been widely held to be associated with deficient temporal processing. It is, however, not established that the slower visual processing of dyslexic readers is not a secondary effect of task difficulty. To illustrate this we re-analyze data from Liddle et al. (2009) who studied temporal order judgment in dyslexia and plotted the…

  15. Temporally Coordinated Deep Brain Stimulation in the Dorsal and Ventral Striatum Synergistically Enhances Associative Learning.

    PubMed

    Katnani, Husam A; Patel, Shaun R; Kwon, Churl-Su; Abdel-Aziz, Samer; Gale, John T; Eskandar, Emad N

    2016-01-04

    The primate brain has the remarkable ability to map sensory stimuli onto motor behaviors that can lead to positive outcomes. We have previously shown that during the reinforcement of visual-motor behavior, activity in the caudate nucleus is correlated with the rate of learning. Moreover, phasic microstimulation in the caudate during the reinforcement period was shown to enhance associative learning, demonstrating the importance of temporal specificity in manipulating learning-related changes. Here we present evidence that extends our previous finding by demonstrating that temporally coordinated phasic deep brain stimulation across both the nucleus accumbens and caudate can further enhance associative learning. Monkeys performed a visual-motor associative learning task and received stimulation at time points critical to learning-related changes. Resulting performance revealed an enhancement in the rate, ceiling, and reaction times of learning. Stimulation of each brain region alone or at different time points did not generate the same effect.

  16. Associative-memory representations emerge as shared spatial patterns of theta activity spanning the primate temporal cortex

    PubMed Central

    Nakahara, Kiyoshi; Adachi, Ken; Kawasaki, Keisuke; Matsuo, Takeshi; Sawahata, Hirohito; Majima, Kei; Takeda, Masaki; Sugiyama, Sayaka; Nakata, Ryota; Iijima, Atsuhiko; Tanigawa, Hisashi; Suzuki, Takafumi; Kamitani, Yukiyasu; Hasegawa, Isao

    2016-01-01

    Highly localized neuronal spikes in primate temporal cortex can encode associative memory; however, whether memory formation involves area-wide reorganization of ensemble activity, which often accompanies rhythmicity, or just local microcircuit-level plasticity, remains elusive. Using high-density electrocorticography, we capture local-field potentials spanning the monkey temporal lobes, and show that the visual pair-association (PA) memory is encoded in spatial patterns of theta activity in areas TE, 36, and, partially, in the parahippocampal cortex, but not in the entorhinal cortex. The theta patterns elicited by learned paired associates are distinct between pairs, but similar within pairs. This pattern similarity, emerging through novel PA learning, allows a machine-learning decoder trained on theta patterns elicited by a particular visual item to correctly predict the identity of those elicited by its paired associate. Our results suggest that the formation and sharing of widespread cortical theta patterns via learning-induced reorganization are involved in the mechanisms of associative memory representation. PMID:27282247
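
    The decoding analysis described above can be approximated with a correlation-based nearest-pattern classifier: train on the theta pattern elicited by one visual item, then test whether the pattern elicited by its paired associate is assigned to the same pair. The sketch below uses made-up channel values and a Pearson-similarity rule, not the authors' actual machine-learning pipeline:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length patterns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def decode(train, probe):
    """Label of the training pattern most correlated with the probe."""
    return max(train, key=lambda lbl: pearson(train[lbl], probe))

# Hypothetical electrode-wise theta power patterns (one value per channel).
# Learned paired associates are assumed to evoke similar spatial patterns.
pair_a_item1 = [1.0, 0.2, 0.9, 0.1, 0.8, 0.3]
pair_a_item2 = [0.9, 0.3, 1.0, 0.2, 0.7, 0.2]   # resembles its associate
pair_b_item1 = [0.1, 0.9, 0.2, 1.0, 0.3, 0.8]

label = decode({"pair A": pair_a_item1, "pair B": pair_b_item1}, pair_a_item2)
```

Above-chance decoding of a probe item from the pattern of its paired associate, as in the study, is what indicates that the two members of a learned pair share a spatial theta representation.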

  17. Multisensory connections of monkey auditory cerebral cortex

    PubMed Central

    Smiley, John F.; Falchier, Arnaud

    2009-01-01

    Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources. PMID:19619628

  18. Evidence for a basal temporal visual language center: cortical stimulation producing pure alexia.

    PubMed

    Mani, J; Diehl, B; Piao, Z; Schuele, S S; Lapresto, E; Liu, P; Nair, D R; Dinner, D S; Lüders, H O

    2008-11-11

    Dejerine and Benson and Geschwind postulated disconnection of the dominant angular gyrus from both visual association cortices as the basis for pure alexia, emphasizing disruption of white matter tracts in the dominant temporooccipital region. Recently functional imaging studies provide evidence for direct participation of basal temporal and occipital cortices in the cognitive process of reading. The exact location and function of these areas remain a matter of debate. To confirm the participation of the basal temporal region in reading. Extraoperative electrical stimulation of the dominant hemisphere was performed in three subjects using subdural electrodes, as part of presurgical evaluation for refractory epilepsy. Pure alexia was reproduced during cortical stimulation of the dominant posterior fusiform and inferior temporal gyri in all three patients. Stimulation resulted in selective reading difficulty with intact auditory comprehension and writing. Reading difficulty involved sentences and words with intact letter by letter reading. Picture naming difficulties were also noted at some electrodes. This region is located posterior to and contiguous with the basal temporal language area (BTLA) where stimulation resulted in global language dysfunction in visual and auditory realms. The location corresponded with the visual word form area described on functional MRI. These observations support the existence of a visual language area in the dominant fusiform and occipitotemporal gyri, contiguous with basal temporal language area. A portion of visual language area was exclusively involved in lexical processing while the other part of this region processed both lexical and nonlexical symbols.

  19. Processing Stages Underlying Word Recognition in the Anteroventral Temporal Lobe

    PubMed Central

    Halgren, Eric; Wang, Chunmao; Schomer, Donald L.; Knake, Susanne; Marinkovic, Ksenija; Wu, Julian; Ulbert, Istvan

    2006-01-01

    The anteroventral temporal lobe integrates visual, lexical, semantic and mnestic aspects of word-processing, through its reciprocal connections with the ventral visual stream, language areas, and the hippocampal formation. We used linear microelectrode arrays to probe population synaptic currents and neuronal firing in different cortical layers of the anteroventral temporal lobe, during semantic judgments with implicit priming, and overt word recognition. Since different extrinsic and associative inputs preferentially target different cortical layers, this method can help reveal the sequence and nature of local processing stages at a higher resolution than was previously possible. The initial response in inferotemporal and perirhinal cortices is a brief current sink beginning at ~120ms, and peaking at ~170ms. Localization of this initial sink to middle layers suggests that it represents feedforward input from lower visual areas, and simultaneously increased firing implies that it represents excitatory synaptic currents. Until ~800ms, the main focus of transmembrane current sinks alternates between middle and superficial layers, with the superficial focus becoming increasingly dominant after ~550ms. Since superficial layers are the target of local and feedback associative inputs, this suggests an alternation in predominant synaptic input between feedforward and feedback modes. Word repetition does not affect the initial perirhinal and inferotemporal middle layer sink, but does decrease later activity. Entorhinal activity begins later (~200ms), with greater apparent excitatory postsynaptic currents and multiunit activity in neocortically-projecting than hippocampal-projecting layers. In contrast to perirhinal and entorhinal responses, entorhinal responses are larger to repeated words during memory retrieval. 
These results identify a sequence of physiological activation, beginning with a sharp activation from lower level visual areas carrying specific information to middle layers. This is followed by feedback and associative interactions involving upper cortical layers, which are abbreviated to repeated words. Following bottom-up and associative stages, top-down recollective processes may be driven by entorhinal cortex. Word processing involves a systematic sequence of fast feedforward information transfer from visual areas to anteroventral temporal cortex, followed by prolonged interactions of this feedforward information with local associations, and feedback mnestic information from the medial temporal lobe. PMID:16488158

  20. Regional cerebral blood flow in Parkinson disease with nonpsychotic visual hallucinations.

    PubMed

    Oishi, N; Udaka, F; Kameyama, M; Sawamoto, N; Hashikawa, K; Fukuyama, H

    2005-12-13

    Patients with Parkinson disease (PD) often experience visual hallucinations (VH) with retained insight (nonpsychotic) but the precise mechanism remains unclear. To clarify which neural substrates participate in nonpsychotic VH in PD, the authors evaluated regional cerebral blood flow (rCBF) changes in patients with PD and VH. The authors compared 24 patients with PD who had nonpsychotic VH (hallucinators) and 41 patients with PD who had never experienced VH (non-hallucinators) using SPECT images with N-isopropyl-p-[(123)I]iodoamphetamine. There were no significant differences in age, sex, duration of disease, doses of PD medications, Hoehn and Yahr scale, or Mini-Mental State Examination (MMSE) scores between the two groups. The rCBF data were analyzed using statistical parametric mapping (SPM). The rCBF in the right fusiform gyrus was lower in the hallucinators than in the non-hallucinators (corrected p < 0.05 at cluster levels). The hallucinators revealed higher rCBF in the right superior and middle temporal gyri than the non-hallucinators (uncorrected p < 0.001). These significant differences were demonstrated after MMSE scores and duration of disease, which are the relevant factors associated with VH, were covariated out. Nonpsychotic visual hallucinations in Parkinson disease (PD) may be associated with hypoperfusion in the right fusiform gyrus and hyperperfusion in the right superior and middle temporal gyri. These temporal regions are important for visual object recognition and these regional cerebral blood flow changes are associated with inappropriate visual processing and are responsible for nonpsychotic visual hallucinations in PD.
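
    Group comparisons of rCBF like this one are voxelwise SPM analyses; at the level of a single region, the same logic reduces to a two-sample test between groups. A toy illustration using Welch's t statistic on hypothetical normalized fusiform values (not the study's data, and not its SPM pipeline):

```python
def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / (va / na + vb / nb) ** 0.5

# Hypothetical normalized right-fusiform rCBF values per subject.
hallucinators     = [0.88, 0.90, 0.85, 0.87, 0.91, 0.86]
non_hallucinators = [0.95, 0.97, 0.93, 0.96, 0.94, 0.98]

t = welch_t(hallucinators, non_hallucinators)   # negative: lower flow in hallucinators
```

A large negative t here corresponds to the reported fusiform hypoperfusion in hallucinators; the actual study tested every voxel with SPM and applied cluster-level correction rather than a single regional test.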

  1. Semantics of the visual environment encoded in parahippocampal cortex

    PubMed Central

    Bonner, Michael F.; Price, Amy Rose; Peelle, Jonathan E.; Grossman, Murray

    2016-01-01

    Semantic representations capture the statistics of experience and store this information in memory. A fundamental component of this memory system is knowledge of the visual environment, including knowledge of objects and their associations. Visual semantic information underlies a range of behaviors, from perceptual categorization to cognitive processes such as language and reasoning. Here we examine the neuroanatomic system that encodes visual semantics. Across three experiments, we found converging evidence indicating that knowledge of verbally mediated visual concepts relies on information encoded in a region of the ventral-medial temporal lobe centered on parahippocampal cortex. In an fMRI study, this region was strongly engaged by the processing of concepts relying on visual knowledge but not by concepts relying on other sensory modalities. In a study of patients with the semantic variant of primary progressive aphasia (semantic dementia), atrophy that encompassed this region was associated with a specific impairment in verbally mediated visual semantic knowledge. Finally, in a structural study of healthy adults from the fMRI experiment, gray matter density in this region related to individual variability in the processing of visual concepts. The anatomic location of these findings aligns with recent work linking the ventral-medial temporal lobe with high-level visual representation, contextual associations, and reasoning through imagination. Together this work suggests a critical role for parahippocampal cortex in linking the visual environment with knowledge systems in the human brain. PMID:26679216

  2. Semantics of the Visual Environment Encoded in Parahippocampal Cortex.

    PubMed

    Bonner, Michael F; Price, Amy Rose; Peelle, Jonathan E; Grossman, Murray

    2016-03-01

    Semantic representations capture the statistics of experience and store this information in memory. A fundamental component of this memory system is knowledge of the visual environment, including knowledge of objects and their associations. Visual semantic information underlies a range of behaviors, from perceptual categorization to cognitive processes such as language and reasoning. Here we examine the neuroanatomic system that encodes visual semantics. Across three experiments, we found converging evidence indicating that knowledge of verbally mediated visual concepts relies on information encoded in a region of the ventral-medial temporal lobe centered on parahippocampal cortex. In an fMRI study, this region was strongly engaged by the processing of concepts relying on visual knowledge but not by concepts relying on other sensory modalities. In a study of patients with the semantic variant of primary progressive aphasia (semantic dementia), atrophy that encompassed this region was associated with a specific impairment in verbally mediated visual semantic knowledge. Finally, in a structural study of healthy adults from the fMRI experiment, gray matter density in this region related to individual variability in the processing of visual concepts. The anatomic location of these findings aligns with recent work linking the ventral-medial temporal lobe with high-level visual representation, contextual associations, and reasoning through imagination. Together, this work suggests a critical role for parahippocampal cortex in linking the visual environment with knowledge systems in the human brain.

  3. Resolution of spatial and temporal visual attention in infants with fragile X syndrome.

    PubMed

    Farzin, Faraz; Rivera, Susan M; Whitney, David

    2011-11-01

    Fragile X syndrome is the most common cause of inherited intellectual impairment and the most common single-gene cause of autism. Individuals with fragile X syndrome present with a neurobehavioural phenotype that includes selective deficits in spatiotemporal visual perception associated with neural processing in frontal-parietal networks of the brain. The goal of the current study was to examine whether reduced resolution of spatial and/or temporal visual attention may underlie perceptual deficits related to fragile X syndrome. Eye tracking was used to psychophysically measure the limits of spatial and temporal attention in infants with fragile X syndrome and age-matched neurotypically developing infants. Results from these experiments revealed that infants with fragile X syndrome experience drastically reduced resolution of temporal attention in a genetic dose-sensitive manner, but have a spatial resolution of attention that is not impaired. Coarse temporal attention could have significant knock-on effects for the development of perceptual, cognitive and motor abilities in individuals with the disorder.

  4. Resolution of spatial and temporal visual attention in infants with fragile X syndrome

    PubMed Central

    Rivera, Susan M.; Whitney, David

    2011-01-01

    Fragile X syndrome is the most common cause of inherited intellectual impairment and the most common single-gene cause of autism. Individuals with fragile X syndrome present with a neurobehavioural phenotype that includes selective deficits in spatiotemporal visual perception associated with neural processing in frontal–parietal networks of the brain. The goal of the current study was to examine whether reduced resolution of spatial and/or temporal visual attention may underlie perceptual deficits related to fragile X syndrome. Eye tracking was used to psychophysically measure the limits of spatial and temporal attention in infants with fragile X syndrome and age-matched neurotypically developing infants. Results from these experiments revealed that infants with fragile X syndrome experience drastically reduced resolution of temporal attention in a genetic dose-sensitive manner, but have a spatial resolution of attention that is not impaired. Coarse temporal attention could have significant knock-on effects for the development of perceptual, cognitive and motor abilities in individuals with the disorder. PMID:22075522

  5. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect.

    PubMed

    Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi

    2015-11-01

    Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant, and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated, and the effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was prominent overall in the left hemisphere, except for right-hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, followed by increased activity around 100 msec that decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and was increased only to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left-hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec), subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted and, if successful, McGurk perception occurs and cortical activity in the left hemisphere further increases between 170 and 260 msec.

  6. An impaired attentional dwell time after parietal and frontal lesions related to impaired selective attention not unilateral neglect.

    PubMed

    Correani, Alessia; Humphreys, Glyn W

    2011-07-01

    The attentional blink, a measure of the temporal dynamics of visual processing, has been documented to be more pronounced following brain lesions that are associated with visual neglect. This suggests that, in addition to their spatial bias in attention, neglect patients may have a prolonged attentional dwell time. Here the attentional dwell time was examined in patients with damage focused on either posterior parietal or frontal cortices. In three experiments, we show that there is an abnormally pronounced attentional dwell time, which does not differ between patients with posterior parietal and with frontal lobe lesions, and that it is associated with a measure of selective attention but not with measures of spatial bias in selection. These effects occurred both when we attempted to match patients and controls for overall differences in performance and when a single, fixed stimulus exposure was used across participants. In Experiments 1 and 2, which required report of colour-form conjunctions, there was evidence that the patients were also impaired at temporal binding, showing errors in feature combination across stimuli and in reporting items in the correct temporal order. Experiment 3, which required only the report of features but introduced task switching, led to similar results. The data suggest that damage to a frontoparietal network can compromise temporal selection of visual stimuli; however, this is not necessarily related to a deficit in hemispatial visual attention but rather to impaired target selection. We discuss the implications for understanding visual selection.

  7. Temporal processing dysfunction in schizophrenia.

    PubMed

    Carroll, Christine A; Boggs, Jennifer; O'Donnell, Brian F; Shekhar, Anantha; Hetrick, William P

    2008-07-01

    Schizophrenia may be associated with a fundamental disturbance in the temporal coordination of information processing in the brain, leading to classic symptoms of schizophrenia such as thought disorder and disorganized and contextually inappropriate behavior. Despite the growing interest and centrality of time-dependent conceptualizations of the pathophysiology of schizophrenia, there remains a paucity of research directly examining overt timing performance in the disorder. Accordingly, the present study investigated timing in schizophrenia using a well-established task of time perception. Twenty-three individuals with schizophrenia and 22 non-psychiatric control participants completed a temporal bisection task, which required participants to make temporal judgments about auditory and visually presented durations ranging from 300 to 600 ms. Both schizophrenia and control groups displayed greater visual compared to auditory timing variability, with no difference between groups in the visual modality. However, individuals with schizophrenia exhibited less temporal precision than controls in the perception of auditory durations. These findings correlated with parameter estimates obtained from a quantitative model of time estimation, and provide evidence of a fundamental deficit in temporal auditory precision in schizophrenia.
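    The temporal bisection task described above lends itself to a simple quantitative treatment: the proportion of "long" responses as a function of probe duration is commonly fit with a cumulative-Gaussian psychometric function, whose midpoint gives the bisection point and whose spread indexes timing variability. A minimal sketch with hypothetical response proportions (the durations match the study's 300-600 ms range, but the data and fitted values are illustrative, not the paper's):

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import curve_fit

    def p_long(duration, bisection_point, sigma):
        """Cumulative-Gaussian psychometric function for temporal bisection:
        probability of classifying a probe duration as 'long'."""
        return norm.cdf(duration, loc=bisection_point, scale=sigma)

    durations = np.array([300, 350, 400, 450, 500, 550, 600])  # probe durations (ms)
    # Hypothetical proportions of 'long' responses at each duration
    prop_long = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.93, 0.99])

    (bp, sigma), _ = curve_fit(p_long, durations, prop_long, p0=[450, 60])
    weber = sigma / bp  # a common index of relative temporal precision
    ```

    A larger fitted sigma (and Weber fraction) corresponds to the reduced temporal precision reported for the schizophrenia group.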

  8. Encoding model of temporal processing in human visual cortex.

    PubMed

    Stigliani, Anthony; Jeska, Brianna; Grill-Spector, Kalanit

    2017-12-19

    How is temporal information processed in human visual cortex? Visual input is relayed to V1 through segregated transient and sustained channels in the retina and lateral geniculate nucleus (LGN). However, there is intense debate as to how sustained and transient temporal channels contribute to visual processing beyond V1. The prevailing view associates transient processing predominantly with motion-sensitive regions and sustained processing with ventral stream regions, while the opposing view suggests that both temporal channels contribute to neural processing beyond V1. Using fMRI, we measured cortical responses to time-varying stimuli and then implemented a two-temporal-channel encoding model to evaluate the contributions of each channel. Unlike the general linear model of fMRI, which predicts responses directly from the stimulus, the encoding approach first models neural responses to the stimulus, from which fMRI responses are derived. This encoding approach not only predicts cortical responses to time-varying stimuli from milliseconds to seconds but also reveals differential contributions of temporal channels across visual cortex. Consistent with the prevailing view, motion-sensitive regions and adjacent lateral occipitotemporal regions are dominated by transient responses. However, ventral occipitotemporal regions are driven by both sustained and transient channels, with transient responses exceeding the sustained. These findings prompt a rethinking of temporal processing in the ventral stream and suggest that transient processing may contribute to rapid extraction of the content of the visual input. Importantly, our encoding approach has broad implications, because it can be applied with fMRI to decipher neural computations at millisecond resolution in any part of the brain. Copyright © 2017 the Author(s). Published by PNAS.
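    The sustained/transient distinction at the heart of this encoding model can be illustrated with a toy simulation: a sustained channel follows the stimulus time course, while a transient channel responds at onsets and offsets via a rectified temporal derivative. This is only a schematic sketch; the channel weights and impulse responses are illustrative placeholders, not the paper's fitted channel models:

    ```python
    import numpy as np

    def two_channel_response(stimulus, dt=0.001, w_sustained=1.0, w_transient=1.0):
        """Toy two-temporal-channel response: the sustained channel tracks
        the stimulus time course; the transient channel responds to onsets
        and offsets (rectified temporal derivative). Weights are illustrative
        free parameters."""
        sustained = stimulus.astype(float)
        transient = np.abs(np.diff(stimulus, prepend=stimulus[0])) / dt
        return w_sustained * sustained + w_transient * transient

    # A 300 ms flash sampled at 1 kHz: sustained response during the flash,
    # transient spikes at its onset and offset
    t = np.arange(0.0, 1.0, 0.001)
    stim = ((t >= 0.2) & (t < 0.5)).astype(float)
    resp = two_channel_response(stim)
    ```

    In the full model, each channel's neural response would additionally be convolved with a hemodynamic response function before comparison with fMRI data.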

  9. Tagging cortical networks in emotion: a topographical analysis

    PubMed Central

    Keil, Andreas; Costa, Vincent; Smith, J. Carson; Sabatinelli, Dean; McGinnis, E. Menton; Bradley, Margaret M.; Lang, Peter J.

    2013-01-01

    Viewing emotional pictures is associated with heightened perception and attention, indexed by a relative increase in visual cortical activity. Visual cortical modulation by emotion is hypothesized to reflect re-entrant connectivity originating in higher-order cortical and/or limbic structures. The present study used dense-array electroencephalography and individual brain anatomy to investigate functional coupling between the visual cortex and other cortical areas during affective picture viewing. Participants viewed pleasant, neutral, and unpleasant pictures that flickered at a rate of 10 Hz to evoke steady-state visual evoked potentials (ssVEPs) in the EEG. The spectral power of ssVEPs was quantified using Fourier transform, and cortical sources were estimated using beamformer spatial filters based on individual structural magnetic resonance images. In addition to lower-tier visual cortex, a network of occipito-temporal and parietal (bilateral precuneus, inferior parietal lobules) structures showed enhanced ssVEP power when participants viewed emotional (either pleasant or unpleasant), compared to neutral pictures. Functional coupling during emotional processing was enhanced between the bilateral occipital poles and a network of temporal (left middle/inferior temporal gyrus), parietal (bilateral parietal lobules), and frontal (left middle/inferior frontal gyrus) structures. These results converge with findings from hemodynamic analyses of emotional picture viewing and suggest that viewing emotionally engaging stimuli is associated with the formation of functional links between visual cortex and the cortical regions underlying attention modulation and preparation for action. PMID:21954087
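    The frequency-tagging logic used here (quantifying ssVEP power at the 10 Hz flicker rate with a Fourier transform) can be sketched on synthetic data. The sampling rate, epoch length, and signal-to-noise level below are assumptions for illustration only:

    ```python
    import numpy as np

    fs = 500.0                       # sampling rate in Hz (assumed for illustration)
    t = np.arange(0.0, 4.0, 1 / fs)  # one 4 s viewing epoch
    rng = np.random.default_rng(0)

    # Synthetic EEG: a 10 Hz steady-state response buried in broadband noise
    eeg = 2.0 * np.sin(2 * np.pi * 10.0 * t) + rng.normal(0.0, 1.0, t.size)

    spectrum = np.fft.rfft(eeg)
    freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
    power = np.abs(spectrum) ** 2 / eeg.size

    # Power in the bin at the 10 Hz flicker tag
    ssvep_power = power[np.argmin(np.abs(freqs - 10.0))]
    ```

    Emotional modulation would then be assessed by comparing tag-frequency power (or beamformer-projected source power) across picture categories.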

  10. Relationship between slow visual processing and reading speed in people with macular degeneration

    PubMed Central

    Cheong, Allen MY; Legge, Gordon E; Lawrence, Mary G; Cheung, Sing-Hang; Ruff, Mary A

    2007-01-01

    Purpose People with macular degeneration (MD) often read slowly even with adequate magnification to compensate for acuity loss. Oculomotor deficits may affect reading in MD, but cannot fully explain the substantial reduction in reading speed. Central-field loss (CFL) is often a consequence of macular degeneration, necessitating the use of peripheral vision for reading. We hypothesized that slower temporal processing of visual patterns in peripheral vision is a factor contributing to slow reading performance in MD patients. Methods Fifteen subjects with MD, including 12 with CFL, and five age-matched control subjects were recruited. Maximum reading speed and critical print size were measured with RSVP (Rapid Serial Visual Presentation). Temporal processing speed was studied by measuring letter-recognition accuracy for strings of three randomly selected letters centered at fixation for a range of exposure times. Temporal threshold was defined as the exposure time yielding 80% recognition accuracy for the central letter. Results Temporal thresholds for the MD subjects ranged from 159 to 5881 ms, much longer than values for age-matched controls in central vision (13 ms, p<0.01). The mean temporal threshold for the 11 MD subjects who used eccentric fixation (1555.8 ± 1708.4 ms) was much longer than the mean temporal threshold (97.0 ms ± 34.2 ms, p<0.01) for the age-matched controls at 10° in the lower visual field. Individual temporal thresholds accounted for 30% of the variance in reading speed (p<0.05). Conclusion The significant association between increased temporal threshold for letter recognition and reduced reading speed is consistent with the hypothesis that slower visual processing of letter recognition is one of the factors limiting reading speed in MD subjects. PMID:17881032
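    The threshold definition used in this study (the exposure time yielding 80% recognition accuracy) can be estimated from accuracies measured at several exposure durations. A minimal sketch using linear interpolation in place of a full psychometric-function fit, with hypothetical data:

    ```python
    import numpy as np

    def temporal_threshold(exposures_ms, accuracy, criterion=0.8):
        """Estimate the exposure time yielding criterion accuracy by linear
        interpolation on the measured psychometric data (a simple stand-in
        for a full psychometric-function fit)."""
        exposures_ms = np.asarray(exposures_ms, float)
        accuracy = np.asarray(accuracy, float)
        order = np.argsort(exposures_ms)
        # np.interp expects increasing x; accuracy rises with exposure here
        return float(np.interp(criterion, accuracy[order], exposures_ms[order]))

    # Hypothetical letter-recognition accuracies at each exposure duration
    thr = temporal_threshold([13, 26, 53, 106, 212],
                             [0.35, 0.55, 0.75, 0.90, 0.98])
    ```

    For MD subjects with eccentric fixation, the same procedure would simply be run on much longer exposure ranges, consistent with the 159-5881 ms thresholds reported.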

  11. Charles Bonnet syndrome in hemianopia, following antero-mesial temporal lobectomy for drug-resistant epilepsy.

    PubMed

    Contardi, Sara; Rubboli, Guido; Giulioni, Marco; Michelucci, Roberto; Pizza, Fabio; Gardella, Elena; Pinardi, Federica; Bartolomei, Ilaria; Tassinari, Carlo Alberto

    2007-09-01

    Charles Bonnet syndrome (CBS) is a disorder characterized by the occurrence of complex visual hallucinations in patients with acquired impairment of vision and without psychiatric disorders. In spite of the high incidence of visual field defects following antero-mesial temporal lobectomy for refractory temporal lobe epilepsy, reports of CBS in patients who underwent this surgical procedure are surprisingly rare. We describe a patient operated on for drug-resistant epilepsy. As a result of left antero-mesial temporal resection, she presented right homonymous hemianopia. A few days after surgery, she started complaining of visual hallucinations, such as static or moving "Lilliputian" human figures, or countryside scenes, restricted to the hemianopic field. The patient was fully aware of their fictitious nature. These disturbances disappeared progressively over a few weeks. The incidence of CBS associated with visual field defects following epilepsy surgery might be underestimated. Patients with post-surgical CBS should be reassured that it is not an epileptic phenomenon, and that it has a benign, self-limiting, course which does not usually require treatment.

  12. Cross-Modal and Intra-Modal Characteristics of Visual Function and Speech Perception Performance in Postlingually Deafened, Cochlear Implant Users

    PubMed Central

    Kim, Min-Beom; Shim, Hyun-Yong; Jin, Sun Hwa; Kang, Soojin; Woo, Jihwan; Han, Jong Chul; Lee, Ji Young; Kim, Martha; Cho, Yang-Sun

    2016-01-01

    Evidence of visual-auditory cross-modal plasticity in deaf individuals has been widely reported. Superior visual abilities of deaf individuals have been shown to result in enhanced reactivity to visual events and/or enhanced peripheral spatial attention. The goal of this study was to investigate the association between visual-auditory cross-modal plasticity and speech perception in post-lingually deafened, adult cochlear implant (CI) users. Post-lingually deafened adults with CIs (N = 14) and a group of normal-hearing adult controls (N = 12) participated in this study. The CI participants were divided into a good performer group (good CI, N = 7) and a poor performer group (poor CI, N = 7) based on word recognition scores. Visual evoked potentials (VEP) were recorded from the temporal and occipital cortex to assess reactivity. Visual field (VF) testing was used to assess spatial attention, and Goldmann perimetry measures were analyzed to identify differences across groups in the VF. The amplitude of the P1 VEP response over the right temporal and occipital cortices was compared among the three groups (control, good CI, poor CI). In addition, the association between VF extent for different stimuli and word perception score was evaluated. The P1 VEP amplitude recorded from the right temporal cortex was larger in the group of poorly performing CI users than in the group of good performers. The P1 amplitude recorded from electrodes near the occipital cortex was smaller for the poorly performing group. P1 VEP amplitude in the right temporal lobe was negatively correlated with speech perception outcomes for the CI participants (r = -0.736, P = 0.003). However, P1 VEP amplitude measures recorded near the occipital cortex were positively correlated with speech perception outcome in the CI participants (r = 0.775, P = 0.001). In VF analysis, CI users showed a narrowed central VF (VF to low-intensity stimuli), whereas their far peripheral VF (VF to high-intensity stimuli) did not differ from that of controls. In addition, the extent of their central VF was positively correlated with speech perception outcome (r = 0.669, P = 0.009). Persistent visual activation in the right temporal cortex even after cochlear implantation has a negative effect on outcomes in post-lingually deafened adults. We interpret these results to suggest that insufficient intra-modal (visual) compensation by the occipital cortex may also negatively affect outcomes. Based on our results, it appears that a narrowed central VF could help identify CI users likely to have poor outcomes with their device. PMID:26848755

  13. Crossmodal association of auditory and visual material properties in infants.

    PubMed

    Ujiie, Yuta; Yamashita, Wakayo; Fujisaki, Waka; Kanazawa, So; Yamaguchi, Masami K

    2018-06-18

    The human perceptual system enables us to extract visual properties of an object's material from auditory information. In monkeys, the neural basis underlying such multisensory association develops through experience of exposure to a material; material information may be processed in the posterior inferior temporal cortex, progressively from the high-order visual areas. In humans, however, the development of this neural representation remains poorly understood. Here, we demonstrate for the first time a mapping between auditory material properties and visual material ("Metal" and "Wood") in the right temporal region of preverbal 4- to 8-month-old infants, using near-infrared spectroscopy (NIRS). Furthermore, we found that infants acquired the audio-visual mapping for the "Metal" material later than for the "Wood" material, consistent with infants forming the visual representation of "Metal" only after approximately 6 months of age. These findings indicate that multisensory processing of material information induces the activation of brain areas related to sound symbolism. They also indicate that a material's familiarity might facilitate the development of multisensory processing during the first year of life.

  14. Source memory errors in schizophrenia, hallucinations and negative symptoms: a synthesis of research findings.

    PubMed

    Brébion, G; Ohlsen, R I; Bressan, R A; David, A S

    2012-12-01

    Previous research has shown associations between source memory errors and hallucinations in patients with schizophrenia. We bring together here findings from a broad memory investigation to specify better the type of source memory failure that is associated with auditory and visual hallucinations. Forty-one patients with schizophrenia and 43 healthy participants underwent a memory task involving recall and recognition of lists of words, recognition of pictures, memory for temporal and spatial context of presentation of the stimuli, and remembering whether target items were presented as words or pictures. False recognition of words and pictures was associated with hallucination scores. The extra-list intrusions in free recall were associated with verbal hallucinations whereas the intra-list intrusions were associated with a global hallucination score. Errors in discriminating the temporal context of word presentation and the spatial context of picture presentation were associated with auditory hallucinations. The tendency to remember verbal labels of items as pictures of these items was associated with visual hallucinations. Several memory errors were also inversely associated with affective flattening and anhedonia. Verbal and visual hallucinations are associated with confusion between internal verbal thoughts or internal visual images and perception. In addition, auditory hallucinations are associated with failure to process or remember the context of presentation of the events. Certain negative symptoms have an opposite effect on memory errors.

  15. Disturbed temporal dynamics of brain synchronization in vision loss.

    PubMed

    Bola, Michał; Gall, Carolin; Sabel, Bernhard A

    2015-06-01

    Damage along the visual pathway prevents bottom-up visual input from reaching further processing stages and consequently leads to loss of vision. But perception is not a simple bottom-up process; rather, it emerges from the activity of widespread cortical networks that coordinate visual processing in space and time. Here we set out to study how vision loss affects the activity of brain visual networks and how that activity is related to perception. Specifically, we focused on temporal patterns of brain activity. To this end, resting-state eyes-closed EEG was recorded from partially blind patients suffering from chronic retina and/or optic-nerve damage (n = 19) and healthy controls (n = 13). Amplitude (power) of oscillatory activity and phase-locking value (PLV) were used as measures of local and distant synchronization, respectively. Synchronization time series were created for the low- (7-9 Hz) and high-alpha band (11-13 Hz) and analyzed with three measures of temporal patterns: (i) length of synchronized/desynchronized periods, (ii) Higuchi Fractal Dimension (HFD), and (iii) Detrended Fluctuation Analysis (DFA). We found that patients exhibit less complex, more random, noise-like temporal dynamics of high-alpha band activity. More random temporal patterns were associated with worse performance in static (r = -.54, p = .017) and kinetic perimetry (r = .47, p = .041). We conclude that disturbed temporal patterns of neural synchronization in vision-loss patients indicate disrupted communication within brain visual networks caused by prolonged deafferentation. We propose that, because the state of brain networks is essential for normal perception, impaired brain synchronization in patients with vision loss might aggravate the functional consequences of reduced visual input. Copyright © 2015 Elsevier Ltd. All rights reserved.
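    Of the three temporal-pattern measures used, the Higuchi Fractal Dimension is the easiest to sketch: it estimates how "curve length" scales with sampling interval, yielding values near 1 for smooth, regular signals and approaching 2 for noise-like signals, matching the paper's "more random, noise-like" interpretation. A minimal implementation, with kmax chosen arbitrarily for illustration:

    ```python
    import numpy as np

    def higuchi_fd(x, kmax=8):
        """Higuchi Fractal Dimension of a 1-D series: the slope of
        log(curve length) versus log(1/k) over sampling intervals k."""
        x = np.asarray(x, float)
        n = x.size
        lk = []
        for k in range(1, kmax + 1):
            lengths = []
            for m in range(k):
                idx = np.arange(m, n, k)          # subsample with interval k
                dist = np.abs(np.diff(x[idx])).sum()
                norm = (n - 1) / ((idx.size - 1) * k)
                lengths.append(dist * norm / k)   # normalized curve length
            lk.append(np.mean(lengths))
        k_vals = np.arange(1, kmax + 1)
        slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lk), 1)
        return slope

    rng = np.random.default_rng(1)
    fd_noise = higuchi_fd(rng.normal(size=2000))                    # noise-like
    fd_sine = higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 2000)))   # smooth
    ```

    In the study, this measure was applied not to raw EEG but to band-specific synchronization time series; the noisier those series, the higher the HFD.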

  16. Deconstruction of spatial integrity in visual stimulus detected by modulation of synchronized activity in cat visual cortex.

    PubMed

    Zhou, Zhiyi; Bernard, Melanie R; Bonds, A B

    2008-04-02

    Spatiotemporal relationships among contour segments can influence synchronization of neural responses in the primary visual cortex. We performed a systematic study to dissociate the impact of spatial and temporal factors in the signaling of contour integration via synchrony. In addition, we characterized the temporal evolution of this process to clarify potential underlying mechanisms. With a 10 x 10 microelectrode array, we recorded the simultaneous activity of multiple cells in the cat primary visual cortex while stimulating with drifting sine-wave gratings. We preserved temporal integrity and systematically degraded spatial integrity of the sine-wave gratings by adding spatial noise. Neural synchronization was analyzed in the time and frequency domains by conducting cross-correlation and coherence analyses. The general association between neural spike trains depends strongly on spatial integrity, with coherence in the gamma band (35-70 Hz) showing greater sensitivity to the change of spatial structure than other frequency bands. Analysis of the temporal dynamics of synchronization in both time and frequency domains suggests that spike timing synchronization is triggered nearly instantaneously by coherent structure in the stimuli, whereas frequency-specific oscillatory components develop more slowly, presumably through network interactions. Our results suggest that, whereas temporal integrity is required for the generation of synchrony, spatial integrity is critical in triggering subsequent gamma band synchronization.
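    The frequency-domain coherence analysis used to quantify band-specific synchronization between spike trains can be sketched with SciPy on simulated data: two binned spike trains sharing a common gamma-band rate modulation show elevated coherence at the shared frequency. All simulation parameters below (bin width, firing rates, modulation frequency) are illustrative, not the study's recording parameters:

    ```python
    import numpy as np
    from scipy.signal import coherence

    fs = 1000.0                       # 1 ms bins
    t = np.arange(0.0, 30.0, 1 / fs)  # 30 s of simulated spiking
    rng = np.random.default_rng(2)

    # Two synthetic spike trains sharing a common 40 Hz (gamma-band) rate drive
    drive = 0.5 * (1 + np.sin(2 * np.pi * 40.0 * t))
    train_a = (rng.random(t.size) < 0.3 * drive).astype(float)
    train_b = (rng.random(t.size) < 0.3 * drive).astype(float)

    freqs, coh = coherence(train_a, train_b, fs=fs, nperseg=1024)
    gamma_coh = coh[np.argmin(np.abs(freqs - 40.0))]        # at the shared rhythm
    baseline = coh[(freqs >= 100) & (freqs <= 200)].mean()  # away from it
    ```

    Degrading the shared structure (as adding spatial noise to the grating does for real neurons) would selectively flatten the gamma-band peak while leaving the broadband baseline largely unchanged.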

  17. An association between auditory-visual synchrony processing and reading comprehension: Behavioral and electrophysiological evidence

    PubMed Central

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2016-01-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060

  18. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    PubMed

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.
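    The variance figures quoted in this abstract follow directly from squaring the correlation coefficient (the coefficient of determination, r^2, is the proportion of variance explained), so they imply correlations of about 0.40 and 0.50. A one-line check:

    ```python
    def variance_explained(r):
        """Proportion of variance in one variable accounted for by its
        linear correlation r with another (coefficient of determination)."""
        return r * r

    # Correlations of 0.40 and 0.50 correspond to 16% and 25% of the variance
    v1 = variance_explained(0.40)
    v2 = variance_explained(0.50)
    ```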

  19. Subliminal convergence of Kanji and Kana words: further evidence for functional parcellation of the posterior temporal cortex in visual word perception.

    PubMed

    Nakamura, Kimihiro; Dehaene, Stanislas; Jobert, Antoinette; Le Bihan, Denis; Kouider, Sid

    2005-06-01

    Recent evidence has suggested that the human occipitotemporal region comprises several subregions, each sensitive to a distinct processing level of visual words. To further explore the functional architecture of visual word recognition, we employed a subliminal priming method with functional magnetic resonance imaging (fMRI) during semantic judgments of words presented in two different Japanese scripts, Kanji and Kana. Each target word was preceded by a subliminal presentation of either the same or a different word, and in the same or a different script. Behaviorally, word repetition produced significant priming regardless of whether the words were presented in the same or different script. At the neural level, this cross-script priming was associated with repetition suppression in the left inferior temporal cortex anterior and dorsal to the visual word form area hypothesized for alphabetical writing systems, suggesting that cross-script convergence occurred at a semantic level. fMRI also evidenced a shared visual occipito-temporal activation for words in the two scripts, with slightly more mesial and right-predominant activation for Kanji and with greater occipital activation for Kana. These results thus allow us to separate script-specific and script-independent regions in the posterior temporal lobe, while demonstrating that both can be activated subliminally.

  20. Brain activity related to working memory for temporal order and object information.

    PubMed

    Roberts, Brooke M; Libby, Laura A; Inhoff, Marika C; Ranganath, Charan

    2017-06-08

    Maintaining items in an appropriate sequence is important for many daily activities; however, remarkably little is known about the neural basis of human temporal working memory. Prior work suggests that the prefrontal cortex (PFC) and medial temporal lobe (MTL), including the hippocampus, play a role in representing information about temporal order. The involvement of these areas in successful temporal working memory, however, is less clear. Additionally, it is unknown whether regions in the PFC and MTL support temporal working memory across different timescales, or at coarse or fine levels of temporal detail. To address these questions, participants were scanned while completing 3 working memory task conditions (Group, Position and Item) that were matched in terms of difficulty and the number of items to be actively maintained. Group and Position trials probed temporal working memory processes, requiring the maintenance of hierarchically organized coarse and fine temporal information, respectively. To isolate activation related to temporal working memory, Group and Position trials were contrasted against Item trials, which required detailed working memory maintenance of visual objects. Results revealed that working memory encoding and maintenance of temporal information relative to visual information was associated with increased activation in dorsolateral PFC (DLPFC), and perirhinal cortex (PRC). In contrast, maintenance of visual details relative to temporal information was characterized by greater activation of parahippocampal cortex (PHC), medial and anterior PFC, and retrosplenial cortex. In the hippocampus, a dissociation along the longitudinal axis was observed such that the anterior hippocampus was more active for working memory encoding and maintenance of visual detail information relative to temporal information, whereas the posterior hippocampus displayed the opposite effect. Posterior parietal cortex was the only region to show sensitivity to temporal working memory across timescales, and was particularly involved in the encoding and maintenance of fine temporal information relative to maintenance of temporal information at more coarse timescales. Collectively, these results highlight the involvement of PFC and MTL in temporal working memory processes, and suggest a dissociation in the type of working memory information represented along the longitudinal axis of the hippocampus. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Fine-grained temporal coding of visually-similar categories in the ventral visual pathway and prefrontal cortex

    PubMed Central

    Xu, Yang; D'Lauro, Christopher; Pyles, John A.; Kass, Robert E.; Tarr, Michael J.

    2013-01-01

    Humans are remarkably proficient at categorizing visually-similar objects. To better understand the cortical basis of this categorization process, we used magnetoencephalography (MEG) to record neural activity while participants learned, with feedback, to discriminate two highly-similar, novel visual categories. We hypothesized that although prefrontal regions would mediate early category learning, this role would diminish with increasing category familiarity, and that regions within the ventral visual pathway would come to play a more prominent role in encoding category-relevant information as learning progressed. Early in learning we observed some degree of categorical discriminability and predictability in both prefrontal cortex and the ventral visual pathway. Predictability improved significantly above chance in the ventral visual pathway over the course of learning, with the left inferior temporal and fusiform gyri showing the greatest improvement in predictability between 150 and 250 ms (M200) during category learning. In contrast, there was no comparable increase in discriminability in prefrontal cortex; the only significant post-learning effect was a decrease in predictability in the inferior frontal gyrus between 250 and 350 ms (M300). Thus, the ventral visual pathway appears to encode learned visual categories over the long term. At the same time, these results add to our understanding of the cortical origins of previously reported signature temporal components associated with perceptual learning. PMID:24146656

  2. Intertrial Temporal Contextual Cuing: Association across Successive Visual Search Trials Guides Spatial Attention

    ERIC Educational Resources Information Center

    Ono, Fuminori; Jiang, Yuhong; Kawahara, Jun-ichiro

    2005-01-01

    Contextual cuing refers to the facilitation of performance in visual search due to the repetition of the same displays. Whereas previous studies have focused on contextual cuing within single-search trials, this study tested whether 1 trial facilitates visual search of the next trial. Participants searched for a T among Ls. In the training phase,…

  3. Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection

    PubMed Central

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067
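The matching task described here, detecting shared temporal structure between auditory and visual event streams across crossmodal lags, can be illustrated with a lagged cross-correlation. The sketch below is not the authors' analysis; it is a minimal, hypothetical model of why irregular streams are more diagnostic than rhythmic ones: an irregular stream only matches a genuinely related stream, whereas a rhythmic stream also matches at spurious lags.

```python
import numpy as np

rng = np.random.default_rng(0)

def match_score(auditory, visual, max_lag):
    """Best normalized correlation between two event streams over
    crossmodal lags of up to +/- max_lag samples (circular shifts)."""
    a = auditory - auditory.mean()
    v = visual - visual.mean()
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        shifted = np.roll(v, lag)
        best = max(best, np.dot(a, shifted) /
                   (np.linalg.norm(a) * np.linalg.norm(shifted)))
    return best

# Irregular (stochastic) stream: events at random times
irregular = (rng.random(1000) < 0.05).astype(float)
# Rhythmic stream: evenly spaced events (period 20 samples)
regular = np.zeros(1000)
regular[::20] = 1.0

# A lagged copy of the irregular stream is recovered at the true lag
print(match_score(irregular, np.roll(irregular, 7), max_lag=10))   # ~1.0
# An unrelated irregular stream scores low: unambiguous
other = (rng.random(1000) < 0.05).astype(float)
print(match_score(irregular, other, max_lag=10))
# A rhythmic stream also "matches" at a spurious lag: ambiguous
print(match_score(regular, np.roll(regular, 13), max_lag=10))      # ~1.0
```

The last line makes the paper's point in miniature: periodic structure yields perfect correlation at lags that differ from the true offset, so rhythmic streams carry less information about crossmodal correspondence.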

  4. Medications influencing central cholinergic pathways affect fixation stability, saccadic response time and associated eye movement dynamics during a temporally-cued visual reaction time task.

    PubMed

    Naicker, Preshanta; Anoopkumar-Dukie, Shailendra; Grant, Gary D; Modenese, Luca; Kavanagh, Justin J

    2017-02-01

    Anticholinergic medications largely exert their effects due to actions on the muscarinic receptor, which mediates the functions of acetylcholine in the peripheral and central nervous systems. In the central nervous system, acetylcholine plays an important role in the modulation of movement. This study investigated the effects of over-the-counter medications with varying degrees of central anticholinergic properties on fixation stability, saccadic response time and the dynamics associated with this eye movement during a temporally-cued visual reaction time task, in order to establish the significance of central cholinergic pathways in influencing eye movements during reaction time tasks. Twenty-two participants were recruited into a double-blind, placebo-controlled, four-way crossover investigation in humans. Eye tracking technology recorded eye movements while participants reacted to visual stimuli following temporally informative and uninformative cues. The task was performed pre-ingestion as well as 0.5 and 2 h post-ingestion of promethazine hydrochloride (strong centrally acting anticholinergic), hyoscine hydrobromide (moderate centrally acting anticholinergic), hyoscine butylbromide (anticholinergic devoid of central properties) and a placebo. Promethazine decreased fixation stability during the reaction time task. In addition, promethazine was the only drug to increase saccadic response time during temporally informative and uninformative cued trials, with effects on response time more pronounced following temporally informative cues. Promethazine also decreased saccadic amplitude and increased saccadic duration during the temporally-cued reaction time task. Collectively, the results of the study highlight the significant role that central cholinergic pathways play in the control of eye movements during tasks that involve stimulus identification and motor responses following temporal cues.

  5. Atypical form of Alzheimer's disease with prominent posterior cortical atrophy: a review of lesion distribution and circuit disconnection in cortical visual pathways

    NASA Technical Reports Server (NTRS)

    Hof, P. R.; Vogt, B. A.; Bouras, C.; Morrison, J. H.; Bloom, F. E. (Principal Investigator)

    1997-01-01

    In recent years, the existence of visual variants of Alzheimer's disease characterized by atypical clinical presentation at onset has been increasingly recognized. In many of these cases post-mortem neuropathological assessment revealed that correlations could be established between clinical symptoms and the distribution of neurodegenerative lesions. We have analyzed a series of Alzheimer's disease patients presenting with prominent visual symptomatology as a cardinal sign of the disease. In these cases, a shift in the distribution of pathological lesions was observed such that the primary visual areas and certain visual association areas within the occipito-parieto-temporal junction and posterior cingulate cortex had very high densities of lesions, whereas the prefrontal cortex had fewer lesions than usually observed in Alzheimer's disease. Previous quantitative analyses have demonstrated that in Alzheimer's disease, primary sensory and motor cortical areas are less damaged than the multimodal association areas of the frontal and temporal lobes, as indicated by the laminar and regional distribution patterns of neurofibrillary tangles and senile plaques. The distribution of pathological lesions in the cerebral cortex of Alzheimer's disease cases with visual symptomatology revealed that specific visual association pathways were disrupted, whereas these particular connections are likely to be affected to a less severe degree in the more common form of Alzheimer's disease. These data suggest that in some cases with visual variants of Alzheimer's disease, the neurological symptomatology may be related to the loss of certain components of the cortical visual pathways, as reflected by the particular distribution of the neuropathological markers of the disease.

  6. Modulation of Temporal Precision in Thalamic Population Responses to Natural Visual Stimuli

    PubMed Central

    Desbordes, Gaëlle; Jin, Jianzhong; Alonso, Jose-Manuel; Stanley, Garrett B.

    2010-01-01

    Natural visual stimuli have highly structured spatial and temporal properties which influence the way visual information is encoded in the visual pathway. In response to natural scene stimuli, neurons in the lateral geniculate nucleus (LGN) are temporally precise (on a time scale of 10–25 ms) both within single cells and across cells within a population. This time scale, established by non-stimulus-driven elements of neuronal firing, is significantly shorter than that of natural scenes, yet is critical for the neural representation of the spatial and temporal structure of the scene. Here, a generalized linear model (GLM) that combines stimulus-driven elements with spike-history dependence associated with intrinsic cellular dynamics is shown to predict the fine timing precision of LGN responses to natural scene stimuli, the corresponding correlation structure across nearby neurons in the population, and the continuous modulation of spike timing precision and latency across neurons. A single model captured the experimentally observed neural response, across different levels of contrasts and different classes of visual stimuli, through interactions between the stimulus correlation structure and the nonlinearity in spike generation and spike history dependence. Given the sensitivity of the thalamocortical synapse to closely timed spikes and the importance of fine timing precision for the faithful representation of natural scenes, the modulation of thalamic population timing over these time scales is likely important for cortical representations of the dynamic natural visual environment. PMID:21151356
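The model class named here, a GLM whose conditional intensity combines a filtered stimulus with spike-history dependence through an exponential nonlinearity, has a standard point-process form. The sketch below simulates from such a model on synthetic data; the filter shapes and parameter values are hypothetical, chosen only to show how a suppressive history term sharpens spike timing, and are not the fitted filters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

T = 5000                                  # time bins (nominally 1 ms each)
stimulus = rng.standard_normal(T)
k = np.exp(-np.arange(20) / 5.0)          # hypothetical stimulus filter (20 ms)
h = -2.0 * np.exp(-np.arange(10) / 2.0)   # hypothetical suppressive history filter

# Stimulus drive can be precomputed by convolution; the history term
# must be accumulated bin by bin because it depends on emitted spikes.
stim_drive = np.convolve(stimulus, k)[:T]
spikes = np.zeros(T)
for t in range(T):
    m = min(t, len(h))
    hist = np.dot(h[:m], spikes[t - m:t][::-1]) if m else 0.0
    rate = np.exp(-3.0 + 0.5 * stim_drive[t] + hist)  # expected spikes per bin
    spikes[t] = rng.poisson(rate)

print(f"mean rate: {spikes.mean() * 1000:.1f} spikes/s")
```

The negative history filter acts like a refractory mechanism: immediately after a spike the intensity is suppressed, which is one route by which such models produce spike timing more precise than the stimulus alone would predict.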

  7. Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet

    PubMed Central

    Rolls, Edmund T.

    2012-01-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus. PMID:22723777

  9. Neuroelectrical signs of selective attention to color in boys with attention-deficit hyperactivity disorder.

    PubMed

    van der Stelt, O; van der Molen, M; Boudewijn Gunning, W; Kok, A

    2001-10-01

    In order to gain insight into the functional and macroanatomical loci of visual selective processing deficits that may be basic to attention-deficit hyperactivity disorder (ADHD), the present study examined multi-channel event-related potentials (ERPs) recorded from 7- to 11-year-old boys clinically diagnosed as having ADHD (n=24) and age-matched healthy control boys (n=24) while they performed a visual (color) selective attention task. The spatio-temporal dynamics of several ERP components related to attention to color were characterized using topographic profile analysis, topographic mapping of the ERP and associated scalp current density distributions, and spatio-temporal source potential modeling. Boys with ADHD showed a lower target hit rate, a higher false-alarm rate, and a lower perceptual sensitivity than controls. Also, whereas color attention induced in the ERPs from controls a characteristic early frontally maximal selection positivity (FSP), ADHD boys displayed little or no FSP. Similarly, ADHD boys manifested P3b amplitude decrements that were partially lateralized (i.e., maximal at left temporal scalp locations) as well as affected by maturation. These results indicate that ADHD boys suffer from deficits at both relatively early (sensory) and late (semantic) levels of visual selective information processing. The data also support the hypothesis that the visual selective processing deficits observed in the ADHD boys originate from deficits in the strength of activation of a neural network comprising prefrontal and occipito-temporal brain regions. This network seems to be actively engaged during attention to color and may contain the major intracerebral generating sources of the associated scalp-recorded ERP components.

  10. The relation of object naming and other visual speech production tasks: a large scale voxel-based morphometric study.

    PubMed

    Lau, Johnny King L; Humphreys, Glyn W; Douis, Hassan; Balani, Alex; Bickerton, Wai-Ling; Rotshtein, Pia

    2015-01-01

    We report a lesion-symptom mapping analysis of visual speech production deficits in a large group (280) of stroke patients at the sub-acute stage (<120 days post-stroke). Performance on object naming was evaluated alongside three other tests of visual speech production, namely sentence production to a picture, sentence reading and nonword reading. A principal component analysis was performed on all these tests' scores and revealed a 'shared' component that loaded across all the visual speech production tasks and a 'unique' component that isolated object naming from the other three tasks. Regions for the shared component were observed in the left fronto-temporal cortices, fusiform gyrus and bilateral visual cortices. Lesions in these regions linked to both poor object naming and impairment in general visual-speech production. On the other hand, the unique naming component was potentially associated with the bilateral anterior temporal poles, hippocampus and cerebellar areas. This is in line with the models proposing that object naming relies on a left-lateralised language dominant system that interacts with a bilateral anterior temporal network. Neuropsychological deficits in object naming can reflect both the increased demands specific to the task and the more general difficulties in language processing.
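The decomposition described, a principal component analysis over the four task scores yielding a 'shared' component loading on all tasks and a 'unique' component isolating object naming, can be sketched on synthetic data. The score matrix below is fabricated for illustration (it is not the patients' data), built so that one latent factor drives all four tasks and a second drives naming alone.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic scores: 280 "patients" x 4 tasks, with a general language
# factor common to all tasks and a naming-specific factor.
n = 280
general = rng.standard_normal(n)
naming_specific = rng.standard_normal(n)
scores = np.column_stack([
    general + 2.0 * naming_specific + 0.3 * rng.standard_normal(n),  # object naming
    general + 0.3 * rng.standard_normal(n),                          # sentence production
    general + 0.3 * rng.standard_normal(n),                          # sentence reading
    general + 0.3 * rng.standard_normal(n),                          # nonword reading
])

# PCA via SVD of the standardized score matrix
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)
loadings = vt  # rows = components, columns = tasks

# 'Shared' component: same-sign loadings on all four tasks
print(np.round(loadings[0], 2))
# 'Unique' component: dominated by the object-naming task
print(np.round(loadings[1], 2))
```

Up to an arbitrary overall sign flip per component, the first component loads uniformly across tasks and the second is a naming-versus-rest contrast, mirroring the shared/unique structure the abstract reports.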

  11. Crossmodal Statistical Binding of Temporal Information and Stimuli Properties Recalibrates Perception of Visual Apparent Motion

    PubMed Central

    Zhang, Yi; Chen, Lihan

    2016-01-01

    Recent studies of brain plasticity that pertain to time perception have shown that fast training of temporal discrimination in one modality, for example, the auditory modality, can improve performance of temporal discrimination in another modality, such as the visual modality. We here examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimulus properties. We conducted two experiments, composed of three sessions each: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either “element motion” or “group motion.” For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were dominantly (80%) flanked by one leading visual Ternus frame and by one lagging visual Ternus frame (VAAV) or dominantly inserted by two Ternus visual frames (AVVA). Participants were required to respond which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory–visual pair but with similar temporal configurations as in Experiment 1, and asked participants to perform an audio–visual temporal order judgment. The results of these two experiments support that statistical binding of temporal information and stimulus properties can quickly and selectively recalibrate the sensitivity of perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910

  12. Visual Phonetic Processing Localized Using Speech and Non-Speech Face Gestures in Video and Point-Light Displays

    PubMed Central

    Bernstein, Lynne E.; Jiang, Jintao; Pantazis, Dimitrios; Lu, Zhong-Lin; Joshi, Anand

    2011-01-01

    The talking face affords multiple types of information. To isolate cortical sites with responsibility for integrating linguistically relevant visual speech cues, speech and non-speech face gestures were presented in natural video and point-light displays during fMRI scanning at 3.0T. Participants with normal hearing viewed the stimuli and also viewed localizers for the fusiform face area (FFA), the lateral occipital complex (LOC), and the visual motion (V5/MT) regions of interest (ROIs). The FFA, the LOC, and V5/MT were significantly less activated for speech relative to non-speech and control stimuli. Distinct activation of the posterior superior temporal sulcus and the adjacent middle temporal gyrus to speech, independent of media, was obtained in group analyses. Individual analyses showed that speech and non-speech stimuli were associated with adjacent but different activations, with the speech activations more anterior. We suggest that the speech activation area is the temporal visual speech area (TVSA), and that it can be localized with the combination of stimuli used in this study. PMID:20853377

  13. Gamma activity modulated by naming of ambiguous and unambiguous images: intracranial recording

    PubMed Central

    Cho-Hisamoto, Yoshimi; Kojima, Katsuaki; Brown, Erik C; Matsuzaki, Naoyuki; Asano, Eishi

    2014-01-01

    OBJECTIVE: Humans sometimes need to recognize objects based on vague and ambiguous silhouettes. Recognition of such images may require an intuitive guess. We determined the spatial-temporal characteristics of intracranially-recorded gamma activity (at 50–120 Hz) augmented differentially by naming of ambiguous and unambiguous images. METHODS: We studied ten patients who underwent epilepsy surgery. Ambiguous and unambiguous images were presented during extraoperative electrocorticography recording, and patients were instructed to overtly name the object as it was first perceived. RESULTS: Both naming tasks were commonly associated with gamma-augmentation sequentially involving the occipital and occipital-temporal regions, bilaterally, within 200 ms after the onset of image presentation. Naming of ambiguous images elicited gamma-augmentation specifically involving portions of the inferior-frontal, orbitofrontal, and inferior-parietal regions at 400 ms and after. Unambiguous images were associated with more intense gamma-augmentation in portions of the occipital and occipital-temporal regions. CONCLUSIONS: Frontal-parietal gamma-augmentation specific to ambiguous images may reflect the additional cortical processing involved in making an intuitive guess. Occipital gamma-augmentation enhanced during naming of unambiguous images can be explained by visual processing of stimuli with richer detail. SIGNIFICANCE: Our results support the theoretical model that guessing processes in the visual domain occur following the accumulation of sensory evidence resulting from bottom-up processing in the occipital-temporal visual pathways. PMID:24815577

  14. Disturbed default mode network connectivity patterns in Alzheimer's disease associated with visual processing.

    PubMed

    Krajcovicova, Lenka; Mikl, Michal; Marecek, Radek; Rektorova, Irena

    2014-01-01

    Changes in connectivity of the posterior node of the default mode network (DMN) were studied when switching from baseline to a cognitive task using functional magnetic resonance imaging. In all, 15 patients with mild to moderate Alzheimer's disease (AD) and 18 age-, gender-, and education-matched healthy controls (HC) participated in the study. Psychophysiological interactions analysis was used to assess the specific alterations in the DMN connectivity (deactivation-based) due to psychological effects from the complex visual scene encoding task. In HC, we observed task-induced connectivity decreases between the posterior cingulate and middle temporal and occipital visual cortices. These findings imply successful involvement of the ventral visual pathway during the visual processing in our HC cohort. In AD, involvement of the areas engaged in the ventral visual pathway was observed only in a small volume of the right middle temporal gyrus. Additional connectivity changes (decreases) in AD were present between the posterior cingulate and superior temporal gyrus when switching from baseline to task condition. These changes are probably related to both disturbed visual processing and the DMN connectivity in AD and reflect deficits and compensatory mechanisms within the large scale brain networks in this patient population. Studying the DMN connectivity using psychophysiological interactions analysis may provide a sensitive tool for exploring early changes in AD and their dynamics during the disease progression.

  15. Resolution of Spatial and Temporal Visual Attention in Infants with Fragile X Syndrome

    ERIC Educational Resources Information Center

    Farzin, Faraz; Rivera, Susan M.; Whitney, David

    2011-01-01

    Fragile X syndrome is the most common cause of inherited intellectual impairment and the most common single-gene cause of autism. Individuals with fragile X syndrome present with a neurobehavioural phenotype that includes selective deficits in spatiotemporal visual perception associated with neural processing in frontal-parietal networks of the…

  16. Associated impairment of the categories of conspecifics and biological entities: cognitive and neuroanatomical aspects of a new case.

    PubMed

    Capitani, Erminio; Chieppa, Francesca; Laiacona, Marcella

    2010-05-01

    Case A.C.A. presented an associated impairment of visual recognition and semantic knowledge for celebrities and biological objects. This case was relevant for (a) the neuroanatomical correlations, and (b) the relationship between visual recognition and semantics within the biological domain and the conspecifics domain. A.C.A. was not affected by anterior temporal damage. Her bilateral vascular lesions were localized on the medial and inferior temporal gyrus on the right and on the intermediate fusiform gyrus on the left, without concomitant lesions of the parahippocampal gyrus or posterior fusiform. Data analysis was based on a novel methodology developed to estimate the rate of stored items in the visual structural description system (SDS) or in the face recognition unit. For each biological object, no particular correlation was found between the visual information accessed through the semantic system and that tapped by the picture reality judgement. Findings are discussed with reference to whether a putative resource commonality is likely between biological objects and conspecifics, and whether or not either category may depend on an exclusive neural substrate.

  17. A functional magnetic resonance imaging study mapping the episodic memory encoding network in temporal lobe epilepsy

    PubMed Central

    Sidhu, Meneka K.; Stretton, Jason; Winston, Gavin P.; Bonelli, Silvia; Centeno, Maria; Vollmar, Christian; Symms, Mark; Thompson, Pamela J.; Koepp, Matthias J.

    2013-01-01

    Functional magnetic resonance imaging has demonstrated reorganization of memory encoding networks within the temporal lobe in temporal lobe epilepsy, but little is known of the extra-temporal networks in these patients. We investigated the temporal and extra-temporal reorganization of memory encoding networks in refractory temporal lobe epilepsy and the neural correlates of successful subsequent memory formation. We studied 44 patients with unilateral temporal lobe epilepsy and hippocampal sclerosis (24 left) and 26 healthy control subjects. All participants performed a functional magnetic resonance imaging memory encoding paradigm of faces and words with subsequent out-of-scanner recognition assessments. A blocked analysis was used to investigate activations during encoding and neural correlates of subsequent memory were investigated using an event-related analysis. Event-related activations were then correlated with out-of-scanner verbal and visual memory scores. During word encoding, control subjects activated the left prefrontal cortex and left hippocampus whereas patients with left hippocampal sclerosis showed significant additional right temporal and extra-temporal activations. Control subjects displayed subsequent verbal memory effects within left parahippocampal gyrus, left orbitofrontal cortex and fusiform gyrus whereas patients with left hippocampal sclerosis activated only right posterior hippocampus, parahippocampus and fusiform gyrus. Correlational analysis showed that patients with left hippocampal sclerosis with better verbal memory additionally activated left orbitofrontal cortex, anterior cingulate cortex and left posterior hippocampus. During face encoding, control subjects showed right lateralized prefrontal cortex and bilateral hippocampal activations. Patients with right hippocampal sclerosis showed increased temporal activations within the superior temporal gyri bilaterally and no increased extra-temporal areas of activation compared with control subjects. Control subjects showed subsequent visual memory effects within right amygdala, hippocampus, fusiform gyrus and orbitofrontal cortex. Patients with right hippocampal sclerosis showed subsequent visual memory effects within right posterior hippocampus, parahippocampal and fusiform gyri, and predominantly left hemisphere extra-temporal activations within the insula and orbitofrontal cortex. Correlational analysis showed that patients with right hippocampal sclerosis with better visual memory activated the amygdala bilaterally, right anterior parahippocampal gyrus and left insula. Right sided extra-temporal areas of reorganization observed in patients with left hippocampal sclerosis during word encoding and bilateral lateral temporal reorganization in patients with right hippocampal sclerosis during face encoding were not associated with subsequent memory formation. Reorganization within the medial temporal lobe, however, is an efficient process. The orbitofrontal cortex is critical to subsequent memory formation in control subjects and patients. Activations within anterior cingulum and insula correlated with better verbal and visual subsequent memory in patients with left and right hippocampal sclerosis, respectively, representing effective extra-temporal recruitment. PMID:23674488

  18. Cortical activation during Braille reading is influenced by early visual experience in subjects with severe visual disability: a correlational fMRI study.

    PubMed

    Melzer, P; Morgan, V L; Pickens, D R; Price, R R; Wall, R S; Ebner, F F

    2001-11-01

    Functional magnetic resonance imaging was performed on blind adults resting and reading Braille. The strongest activation was found in primary somatic sensory/motor cortex on both cortical hemispheres. Additional foci of activation were situated in the parietal, temporal, and occipital lobes where visual information is processed in sighted persons. The regions were differentiated most in the correlation of their time courses of activation with resting and reading. Differences in magnitude and expanse of activation were substantially less significant. Among the traditionally visual areas, the strength of correlation was greatest in posterior parietal cortex and moderate in occipitotemporal, lateral occipital, and primary visual cortex. It was low in secondary visual cortex as well as in dorsal and ventral inferior temporal cortex and posterior middle temporal cortex. Visual experience increased the strength of correlation in all regions except dorsal inferior temporal and posterior parietal cortex. The greatest statistically significant increase, i.e., approximately 30%, was in ventral inferior temporal and posterior middle temporal cortex. In these regions, words are analyzed semantically, which may be facilitated by visual experience. In contrast, visual experience resulted in a slight, insignificant diminution of the strength of correlation in dorsal inferior temporal cortex where language is analyzed phonetically. These findings affirm that posterior temporal regions are engaged in the processing of written language. Moreover, they suggest that this function is modified by early visual experience. Furthermore, visual experience significantly strengthened the correlation of activation and Braille reading in occipital regions traditionally involved in the processing of visual features and object recognition suggesting a role for visual imagery. Copyright 2001 Wiley-Liss, Inc.

  19. Occipital cortical thickness in very low birth weight born adolescents predicts altered neural specialization of visual semantic category related neural networks.

    PubMed

    Klaver, Peter; Latal, Beatrice; Martin, Ernst

    2015-01-01

    Very low birth weight (VLBW) premature born infants have a high risk to develop visual perceptual and learning deficits as well as widespread functional and structural brain abnormalities during infancy and childhood. Whether and how prematurity alters neural specialization within visual neural networks is still unknown. We used functional and structural brain imaging to examine the visual semantic system of VLBW born (<1250 g, gestational age 25-32 weeks) adolescents (13-15 years, n = 11, 3 males) and matched term born control participants (13-15 years, n = 11, 3 males). Neurocognitive assessment revealed no group differences except for lower scores on an adaptive visuomotor integration test. All adolescents were scanned while viewing pictures of animals and tools and scrambled versions of these pictures. Both groups demonstrated animal and tool category related neural networks. Term born adolescents showed tool category related neural activity, i.e. tool pictures elicited more activity than animal pictures, in temporal and parietal brain areas. Animal category related activity was found in the occipital, temporal and frontal cortex. VLBW born adolescents showed reduced tool category related activity in the dorsal visual stream compared with controls, specifically the left anterior intraparietal sulcus, and enhanced animal category related activity in the left middle occipital gyrus and right lingual gyrus. Lower birth weight of VLBW adolescents correlated with larger thickness of the pericalcarine gyrus in the occipital cortex and smaller surface area of the superior temporal gyrus in the lateral temporal cortex. Moreover, larger thickness of the pericalcarine gyrus and smaller surface area of the superior temporal gyrus correlated with reduced tool category related activity in the parietal cortex. Together, our data suggest that very low birth weight predicts alterations of higher order visual semantic networks, particularly in the dorsal stream. The differences in neural specialization may be associated with aberrant cortical development of areas in the visual system that develop early in childhood. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Altered functional MR imaging language activation in elderly individuals with cerebral leukoaraiosis.

    PubMed

    Welker, Kirk M; De Jesus, Reordan O; Watson, Robert E; Machulda, Mary M; Jack, Clifford R

    2012-10-01

    To test the hypothesis that leukoaraiosis alters functional activation during a semantic decision language task. With institutional review board approval and written informed consent, 18 right-handed, cognitively healthy elderly participants with an aggregate leukoaraiosis lesion volume of more than 25 cm³ and 18 age-matched control participants with less than 5 cm³ of leukoaraiosis underwent functional MR imaging to allow comparison of activation during semantic decisions with that during visual perceptual decisions. Brain statistical maps were derived from the general linear model. Spatially normalized group t maps were created from individual contrast images. A cluster extent threshold of 215 voxels was used to correct for multiple comparisons. Intergroup random effects analysis was performed. Language laterality indexes were calculated for each participant. In control participants, semantic decisions activated the bilateral visual cortex, left posteroinferior temporal lobe, left posterior cingulate gyrus, left frontal lobe expressive language regions, and left basal ganglia. Visual perceptual decisions activated the right parietal and posterior temporal lobes. Participants with leukoaraiosis showed reduced activation in all regions associated with semantic decisions; however, activation associated with visual perceptual decisions increased in extent. Intergroup analysis showed significant activation decreases in the left anterior occipital lobe (P=.016), right posterior temporal lobe (P=.048), and right basal ganglia (P=.009) in participants with leukoaraiosis. Individual participant laterality indexes showed a strong trend (P=.059) toward greater left lateralization in the leukoaraiosis group. Moderate leukoaraiosis is associated with atypical functional activation during semantic decision tasks. Consequently, leukoaraiosis is an important confounding variable in functional MR imaging studies of elderly individuals. © RSNA, 2012.

  1. Auditory and visual modulation of temporal lobe neurons in voice-sensitive and association cortices.

    PubMed

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K; Petkov, Christopher I

    2014-02-12

    Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies.

  2. Auditory and Visual Modulation of Temporal Lobe Neurons in Voice-Sensitive and Association Cortices

    PubMed Central

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K.

    2014-01-01

    Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies. PMID:24523543

  3. Anticipatory neural dynamics of spatial-temporal orienting of attention in younger and older adults.

    PubMed

    Heideman, Simone G; Rohenkohl, Gustavo; Chauvin, Joshua J; Palmer, Clare E; van Ede, Freek; Nobre, Anna C

    2018-05-04

    Spatial and temporal expectations act synergistically to facilitate visual perception. In the current study, we sought to investigate the anticipatory oscillatory markers of combined spatial-temporal orienting and to test whether these decline with ageing. We examined anticipatory neural dynamics associated with joint spatial-temporal orienting of attention using magnetoencephalography (MEG) in both younger and older adults. Participants performed a cued covert spatial-temporal orienting task requiring the discrimination of a visual target. Cues indicated both where and when targets would appear. In both age groups, valid spatial-temporal cues significantly enhanced perceptual sensitivity and reduced reaction times. In the MEG data, the main effect of spatial orienting was the lateralised anticipatory modulation of posterior alpha and beta oscillations. In contrast to previous reports, this modulation was not attenuated in older adults; instead it was even more pronounced. The main effect of temporal orienting was a bilateral suppression of posterior alpha and beta oscillations. This effect was restricted to younger adults. Our results also revealed a striking interaction between anticipatory spatial and temporal orienting in the gamma-band (60-75 Hz). When considering both age groups separately, this effect was only clearly evident and only survived statistical evaluation in the older adults. Together, these observations provide several new insights into the neural dynamics supporting separate as well as combined effects of spatial and temporal orienting of attention, and suggest that different neural dynamics associated with attentional orienting appear differentially sensitive to ageing. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  4. An insect-inspired model for visual binding I: learning objects and their characteristics.

    PubMed

    Northcutt, Brandon D; Dyhr, Jonathan P; Higgins, Charles M

    2017-04-01

    Visual binding is the process of associating the responses of visual interneurons in different visual submodalities all of which are responding to the same object in the visual field. Recently identified neuropils in the insect brain termed optic glomeruli reside just downstream of the optic lobes and have an internal organization that could support visual binding. Working from anatomical similarities between optic and olfactory glomeruli, we have developed a model of visual binding based on common temporal fluctuations among signals of independent visual submodalities. Here we describe and demonstrate a neural network model capable both of refining selectivity of visual information in a given visual submodality, and of associating visual signals produced by different objects in the visual field by developing inhibitory neural synaptic weights representing the visual scene. We also show that this model is consistent with initial physiological data from optic glomeruli. Further, we discuss how this neural network model may be implemented in optic glomeruli at a neuronal level.
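The binding principle described above, associating channels whose responses share common temporal fluctuations, can be sketched numerically. This is a toy illustration, not the authors' network model; the signal forms, noise levels, and the 0.5 correlation threshold are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)

# Two objects, each driving two visual submodality channels with a
# shared temporal fluctuation plus independent channel noise.
obj_a = np.sin(2 * np.pi * 0.7 * t) + 0.1 * rng.standard_normal(t.size)
obj_b = np.sin(2 * np.pi * 1.3 * t + 1.0) + 0.1 * rng.standard_normal(t.size)

channels = np.stack([
    obj_a + 0.2 * rng.standard_normal(t.size),  # motion channel, object A
    obj_a + 0.2 * rng.standard_normal(t.size),  # orientation channel, object A
    obj_b + 0.2 * rng.standard_normal(t.size),  # motion channel, object B
    obj_b + 0.2 * rng.standard_normal(t.size),  # orientation channel, object B
])

# Bind channels whose fluctuations co-vary: the correlation matrix plays
# the role of (sign-flipped) inhibitory weights between channel pairs.
corr = np.corrcoef(channels)
bound = corr > 0.5

assert bound[0, 1] and bound[2, 3]          # same object -> bound
assert not bound[0, 2] and not bound[1, 3]  # different objects -> unbound
```

Channels driven by the same object correlate strongly despite independent noise, while channels driven by different objects do not, which is the cue the glomerular model exploits.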

  5. Microreact: visualizing and sharing data for genomic epidemiology and phylogeography

    PubMed Central

    Argimón, Silvia; Abudahab, Khalil; Goater, Richard J. E.; Fedosejev, Artemij; Bhai, Jyothish; Glasner, Corinna; Feil, Edward J.; Holden, Matthew T. G.; Yeats, Corin A.; Grundmann, Hajo; Spratt, Brian G.

    2016-01-01

    Visualization is frequently used to aid our interpretation of complex datasets. Within microbial genomics, visualizing the relationships between multiple genomes as a tree provides a framework onto which associated data (geographical, temporal, phenotypic and epidemiological) are added to generate hypotheses and to explore the dynamics of the system under investigation. Selected static images are then used within publications to highlight the key findings to a wider audience. However, these images are a very inadequate way of exploring and interpreting the richness of the data. There is, therefore, a need for flexible, interactive software that presents the population genomic outputs and associated data in a user-friendly manner for a wide range of end users, from trained bioinformaticians to front-line epidemiologists and health workers. Here, we present Microreact, a web application for the easy visualization of datasets consisting of any combination of trees, geographical, temporal and associated metadata. Data files can be uploaded to Microreact directly via the web browser or by linking to their location (e.g. from Google Drive/Dropbox or via API), and an integrated visualization via trees, maps, timelines and tables provides interactive querying of the data. The visualization can be shared as a permanent web link among collaborators, or embedded within publications to enable readers to explore and download the data. Microreact can act as an end point for any tool or bioinformatic pipeline that ultimately generates a tree, and provides a simple, yet powerful, visualization method that will aid research and discovery and the open sharing of datasets. PMID:28348833
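As an illustration of the kind of input such a tool consumes, the snippet below builds a minimal phylogeographic dataset: a Newick tree plus a metadata table linking each leaf to coordinates and a year. The column names follow a common convention for such tools and are an assumption here, not Microreact's documented schema:

```python
import csv
import io

# A Newick tree whose leaf labels will be matched against metadata ids.
newick = "((isolate1:0.1,isolate2:0.2):0.05,isolate3:0.3);"

# Per-isolate metadata: geographic and temporal fields for map/timeline views.
rows = [
    {"id": "isolate1", "latitude": 51.5, "longitude": -0.1, "year": 2014},
    {"id": "isolate2", "latitude": 52.2, "longitude": 0.1, "year": 2015},
    {"id": "isolate3", "latitude": 48.9, "longitude": 2.3, "year": 2015},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "latitude", "longitude", "year"])
writer.writeheader()
writer.writerows(rows)

# The leaf labels in the tree must match the metadata ids so the
# visualization can link tree, map and timeline views.
ids = {r["id"] for r in rows}
assert all(i in newick for i in ids)
print(buf.getvalue().splitlines()[0])  # -> id,latitude,longitude,year
```

The essential constraint is the shared identifier: whatever pipeline produces the tree, its tip names key into the metadata table.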

  6. The sensory timecourses associated with conscious visual item memory and source memory.

    PubMed

    Thakral, Preston P; Slotnick, Scott D

    2015-09-01

    Previous event-related potential (ERP) findings have suggested that during visual item and source memory, nonconscious and conscious sensory (occipital-temporal) activity onsets may be restricted to early (0-800 ms) and late (800-1600 ms) temporal epochs, respectively. In an ERP experiment, we tested this hypothesis by separately assessing whether the onset of conscious sensory activity was restricted to the late epoch during source (location) memory and item (shape) memory. We found that conscious sensory activity had a late (>800 ms) onset during source memory and an early (<200 ms) onset during item memory. In a follow-up fMRI experiment, conscious sensory activity was localized to BA17, BA18, and BA19. Of primary importance, the distinct source memory and item memory ERP onsets contradict the hypothesis that there is a fixed temporal boundary separating nonconscious and conscious processing during all forms of visual conscious retrieval. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Temporally Scalable Visual SLAM using a Reduced Pose Graph

    DTIC Science & Technology

    2012-05-25

    MIT-CSAIL-TR-2012-013, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Cambridge, MA, May 25, 2012. Temporally Scalable Visual SLAM using a Reduced Pose Graph. We demonstrate a system for temporally scalable visual SLAM using a reduced pose graph representation. Unlike previous visual SLAM approaches that use
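The reduced pose graph idea named in the title can be sketched as follows. This is a hypothetical, much-simplified model of the concept (the place identifiers and the averaging fusion rule are assumptions), not the report's implementation:

```python
class ReducedPoseGraph:
    """Toy reduced pose graph: revisiting a known place fuses the new
    measurement into an existing constraint instead of growing the
    graph, so graph size tracks the explored area rather than time."""

    def __init__(self):
        self.nodes = set()   # one node per distinct place
        self.edges = {}      # (from_place, to_place) -> relative pose

    def observe(self, prev_place, place, relative_pose):
        self.nodes.add(prev_place)
        self.nodes.add(place)
        key = (prev_place, place)
        if key in self.edges:
            # Revisit: fuse measurements (here, a simple average).
            old = self.edges[key]
            self.edges[key] = tuple((a + b) / 2 for a, b in zip(old, relative_pose))
        else:
            self.edges[key] = relative_pose

g = ReducedPoseGraph()
g.observe("A", "B", (1.0, 0.0))   # first traversal adds nodes and edges
g.observe("B", "C", (1.0, 0.0))
g.observe("C", "A", (-2.0, 0.0))  # loop closure against an existing node
g.observe("A", "B", (1.0, 0.1))   # second lap: constraint fused, no new node

assert len(g.nodes) == 3 and len(g.edges) == 3
```

However long the robot keeps circling the same three places, the graph stays at three nodes, which is the scalability property the title refers to.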

  8. Neural connectivity of the lateral geniculate body in the human brain: diffusion tensor imaging study.

    PubMed

    Kwon, Hyeok Gyu; Jang, Sung Ho

    2014-08-22

    A few studies have reported on the neural connectivity of some neural structures of the visual system in the human brain. However, little is known about the neural connectivity of the lateral geniculate body (LGB). In the current study, using diffusion tensor tractography (DTT), we attempted to investigate the neural connectivity of the LGB in normal subjects. A total of 52 healthy subjects were recruited for this study. A seed region of interest was placed on the LGB using the FMRIB Software Library which is a probabilistic tractography method based on a multi-fiber model. Connectivity was defined as the incidence of connection between the LGB and target brain areas at the threshold of 5, 25, and 50 streamlines. In addition, connectivity represented the percentage of connection in all hemispheres of 52 subjects. We found the following characteristics of connectivity of the LGB at the threshold of 5 streamlines: (1) high connectivity to the corpus callosum (91.3%) and the contralateral temporal cortex (56.7%) via the corpus callosum, (2) high connectivity to the ipsilateral cerebral cortex: the temporal lobe (100%), primary visual cortex (95.2%), and visual association cortex (77.9%). The LGB appeared to have high connectivity to the corpus callosum and both temporal cortices as well as the ipsilateral occipital cortex. We believe that the results of this study would be helpful in investigating the neural network associated with the visual system and brain plasticity of the visual system after brain injury. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
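The thresholded connectivity measure described above, the percentage of hemispheres in which at least N streamlines reach a target region, can be sketched with simulated counts (the Poisson-distributed counts below are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical streamline counts from one seed (e.g. the LGB) to a
# target region: one value per hemisphere, 52 subjects x 2 hemispheres.
streamlines = rng.poisson(lam=40, size=104)

def connectivity(counts, threshold):
    """Percent of hemispheres whose streamline count meets the threshold."""
    return 100.0 * np.mean(counts >= threshold)

for thr in (5, 25, 50):
    print(f"threshold {thr:2d}: {connectivity(streamlines, thr):.1f}% connected")

# A stricter threshold can only lower the reported connectivity.
assert connectivity(streamlines, 5) >= connectivity(streamlines, 50)
```

This makes explicit why the paper reports three thresholds: the incidence figure is monotonically non-increasing in the streamline cutoff.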

  9. Visually defining and querying consistent multi-granular clinical temporal abstractions.

    PubMed

    Combi, Carlo; Oliboni, Barbara

    2012-02-01

    The main goal of this work is to propose a framework for the visual specification and query of consistent multi-granular clinical temporal abstractions. We focus on the issue of querying patient clinical information by visually defining and composing temporal abstractions, i.e., high level patterns derived from several time-stamped raw data. In particular, we focus on the visual specification of consistent temporal abstractions with different granularities and on the visual composition of different temporal abstractions for querying clinical databases. Temporal abstractions on clinical data provide a concise and high-level description of temporal raw data, and a suitable way to support decision making. Granularities define partitions on the time line and allow one to represent time and, thus, temporal clinical information at different levels of detail, according to the requirements coming from the represented clinical domain. The visual representation of temporal information has been considered since several years in clinical domains. Proposed visualization techniques must be easy and quick to understand, and could benefit from visual metaphors that do not lead to ambiguous interpretations. Recently, physical metaphors such as strips, springs, weights, and wires have been proposed and evaluated on clinical users for the specification of temporal clinical abstractions. Visual approaches to boolean queries have been considered in the last years and confirmed that the visual support to the specification of complex boolean queries is both an important and difficult research topic. We propose and describe a visual language for the definition of temporal abstractions based on a set of intuitive metaphors (striped wall, plastered wall, brick wall), allowing the clinician to use different granularities. 
    A new algorithm, underlying the visual language, allows the physician to specify only consistent abstractions, i.e., abstractions not containing contradictory conditions on the component abstractions. Moreover, we propose a visual query language where different temporal abstractions can be composed to build complex queries: temporal abstractions are visually connected through the usual logical connectives AND, OR, and NOT. The proposed visual language allows one to simply define temporal abstractions by using intuitive metaphors, and to specify temporal intervals related to abstractions by using different temporal granularities. The physician can interact with the designed and implemented tool by point-and-click selections, and can visually compose queries involving several temporal abstractions. The evaluation of the proposed granularity-related metaphors consisted of two parts: (i) solving 30 interpretation exercises by choosing the correct interpretation of a given screenshot representing a possible scenario, and (ii) solving a complex exercise, by visually specifying through the interface a scenario described only in natural language. The exercises were done by 13 subjects. The percentages of correct answers to the interpretation exercises differed slightly across the considered metaphors (54.4% for the striped wall, 73.3% for the plastered wall, 61% for the brick wall, and 61% for no wall), but post hoc statistical analysis on means confirmed that the differences were not statistically significant. The user-satisfaction questionnaire for the proposed granularity-related metaphors likewise showed no preference for any of them.
    The evaluation of the proposed logical notation consisted of two parts: (i) solving five interpretation exercises, each presenting a screenshot of a possible scenario together with three possible interpretations, of which only one was correct, and (ii) solving five exercises by visually defining through the interface a scenario described only in natural language. The exercises had increasing difficulty. The evaluation involved a total of 31 subjects. Results from this evaluation phase confirmed the soundness of the proposed solution, even in comparison with a well-known proposal based on a tabular query form (the only significant difference being that our proposal requires more time for the training phase: 21 min versus 14 min). In this work we have considered the issue of visually composing and querying temporal clinical patient data. In this context we have proposed a visual framework for the specification of consistent temporal abstractions with different granularities and for the visual composition of different temporal abstractions to build (possibly) complex queries on clinical databases. A new algorithm has been proposed to check the consistency of the specified granular abstractions. The evaluation of the proposed metaphors and interfaces, and the comparison of the visual query language with a well-known visual method for boolean queries, confirmed the soundness of the overall system; moreover, pros and cons and possible improvements emerged from the comparison of the different visual metaphors and solutions. Copyright © 2011 Elsevier B.V. All rights reserved.
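The boolean composition of temporal abstractions described in this record can be sketched as predicates over time-stamped data. This is a minimal stand-in for the visual language's semantics, with invented abstraction names and a toy consistency check; it is not the authors' system:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# A time-stamped raw series: (time, value) pairs, e.g. glucose readings.
Series = List[Tuple[int, float]]

@dataclass
class Abstraction:
    """A high-level pattern over raw data, e.g. 'all samples HIGH'."""
    name: str
    holds: Callable[[Series], bool]

def AND(a, b): return Abstraction(f"({a.name} AND {b.name})", lambda s: a.holds(s) and b.holds(s))
def OR(a, b):  return Abstraction(f"({a.name} OR {b.name})",  lambda s: a.holds(s) or b.holds(s))
def NOT(a):    return Abstraction(f"(NOT {a.name})",          lambda s: not a.holds(s))

high = Abstraction("HIGH", lambda s: all(v > 7.0 for _, v in s))
rising = Abstraction("RISING", lambda s: all(v2 >= v1 for (_, v1), (_, v2) in zip(s, s[1:])))

# Compose abstractions with the usual connectives, as in the query language.
query = AND(high, NOT(rising))  # high throughout, but not monotonically rising

data = [(0, 8.1), (1, 9.0), (2, 8.4)]
assert query.holds(data)

# A crude analogue of the consistency check: an abstraction conjoined
# with its own negation contains contradictory conditions and never holds.
contradiction = AND(high, NOT(high))
assert not contradiction.holds(data)
```

The real system checks consistency symbolically at specification time rather than by evaluation, but the composition semantics are the same.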

  10. Individual variation in the propensity for prospective thought is associated with functional integration between visual and retrosplenial cortex.

    PubMed

    Villena-Gonzalez, Mario; Wang, Hao-Ting; Sormaz, Mladen; Mollo, Giovanna; Margulies, Daniel S; Jefferies, Elizabeth A; Smallwood, Jonathan

    2018-02-01

    It is well recognized that the default mode network (DMN) is involved in states of imagination, although the cognitive processes that this association reflects are not well understood. The DMN includes many regions that function as cortical "hubs", including the posterior cingulate/retrosplenial cortex, anterior temporal lobe and the hippocampus. This suggests that the role of the DMN in cognition may reflect a process of cortical integration. In the current study we tested whether functional connectivity from uni-modal regions of cortex into the DMN is linked to features of imaginative thought. We found that strong intrinsic communication between visual and retrosplenial cortex was correlated with the degree of social thoughts about the future. Using an independent dataset, we show that the same region of retrosplenial cortex is functionally coupled to regions of primary visual cortex as well as core regions that make up the DMN. Finally, we compared the functional connectivity of the retrosplenial cortex, with a region of medial prefrontal cortex implicated in the integration of information from regions of the temporal lobe associated with future thought in a prior study. This analysis shows that the retrosplenial cortex is preferentially coupled to medial occipital, temporal lobe regions and the angular gyrus, areas linked to episodic memory, scene construction and navigation. In contrast, the medial prefrontal cortex shows preferential connectivity with motor cortex and lateral temporal and prefrontal regions implicated in language, motor processes and working memory. Together these findings suggest that integrating neural information from visual cortex into retrosplenial cortex may be important for imagining the future and may do so by creating a mental scene in which prospective simulations play out. 
We speculate that the role of the DMN in imagination may emerge from its capacity to bind together distributed representations from across the cortex in a coherent manner. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Temporal windows in visual processing: "prestimulus brain state" and "poststimulus phase reset" segregate visual transients on different temporal scales.

    PubMed

    Wutz, Andreas; Weisz, Nathan; Braun, Christoph; Melcher, David

    2014-01-22

    Dynamic vision requires both stability of the current perceptual representation and sensitivity to the accumulation of sensory evidence over time. Here we study the electrophysiological signatures of this intricate balance between temporal segregation and integration in vision. Within a forward masking paradigm with short and long stimulus onset asynchronies (SOA), we manipulated the temporal overlap of the visual persistence of two successive transients. Human observers enumerated the items presented in the second target display as a measure of the informational capacity read-out from this partly temporally integrated visual percept. We observed higher β-power immediately before mask display onset in incorrect trials, in which enumeration failed due to stronger integration of mask and target visual information. This effect was timescale specific, distinguishing between segregation and integration of visual transients that were distant in time (long SOA). Conversely, for short SOA trials, mask onset evoked a stronger visual response when mask and targets were correctly segregated in time. Examination of the target-related response profile revealed the importance of an evoked α-phase reset for the segregation of those rapid visual transients. Investigating this precise mapping of the temporal relationships of visual signals onto electrophysiological responses highlights how the stream of visual information is carved up into discrete temporal windows that mediate between segregated and integrated percepts. Fragmenting the stream of visual information provides a means to stabilize perceptual events within one instant in time.

  12. Pre-Orthographic Character String Processing and Parietal Cortex: A Role for Visual Attention in Reading?

    ERIC Educational Resources Information Center

    Lobier, Muriel; Peyrin, Carole; Le Bas, Jean-Francois; Valdois, Sylviane

    2012-01-01

    The visual front-end of reading is most often associated with orthographic processing. The left ventral occipito-temporal cortex seems to be preferentially tuned for letter string and word processing. In contrast, little is known of the mechanisms responsible for pre-orthographic processing: the processing of character strings regardless of…

  13. Cortical Integration of Audio-Visual Information

    PubMed Central

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  14. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on a mean square error (MSE) measure between estimated and target visual parameters. This function is minimized for estimation of the de-mixing vector/filters to separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to an existing GMM-based model, and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.
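The MSE-based de-mixing objective can be sketched for the linear instantaneous case. Here a known linear map from the speech source to a lip parameter stands in for the trained neural associator, and a grid search replaces the paper's optimization; both are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Two independent sources (a 'speech' source and an interferer),
# observed through a linear instantaneous mixing matrix A.
speech = np.sign(rng.standard_normal(n)) * rng.random(n)
interf = rng.standard_normal(n)
A = np.array([[0.8, 0.6], [0.4, 0.9]])
X = A @ np.stack([speech, interf])

# Stand-in for the trained audio-visual associator: the lip parameter
# is assumed to be a known linear function of the speech source.
lip = 0.5 * speech

def av_mse(w):
    """MSE between the associator's prediction for the separated
    signal and the target visual (lip) parameters."""
    est = w @ X
    est = est / np.std(est)  # fix the scale ambiguity of separation
    return np.mean((0.5 * est * np.std(speech) - lip) ** 2)

# Grid-search the unit-norm de-mixing vector minimizing the AV error.
angles = np.linspace(0.0, 2.0 * np.pi, 2000)
ws = np.stack([np.cos(angles), np.sin(angles)], axis=1)
best = ws[np.argmin([av_mse(w) for w in ws])]

recovered = best @ X
corr = abs(np.corrcoef(recovered, speech)[0, 1])
assert corr > 0.95  # the AV objective pulls out the speech source
```

Unlike kurtosis-style ICA contrasts, this objective also resolves the permutation ambiguity: the minimum is only attained on the source that actually predicts the visual stream.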

  15. Interactions between the spatial and temporal stimulus factors that influence multisensory integration in human performance.

    PubMed

    Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T

    2012-05-01

    In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.

  16. Illusory Reversal of Causality between Touch and Vision has No Effect on Prism Adaptation Rate.

    PubMed

    Tanaka, Hirokazu; Homma, Kazuhiro; Imamizu, Hiroshi

    2012-01-01

    Learning, according to the Oxford Dictionary, is "to gain knowledge or skill by studying, from experience, from being taught, etc." In order to learn from experience, the central nervous system has to decide what action leads to what consequence, and temporal perception plays a critical role in determining the causality between actions and consequences. In motor adaptation, causality between action and consequence is implicitly assumed, so that a subject adapts to a new environment based on the consequence caused by her action. Adaptation to visual displacement induced by prisms is a prime example; the visual error signal associated with the motor output contributes to the recovery of accurate reaching, and a delayed feedback of visual error can decrease the adaptation rate. The subjective feeling of temporal order of action and consequence, however, can be modified or even reversed when her sense of simultaneity is manipulated with an artificially delayed feedback. Our previous study (Tanaka et al., 2011; Exp. Brain Res.) demonstrated that the rate of prism adaptation was unaffected when the subjective delay of visual feedback was shortened. This study asked whether subjects could adapt to prism displacement, and whether the rate of prism adaptation was affected, when the subjective temporal order was illusorily reversed. Adapting to an additional 100 ms delay and its sudden removal caused a positive shift of the point of simultaneity in a temporal order judgment experiment, indicating an illusory reversal of action and consequence. We found that, even in this case, the subjects were able to adapt to prism displacement with a learning rate that was statistically indistinguishable from that without temporal adaptation. This result provides further evidence for the dissociation between conscious temporal perception and motor adaptation.

  17. Functional network connectivity underlying food processing: disturbed salience and visual processing in overweight and obese adults.

    PubMed

    Kullmann, Stephanie; Pape, Anna-Antonia; Heni, Martin; Ketterer, Caroline; Schick, Fritz; Häring, Hans-Ulrich; Fritsche, Andreas; Preissl, Hubert; Veit, Ralf

    2013-05-01

    In order to adequately explore the neurobiological basis of human eating behavior and its changes with body weight, interactions between brain areas or networks need to be investigated. In the current functional magnetic resonance imaging study, we examined the modulating effects of stimulus category (food vs. nonfood), caloric content of food, and body weight on the time course and functional connectivity of 5 brain networks by means of independent component analysis in healthy lean and overweight/obese adults. These functional networks included motor sensory, default-mode, extrastriate visual, temporal visual association, and salience networks. We found an extensive modulation elicited by food stimuli in the 2 visual and salience networks, with a dissociable pattern in the time course and functional connectivity between lean and overweight/obese subjects. Specifically, only in lean subjects, the temporal visual association network was modulated by the stimulus category and the salience network by caloric content, whereas overweight and obese subjects showed a generalized augmented response in the salience network. Furthermore, overweight/obese subjects showed changes in functional connectivity in networks important for object recognition, motivational salience, and executive control. These alterations could potentially lead to top-down deficiencies driving the overconsumption of food in the obese population.

  18. Clinical utility of the Wechsler Memory Scale--Fourth Edition (WMS-IV) in predicting laterality of temporal lobe epilepsy among surgical candidates.

    PubMed

    Soble, Jason R; Eichstaedt, Katie E; Waseem, Hena; Mattingly, Michelle L; Benbadis, Selim R; Bozorg, Ali M; Vale, Fernando L; Schoenberg, Mike R

    2014-12-01

    This study evaluated the accuracy of the Wechsler Memory Scale--Fourth Edition (WMS-IV) in identifying functional cognitive deficits associated with seizure laterality in localization-related temporal lobe epilepsy (TLE) relative to a previously established measure, the Rey Auditory Verbal Learning Test (RAVLT). Emerging WMS-IV studies have highlighted psychometric improvements that may enhance its ability to identify lateralized memory deficits. Data from 57 patients with video-EEG-confirmed unilateral TLE who were administered the WMS-IV and RAVLT as part of a comprehensive presurgical neuropsychological evaluation for temporal resection were retrospectively reviewed. We examined the predictive accuracy of the WMS-IV not only in terms of verbal versus visual composite scores but also using individual subtests. A series of hierarchical logistic regression models was developed, including the RAVLT, WMS-IV delayed subtests (Logical Memory, Verbal Paired Associates, Designs, Visual Reproduction), and a WMS-IV verbal-visual memory difference score. Analyses showed that the RAVLT significantly predicted laterality with overall classification rates of 69.6% to 70.2%, whereas neither the individual WMS-IV subtests nor the verbal-visual memory difference score accounted for additional significant variance. Similar to previous versions of the WMS, findings cast doubt on whether the WMS-IV offers significant incremental validity in discriminating seizure laterality in TLE beyond what can be obtained from the RAVLT. Copyright © 2014 Elsevier Inc. All rights reserved.
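    The incremental-validity question here — whether a WMS-IV score adds predictive variance beyond the RAVLT — can be sketched with a pair of nested logistic regressions. The data below are simulated and the effect sizes invented purely for illustration; the study's actual models were hierarchical regressions on clinical scores:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 60

    # Hypothetical patients: 0 = right TLE, 1 = left TLE
    left_tle = rng.integers(0, 2, n)
    # Invented effects: left TLE lowers verbal list learning (RAVLT-like)
    # and pushes the verbal-visual difference score downward
    ravlt = 50.0 - 8.0 * left_tle + rng.normal(0.0, 6.0, n)
    vv_diff = 5.0 - 6.0 * left_tle + rng.normal(0.0, 6.0, n)

    # Nested models: RAVLT alone vs. RAVLT plus the difference score
    base = LogisticRegression().fit(ravlt[:, None], left_tle)
    full = LogisticRegression().fit(np.c_[ravlt, vv_diff], left_tle)
    base_acc = base.score(ravlt[:, None], left_tle)
    full_acc = full.score(np.c_[ravlt, vv_diff], left_tle)
    ```

    Comparing the nested models (in practice via a likelihood-ratio test rather than raw classification rates) asks whether the added score carries incremental information — the question on which the WMS-IV subtests fell short in this study.
    
    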

  19. Visual analysis as a method of interpretation of the results of satellite ionospheric measurements for exploratory problems

    NASA Astrophysics Data System (ADS)

    Korneva, N. N.; Mogilevskii, M. M.; Nazarov, V. N.

    2016-05-01

    Traditional methods of time series analysis of satellite ionospheric measurements have some limitations and disadvantages that are mainly associated with the complex nonstationary signal structure. In this paper, the possibility of identifying and studying the temporal characteristics of signals via visual analysis is considered. The proposed approach is illustrated by the example of the visual analysis of wave measurements on the DEMETER microsatellite during its passage over the HAARP facility.

  20. The loss of short-term visual representations over time: decay or temporal distinctiveness?

    PubMed

    Mercer, Tom

    2014-12-01

    There has been much recent interest in the loss of visual short-term memories over the passage of time. According to decay theory, visual representations are gradually forgotten as time passes, reflecting a slow and steady distortion of the memory trace. However, this is controversial and decay effects can be explained in other ways. The present experiment aimed to reexamine the maintenance and loss of visual information over the short term. Decay and temporal distinctiveness models were tested using a delayed discrimination task, in which participants compared complex and novel objects over unfilled retention intervals of variable length. Experiment 1 found no significant change in the accuracy of visual memory from 2 to 6 s, but the gap separating trials reliably influenced task performance. Experiment 2 found evidence for information loss at a 10-s retention interval, but temporally separating trials restored the fidelity of visual memory, possibly because temporally isolated representations are distinct from older memory traces. In conclusion, visual representations lose accuracy at some point after 6 s, but only within temporally crowded contexts. These findings highlight the importance of temporal distinctiveness within visual short-term memory. PsycINFO Database Record (c) 2014 APA, all rights reserved.
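    The temporal distinctiveness account can be illustrated with a toy ratio model in which a trace's retrievability depends on how isolated it is in time from the preceding trial. This is a deliberately simplified sketch for intuition, not the model tested in the paper:

    ```python
    def distinctiveness(retention, gap):
        """Toy temporal-distinctiveness score: the fraction of total
        elapsed time separating the current trace from the previous
        trial. Larger values mean a more temporally isolated (more
        distinct) memory; both arguments are in seconds."""
        return gap / (gap + retention)

    # Lengthening the inter-trial gap at a fixed 10-s retention interval
    # raises distinctiveness, mirroring the restored fidelity in Exp. 2
    crowded = distinctiveness(retention=10.0, gap=2.0)
    isolated = distinctiveness(retention=10.0, gap=20.0)
    ```

    On this toy account, accuracy loss at long retention intervals appears only when the gap is short relative to the interval — i.e., in temporally crowded contexts — which is the pattern the experiments report.
    
    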

  1. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence

    PubMed Central

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude

    2016-01-01

    The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108

  2. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence.

    PubMed

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude

    2016-06-10

    The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain.
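    Comparisons like this between DNN layers and brain measurements are commonly made with representational similarity analysis (RSA): build a representational dissimilarity matrix (RDM) in each measurement space and correlate the RDMs. A minimal sketch on synthetic data — the shared latent patterns and random projections merely stand in for real MEG/fMRI responses and layer activations:

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rdm(patterns):
        """Condensed representational dissimilarity matrix: 1 - Pearson
        correlation between the response patterns of each stimulus pair."""
        return pdist(patterns, metric="correlation")

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(20, 50))          # 20 stimuli, shared structure
    meg = latent @ rng.normal(size=(50, 30))    # stand-in for sensor patterns
    layer = latent @ rng.normal(size=(50, 40))  # stand-in for DNN activations

    # RDM agreement between the two spaces (the core RSA statistic)
    rho, _ = spearmanr(rdm(meg), rdm(layer))
    ```

    Repeating this per MEG time point and per DNN layer yields the kind of time- and stage-resolved correspondence map the study describes.
    
    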

  3. What’s the Gist? The influence of schemas on the neural correlates underlying true and false memories

    PubMed Central

    Webb, Christina E.; Turney, Indira C.; Dennis, Nancy A.

    2017-01-01

    The current study used a novel scene paradigm to investigate the role of encoding schemas on memory. Specifically, the study examined the influence of a strong encoding schema on retrieval of both schematic and non-schematic information, as well as false memories for information associated with the schema. Additionally, the separate roles of recollection and familiarity in both veridical and false memory retrieval were examined. The study identified several novel results. First, while many common neural regions mediated both schematic and non-schematic retrieval success, schematic recollection exhibited greater activation in visual cortex and hippocampus, regions commonly shown to mediate detailed retrieval. More effortful cognitive control regions in the prefrontal and parietal cortices, on the other hand, supported non-schematic recollection, while lateral temporal cortices supported familiarity-based retrieval of non-schematic items. Second, both true and false recollection, as well as familiarity, were mediated by activity in left middle temporal gyrus, a region associated with semantic processing and retrieval of schematic gist. Moreover, activity in this region was greater for both false recollection and false familiarity, suggesting a greater reliance on lateral temporal cortices for retrieval of illusory memories, irrespective of memory strength. Consistent with previous false memory studies, visual cortex showed increased activity for true compared to false recollection, suggesting that visual cortices are critical for distinguishing between previously viewed targets and related lures at retrieval. Additionally, the absence of common visual activity between true and false retrieval suggests that, unlike previous studies utilizing visual stimuli, when false memories are predicated on schematic gist and not perceptual overlap, there is little reliance on visual processes during false memory retrieval. 
Finally, the medial temporal lobe exhibited an interesting dissociation, showing greater activity for true compared to false recollection, as well as for false compared to true familiarity. These results provided an indication as to how different types of items are retrieved when studied within a highly schematic context. Results both replicate and extend previous true and false memory findings, supporting the Fuzzy Trace Theory. PMID:27697593

  4. What's the gist? The influence of schemas on the neural correlates underlying true and false memories.

    PubMed

    Webb, Christina E; Turney, Indira C; Dennis, Nancy A

    2016-12-01

    The current study used a novel scene paradigm to investigate the role of encoding schemas on memory. Specifically, the study examined the influence of a strong encoding schema on retrieval of both schematic and non-schematic information, as well as false memories for information associated with the schema. Additionally, the separate roles of recollection and familiarity in both veridical and false memory retrieval were examined. The study identified several novel results. First, while many common neural regions mediated both schematic and non-schematic retrieval success, schematic recollection exhibited greater activation in visual cortex and hippocampus, regions commonly shown to mediate detailed retrieval. More effortful cognitive control regions in the prefrontal and parietal cortices, on the other hand, supported non-schematic recollection, while lateral temporal cortices supported familiarity-based retrieval of non-schematic items. Second, both true and false recollection, as well as familiarity, were mediated by activity in left middle temporal gyrus, a region associated with semantic processing and retrieval of schematic gist. Moreover, activity in this region was greater for both false recollection and false familiarity, suggesting a greater reliance on lateral temporal cortices for retrieval of illusory memories, irrespective of memory strength. Consistent with previous false memory studies, visual cortex showed increased activity for true compared to false recollection, suggesting that visual cortices are critical for distinguishing between previously viewed targets and related lures at retrieval. Additionally, the absence of common visual activity between true and false retrieval suggests that, unlike previous studies utilizing visual stimuli, when false memories are predicated on schematic gist and not perceptual overlap, there is little reliance on visual processes during false memory retrieval. 
Finally, the medial temporal lobe exhibited an interesting dissociation, showing greater activity for true compared to false recollection, as well as for false compared to true familiarity. These results provided an indication as to how different types of items are retrieved when studied within a highly schematic context. Results both replicate and extend previous true and false memory findings, supporting the Fuzzy Trace Theory. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Route Learning Impairment in Temporal Lobe Epilepsy

    PubMed Central

    Bell, Brian D.

    2012-01-01

    Memory impairment on neuropsychological tests is relatively common in temporal lobe epilepsy (TLE) patients. But memory rarely has been evaluated in more naturalistic settings. This study assessed TLE (n = 19) and control (n = 32) groups on a real-world route learning (RL) test. Compared to the controls, the TLE group committed significantly more total errors across the three RL test trials. RL errors correlated significantly with standardized auditory and visual memory and visual-perceptual test scores in the TLE group. In the TLE subset for whom hippocampal data were available (n = 14), RL errors also correlated significantly with left hippocampal volume. This is one of the first studies to demonstrate real-world memory impairment in TLE patients and its association with both mesial temporal lobe integrity and standardized memory test performance. The results support the ecological validity of clinical neuropsychological assessment. PMID:23041173

  6. How can knowledge discovery methods uncover spatio-temporal patterns in environmental data?

    NASA Astrophysics Data System (ADS)

    Wachowicz, Monica

    2000-04-01

    This paper proposes the integration of KDD, GVis and STDB as a long-term strategy, which will allow users to apply knowledge discovery methods for uncovering spatio-temporal patterns in environmental data. The main goal is to combine innovative techniques and associated tools for exploring very large environmental data sets in order to arrive at valid, novel, potentially useful, and ultimately understandable spatio-temporal patterns. The GeoInsight approach is described using the principles and key developments in the research domains of KDD, GVis, and STDB. The GeoInsight approach aims at the integration of these research domains in order to provide tools for performing information retrieval, exploration, analysis, and visualization. The result is a knowledge-based design, which involves visual thinking (perceptual-cognitive process) and automated information processing (computer-analytical process).

  7. STRAD Wheel: Web-Based Library for Visualizing Temporal Data.

    PubMed

    Fernandez-Prieto, Diana; Naranjo-Valero, Carol; Hernandez, Jose Tiberio; Hagen, Hans

    2017-01-01

    Recent advances in web development, including the introduction of HTML5, have opened a door for visualization researchers and developers to quickly access larger audiences worldwide. Open source libraries for the creation of interactive visualizations are becoming more specialized but also modular, which makes them easy to incorporate in domain-specific applications. In this context, the authors developed STRAD (Spatio-Temporal-Radar) Wheel, a web-based library that focuses on the visualization and interactive query of temporal data in a compact view with multiple temporal granularities. This article includes two application examples in urban planning to help illustrate the proposed visualization's use in practice.

  8. The effect of synesthetic associations between the visual and auditory modalities on the Colavita effect.

    PubMed

    Stekelenburg, Jeroen J; Keetels, Mirjam

    2016-05-01

    The Colavita effect refers to the phenomenon that when confronted with an audiovisual stimulus, observers report more often to have perceived the visual than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal signals. Here, we examined whether the Colavita effect is modulated by synesthetic congruency between visual size and auditory pitch. If the Colavita effect depends on synesthetic congruency, we expect a larger Colavita effect for synesthetically congruent size/pitch (large visual stimulus/low-pitched tone; small visual stimulus/high-pitched tone) than synesthetically incongruent (large visual stimulus/high-pitched tone; small visual stimulus/low-pitched tone) combinations. Participants had to identify stimulus type (visual, auditory or audiovisual). The study replicated the Colavita effect: participants more often reported the visual than the auditory component of the audiovisual stimuli. Synesthetic congruency had, however, no effect on the magnitude of the Colavita effect. EEG recordings to congruent and incongruent audiovisual pairings showed a late frontal congruency effect at 400-550 ms and an occipitoparietal effect at 690-800 ms, with neural sources in the anterior cingulate and premotor cortex for the 400- to 550-ms window and premotor cortex, inferior parietal lobule and the posterior middle temporal gyrus for the 690- to 800-ms window. The electrophysiological data show that synesthetic congruency was probably detected in a processing stage subsequent to the Colavita effect. We conclude that, in a modality detection task, the Colavita effect can be modulated by low-level structural factors but not by higher-order associations between auditory and visual inputs.

  9. Verbal Dominant Memory Impairment and Low Risk for Post-operative Memory Worsening in Both Left and Right Temporal Lobe Epilepsy Associated with Hippocampal Sclerosis.

    PubMed

    Khalil, Amr Farid; Iwasaki, Masaki; Nishio, Yoshiyuki; Jin, Kazutaka; Nakasato, Nobukazu; Tominaga, Teiji

    2016-11-15

    Post-operative memory changes after temporal lobe surgery have been established mainly by group analysis of cognitive outcome. This study investigated individual patient-based memory outcome in surgically-treated patients with mesial temporal lobe epilepsy (TLE). This study included 84 consecutive patients with intractable TLE caused by unilateral hippocampal sclerosis (HS) who underwent epilepsy surgery (47 females, 41 left [Lt] TLE). Memory functions were evaluated with the Wechsler Memory Scale-Revised before and at 1 year after surgery. Pre-operative memory function was classified into three patterns: verbal dominant memory impairment (Verb-D), visual dominant impairment (Vis-D), and no material-specific impairment. Post-operative changes in verbal and visual memory indices were classified into meaningful improvement, worsening, or no significant change. Pre-operative patterns and post-operative changes in verbal and visual memory function were compared between the Lt and right (Rt) TLE groups. Pre-operatively, Verb-D was the most common type of impairment in both the Lt and Rt TLE groups (65.9 and 48.8%), and verbal memory indices were lower than visual memory indices, especially in the Lt compared with the Rt TLE group. Vis-D was observed in only 11.6% of Rt and 7.3% of Lt TLE patients. Post-operatively, meaningful improvement of memory indices was observed in 23.3-36.6% of the patients, and the memory improvement was equivalent between the Lt and Rt TLE groups and between verbal and visual materials. In conclusion, Verb-D is the most common impairment in patients with both Lt and Rt TLE associated with HS. Hippocampectomy can improve memory indices in such patients regardless of the side of surgery and the function impaired.

  10. Temporal characteristics of audiovisual information processing.

    PubMed

    Fuhrmann Alpert, Galit; Hein, Grit; Tsai, Nancy; Naumer, Marcus J; Knight, Robert T

    2008-05-14

    In complex natural environments, auditory and visual information often have to be processed simultaneously. Previous functional magnetic resonance imaging (fMRI) studies focused on the spatial localization of brain areas involved in audiovisual (AV) information processing, but the temporal characteristics of AV information flow in these regions remained unclear. In this study, we used fMRI and a novel information-theoretic approach to study the flow of AV sensory information. Subjects passively perceived sounds and images of objects presented either alone or simultaneously. Applying the measure of mutual information, we computed for each voxel the latency in which the blood oxygenation level-dependent signal had the highest information content about the preceding stimulus. The results indicate that, after AV stimulation, the earliest informative activity occurs in right Heschl's gyrus, left primary visual cortex, and the posterior portion of the superior temporal gyrus, which is known as a region involved in object-related AV integration. Informative activity in the anterior portion of superior temporal gyrus, middle temporal gyrus, right occipital cortex, and inferior frontal cortex was found at a later latency. Moreover, AV presentation resulted in shorter latencies in multiple cortical areas compared with isolated auditory or visual presentation. The results provide evidence for bottom-up processing from primary sensory areas into higher association areas during AV integration in humans and suggest that AV presentation shortens processing time in early sensory cortices.
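    The voxelwise latency analysis can be sketched as follows: discretize the response, compute the mutual information between stimulus labels and the response at each candidate lag, and take the lag of peak information. Synthetic data with a built-in 3-sample delay stand in for a BOLD time series; the quantile binning and lag range are illustrative assumptions:

    ```python
    import numpy as np
    from sklearn.metrics import mutual_info_score

    def peak_latency(stimulus, signal, max_lag):
        """Lag (in samples) at which the discretized signal carries the
        most mutual information about the preceding stimulus label."""
        bins = np.quantile(signal, [0.25, 0.5, 0.75])
        mis = []
        for lag in range(1, max_lag + 1):
            resp = np.digitize(signal[lag:], bins)  # 4-level response code
            mis.append(mutual_info_score(stimulus[:-lag], resp))
        return 1 + int(np.argmax(mis))

    rng = np.random.default_rng(2)
    stim = rng.integers(0, 2, 300)
    # Stand-in 'BOLD' trace: the stimulus shifted by 3 samples plus noise
    bold = np.roll(stim.astype(float), 3) + rng.normal(0.0, 0.3, 300)
    lag_hat = peak_latency(stim, bold, max_lag=6)
    ```

    Mapping the recovered peak lag across voxels gives the kind of early-to-late information flow (primary sensory areas before association areas) that the study reports.
    
    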

  11. An fMRI Study of Episodic Memory: Retrieval of Object, Spatial, and Temporal Information

    PubMed Central

    Hayes, Scott M.; Ryan, Lee; Schnyer, David M.; Nadel, Lynn

    2011-01-01

    Sixteen participants viewed a videotaped tour of 4 houses, highlighting a series of objects and their spatial locations. Participants were tested for memory of object, spatial, and temporal order information while undergoing functional magnetic resonance imaging. Preferential activation was observed in right parahippocampal gyrus during the retrieval of spatial location information. Retrieval of contextual information (spatial location and temporal order) was associated with activation in right dorsolateral prefrontal cortex. In bilateral posterior parietal regions, greater activation was associated with processing of visual scenes, regardless of the memory judgment. These findings support current theories positing roles for frontal and medial temporal regions during episodic retrieval and suggest a specific role for the hippocampal complex in the retrieval of spatial location information. PMID:15506871

  12. Task relevance modulates the behavioural and neural effects of sensory predictions

    PubMed Central

    Friston, Karl J.; Nobre, Anna C.

    2017-01-01

    The brain is thought to generate internal predictions to optimize behaviour. However, it is unclear whether prediction signalling is an automatic brain function or depends on task demands. Here, we manipulated the spatial/temporal predictability of visual targets, and the relevance of spatial/temporal information provided by auditory cues. We used magnetoencephalography (MEG) to measure participants’ brain activity during task performance. Task relevance modulated the influence of predictions on behaviour: spatial/temporal predictability improved spatial/temporal discrimination accuracy, but not vice versa. To explain these effects, we used behavioural responses to estimate subjective predictions under an ideal-observer model. Model-based time-series of predictions and prediction errors (PEs) were associated with dissociable neural responses: predictions correlated with cue-induced beta-band activity in auditory regions and alpha-band activity in visual regions, while stimulus-bound PEs correlated with gamma-band activity in posterior regions. Crucially, task relevance modulated these spectral correlates, suggesting that current goals influence PE and prediction signalling. PMID:29206225

  13. The absence or temporal offset of visual feedback does not influence adaptation to novel movement dynamics.

    PubMed

    McKenna, Erin; Bray, Laurence C Jayet; Zhou, Weiwei; Joiner, Wilsaan M

    2017-10-01

    Delays in transmitting and processing sensory information require correctly associating delayed feedback to issued motor commands for accurate error compensation. The flexibility of this alignment between motor signals and feedback has been demonstrated for movement recalibration to visual manipulations, but the alignment dependence for adapting movement dynamics is largely unknown. Here we examined the effect of visual feedback manipulations on force-field adaptation. Three subject groups used a manipulandum while experiencing a lag in the corresponding cursor motion (0, 75, or 150 ms). When the offset was applied at the start of the session (continuous condition), adaptation was not significantly different between groups. However, these similarities may be due to acclimation to the offset before motor adaptation. We tested additional subjects who experienced the same delays concurrent with the introduction of the perturbation (abrupt condition). In this case adaptation was statistically indistinguishable from the continuous condition, indicating that acclimation to feedback delay was not a factor. In addition, end-point errors were not significantly different across the delay or onset conditions, but end-point correction (e.g., deceleration duration) was influenced by the temporal offset. As an additional control, we tested a group of subjects who performed without visual feedback and found comparable movement adaptation results. These results suggest that visual feedback manipulation (absence or temporal misalignment) does not affect adaptation to novel dynamics, independent of both acclimation and perceptual awareness. These findings could have implications for modeling how the motor system adjusts to errors despite concurrent delays in sensory feedback information. 
    NEW & NOTEWORTHY A temporal offset between movement and distorted visual feedback (e.g., visuomotor rotation) influences the subsequent motor recalibration, but the effects of this offset for altered movement dynamics are largely unknown. Here we examined the influence of 1) delayed and 2) removed visual feedback on the adaptation to novel movement dynamics. These results contribute to understanding of the control strategies that compensate for movement errors when there is a temporal separation between motion state and sensory information. Copyright © 2017 the American Physiological Society.
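    Trial-by-trial adaptation of the kind measured here is often summarized with a linear state-space model, in which an internal estimate is partially retained across trials and updated by a fraction of each movement error. A minimal sketch with illustrative retention and learning-rate parameters (not values fitted by the authors):

    ```python
    import numpy as np

    def simulate_adaptation(trials, perturbation, retention=0.95, rate=0.2):
        """Linear state-space model of motor adaptation: the internal
        estimate x decays slightly between trials (retention) and is
        corrected by a fraction (rate) of each trial's error."""
        x, states = 0.0, []
        for _ in range(trials):
            error = perturbation - x      # error experienced on this trial
            x = retention * x + rate * error
            states.append(x)
        return np.array(states)

    # Adaptation curve toward a unit force-field perturbation
    curve = simulate_adaptation(trials=40, perturbation=1.0)
    ```

    Fitting the rate parameter separately per feedback condition (delayed, absent, veridical) and comparing the fits would operationalize the "statistically indistinguishable adaptation" comparison made across the delay conditions here.
    
    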

  14. Visuomotor adaptation to a visual rotation is gravity dependent.

    PubMed

    Toma, Simone; Sciutti, Alessandra; Papaxanthis, Charalambos; Pozzo, Thierry

    2015-03-15

    Humans perform vertical and horizontal arm motions with different temporal patterns. The specific velocity profiles are chosen by the central nervous system by integrating the gravitational force field to minimize energy expenditure. However, what happens when a visuomotor rotation is applied, so that a motion performed in the horizontal plane is perceived as vertical? We investigated the dynamics of adaptation of the spatial and temporal properties of a pointing motion during prolonged exposure to a 90° visuomotor rotation, where a horizontal movement was associated with a vertical visual feedback. We found that participants immediately adapted the spatial parameters of motion to the conflicting visual scene in order to keep their arm trajectory straight. In contrast, the initial symmetric velocity profiles specific for a horizontal motion were progressively modified during the conflict exposure, becoming more asymmetric and similar to those appropriate for a vertical motion. Importantly, this visual effect, which increased with repetitions, was not followed by a consistent aftereffect when the conflicting visual feedback was absent (catch and washout trials). In a control experiment we demonstrated that an intrinsic representation of the temporal structure of perceived vertical motions could provide the error signal allowing for this progressive adaptation of motion timing. These findings suggest that gravity strongly constrains motor learning and the reweighting process between visual and proprioceptive sensory inputs, leading to the selection of a motor plan that is suboptimal in terms of energy expenditure. Copyright © 2015 the American Physiological Society.

  15. Possible Quantum Absorber Effects in Cortical Synchronization

    NASA Astrophysics Data System (ADS)

    Kämpf, Uwe

    The Wheeler-Feynman transactional "absorber" approach was proposed originally to account for anomalous resonance coupling between spatio-temporally distant measurement partners in entangled quantum states of so-called Einstein-Podolsky-Rosen paradoxes, e.g. of spatio-temporal non-locality, quantum teleportation, etc. Applied to quantum brain dynamics, however, this view provides an anticipative resonance coupling model for aspects of cortical synchronization and recurrent visual action control. It is proposed to consider the registered activation patterns of neuronal loops in so-called synfire chains not as a result of retarded brain communication processes, but rather as surface effects of a system of standing waves generated in the depth of visual processing. According to this view, they arise from a counterbalance between the actual input's delayed bottom-up data streams and top-down recurrent information-processing of advanced anticipative signals in a Wheeler-Feynman-type absorber mode. In the framework of a "time-loop" model, findings about mirror neurons in the brain cortex are suggested to be at least partially associated with temporal rather than spatial mirror functions of visual processing, similar to phase conjugate adaptive resonance-coupling in nonlinear optics.

  16. Temporal Processing Capacity in High-Level Visual Cortex Is Domain Specific.

    PubMed

    Stigliani, Anthony; Weiner, Kevin S; Grill-Spector, Kalanit

    2015-09-09

    Prevailing hierarchical models propose that temporal processing capacity--the amount of information that a brain region processes in a unit time--decreases at higher stages in the ventral stream regardless of domain. However, it is unknown if temporal processing capacities are domain general or domain specific in human high-level visual cortex. Using a novel fMRI paradigm, we measured temporal capacities of functional regions in high-level visual cortex. Contrary to hierarchical models, our data reveal domain-specific processing capacities as follows: (1) regions processing information from different domains have differential temporal capacities within each stage of the visual hierarchy and (2) domain-specific regions display the same temporal capacity regardless of their position in the processing hierarchy. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. Notably, domain-specific temporal processing capacities are not apparent in V1 and have perceptual implications. Behavioral testing revealed that the encoding capacity of body images is higher than that of characters, faces, and places, and there is a correspondence between peak encoding rates and cortical capacities for characters and bodies. The present evidence supports a model in which the natural statistics of temporal information in the visual world may affect domain-specific temporal processing and encoding capacities. These findings suggest that the functional organization of high-level visual cortex may be constrained by temporal characteristics of stimuli in the natural world, and this temporal capacity is a characteristic of domain-specific networks in high-level visual cortex. Significance statement: Visual stimuli bombard us at different rates every day. For example, words and scenes are typically stationary and vary at slow rates. 
In contrast, bodies are dynamic and typically change at faster rates. Using a novel fMRI paradigm, we measured temporal processing capacities of functional regions in human high-level visual cortex. Contrary to prevailing theories, we find that different regions have different processing capacities, which have behavioral implications. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. These results suggest that temporal processing capacity is a characteristic of domain-specific networks in high-level visual cortex and contributes to the segregation of cortical regions. Copyright © 2015 the authors 0270-6474/15/3512412-13$15.00/0.

  17. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap.

    PubMed

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  18. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    PubMed Central

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap. PMID:23658549

  19. The time-course of activation in the dorsal and ventral visual streams during landmark cueing and perceptual discrimination tasks.

    PubMed

    Lambert, Anthony J; Wootton, Adrienne

    2017-08-01

    Different patterns of high density EEG activity were elicited by the same peripheral stimuli, in the context of Landmark Cueing and Perceptual Discrimination tasks. The C1 component of the visual event-related potential (ERP) at parietal-occipital electrode sites was larger in the Landmark Cueing task, and source localisation suggested greater activation in the superior parietal lobule (SPL) in this task than in the Perceptual Discrimination task, indicating stronger early recruitment of the dorsal visual stream. In the Perceptual Discrimination task, source localisation suggested widespread activation of the inferior temporal gyrus (ITG) and fusiform gyrus (FFG), structures associated with the ventral visual stream, during the early phase of the P1 ERP component. Moreover, during a later epoch (171-270 ms after stimulus onset), increased temporal-occipital negativity and stronger recruitment of ITG and FFG were observed in the Perceptual Discrimination task. These findings illuminate the contrasting functions of the dorsal and ventral visual streams, which support rapid shifts of attention in response to contextual landmarks and conscious discrimination, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Mouse V1 population correlates of visual detection rely on heterogeneity within neuronal response patterns

    PubMed Central

    Montijn, Jorrit S; Goltstein, Pieter M; Pennartz, Cyriel MA

    2015-01-01

    Previous studies have demonstrated the importance of the primary sensory cortex for the detection, discrimination, and awareness of visual stimuli, but it is unknown how neuronal populations in this area process detected and undetected stimuli differently. Critical differences may reside in the mean strength of responses to visual stimuli, as reflected in bulk signals detectable in functional magnetic resonance imaging, electro-encephalogram, or magnetoencephalography studies, or may be more subtly composed of differentiated activity of individual sensory neurons. Quantifying single-cell Ca2+ responses to visual stimuli recorded with in vivo two-photon imaging, we found that visual detection correlates more strongly with population response heterogeneity than with overall response strength. Moreover, neuronal populations showed consistencies in activation patterns across temporally spaced trials in association with hit responses, but not during nondetections. Contrary to models relying on temporally stable networks or bulk signaling, these results suggest that detection depends on transient differentiation in neuronal activity within cortical populations. DOI: http://dx.doi.org/10.7554/eLife.10163.001 PMID:26646184

  1. The role of primary auditory and visual cortices in temporal processing: A tDCS approach.

    PubMed

    Mioni, G; Grondin, S; Forgione, M; Fracasso, V; Mapelli, D; Stablum, F

    2016-10-15

    Many studies have shown that visual stimuli are frequently experienced as shorter than equivalent auditory stimuli. These findings suggest that timing is distributed across many brain areas and that "different clocks" might be involved in temporal processing. The aim of this study was to investigate, by applying tDCS over V1 and A1, the specific role of the primary sensory cortices (visual or auditory) in temporal processing. Forty-eight university students were included in the study: 24 participants were stimulated over A1 and 24 over V1. Participants performed time bisection tasks, in the visual and the auditory modalities, involving standard durations of 300 ms (short) and 900 ms (long). When tDCS was delivered over A1, stimulation had no effect on perceived duration, but temporal variability was higher under anodal stimulation than under sham, and higher in the visual than in the auditory modality. When tDCS was delivered over V1, an under-estimation of perceived duration and higher variability were observed in the visual compared to the auditory modality. Our results showed greater variability of visual temporal processing under tDCS stimulation. These results suggest a modality-independent role of A1 in temporal processing and a modality-specific role of V1 in the processing of temporal intervals in the visual modality. Copyright © 2016 Elsevier B.V. All rights reserved.
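    The time bisection task described above has a simple computational core: after classifying comparison durations as "short" or "long" relative to the 300 ms and 900 ms standards, the bisection point is the duration at which "long" responses cross 50%. A minimal sketch, using hypothetical response proportions and linear interpolation (not the authors' analysis pipeline):

    ```python
    # Estimate the bisection point (BP) in a time bisection task:
    # the comparison duration at which "long" responses cross 50%.
    # Response proportions below are hypothetical illustration data.

    def bisection_point(durations, p_long):
        """Linearly interpolate the duration where p_long crosses 0.5."""
        points = list(zip(durations, p_long))
        for (d0, p0), (d1, p1) in zip(points, points[1:]):
            if p0 <= 0.5 <= p1:
                return d0 + (0.5 - p0) * (d1 - d0) / (p1 - p0)
        raise ValueError("no 50% crossing in the data")

    durations = [300, 450, 600, 750, 900]        # comparison durations (ms)
    p_long    = [0.05, 0.20, 0.55, 0.85, 0.95]   # proportion "long" responses

    bp = bisection_point(durations, p_long)
    print(round(bp, 1))  # duration perceived as midway between the standards
    ```

    A shift of this bisection point between stimulation conditions would index a change in perceived duration, while the slope of the response curve around it reflects temporal variability.
    
    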

  2. Integrating sentiment analysis and term associations with geo-temporal visualizations on customer feedback streams

    NASA Astrophysics Data System (ADS)

    Hao, Ming; Rohrdantz, Christian; Janetzko, Halldór; Keim, Daniel; Dayal, Umeshwar; Haug, Lars-Erik; Hsu, Mei-Chun

    2012-01-01

    Twitter currently receives over 190 million tweets (small text-based Web posts) a day, and manufacturing companies receive over 10 thousand web product surveys a day, in which people share their thoughts regarding a wide range of products and their features. A large number of tweets and customer surveys include opinions about products and services. However, with Twitter being a relatively new phenomenon, these tweets are underutilized as a source for determining customer sentiments. To explore high-volume customer feedback streams, we integrate three time series-based visual analysis techniques: (1) feature-based sentiment analysis that extracts, measures, and maps customer feedback; (2) a novel idea of term associations that identify attributes, verbs, and adjectives frequently occurring together; and (3) new pixel cell-based sentiment calendars, geo-temporal map visualizations and self-organizing maps to identify co-occurring and influential opinions. We have combined these techniques into a well-fitted solution for an effective analysis of large customer feedback streams such as for movie reviews (e.g., Kung-Fu Panda) or web surveys (buyers).
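    The "term associations" technique in (2) rests on counting which terms frequently occur together in the same feedback item. A minimal sketch of that co-occurrence idea, with hypothetical feedback strings and a hypothetical term list (the paper's actual pipeline is more elaborate):

    ```python
    # Count co-occurring terms (e.g., a product feature with an
    # adjective) within the same feedback item. Data are hypothetical.
    from collections import Counter
    from itertools import combinations

    feedback = [
        "battery life is great but the screen is dim",
        "screen is sharp and battery life is great",
        "the screen is dim in sunlight",
    ]

    terms = {"battery", "screen", "great", "dim", "sharp"}

    pairs = Counter()
    for item in feedback:
        present = sorted(terms & set(item.split()))  # terms found in this item
        for a, b in combinations(present, 2):
            pairs[(a, b)] += 1

    # The most frequent pairs are the candidate term associations.
    print(pairs.most_common(3))
    ```

    In the paper's setting, such pair counts would feed the visual encodings (e.g., which feature-adjective links to draw and how strongly).
    
    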

  3. The role of temporal structure in human vision.

    PubMed

    Blake, Randolph; Lee, Sang-Hun

    2005-03-01

    Gestalt psychologists identified several stimulus properties thought to underlie visual grouping and figure/ground segmentation, and among those properties was common fate: the tendency to group together individual objects that move together in the same direction at the same speed. Recent years have witnessed an upsurge of interest in visual grouping based on other time-dependent sources of visual information, including synchronized changes in luminance, in motion direction, and in figure/ground relations. These various sources of temporal grouping information can be subsumed under the rubric temporal structure. In this article, the authors review evidence bearing on the effectiveness of temporal structure in visual grouping. They start with an overview of evidence bearing on temporal acuity of human vision, covering studies dealing with temporal integration and temporal differentiation. They then summarize psychophysical studies dealing with figure/ground segregation based on temporal phase differences in deterministic and stochastic events. The authors conclude with a brief discussion of neurophysiological implications of these results.

  4. Temporal lobe surgery in childhood and neuroanatomical predictors of long-term declarative memory outcome

    PubMed Central

    Skirrow, Caroline; Cross, J. Helen; Harrison, Sue; Cormack, Francesca; Harkness, William; Coleman, Rosie; Meierotto, Ellen; Gaiottino, Johanna; Vargha-Khadem, Faraneh

    2015-01-01

    The temporal lobes play a prominent role in declarative memory function, including episodic memory (memory for events) and semantic memory (memory for facts and concepts). Surgical resection for medication-resistant and well-localized temporal lobe epilepsy has good prognosis for seizure freedom, but is linked to memory difficulties in adults, especially when the removal is on the left side. Children may benefit most from surgery, because brain plasticity may facilitate post-surgical reorganization, and seizure cessation may promote cognitive development. However, the long-term impact of this intervention in children is not known. We examined memory function in 53 children (25 males, 28 females) who were evaluated for epilepsy surgery: 42 underwent unilateral temporal lobe resections (25 left, 17 right, mean age at surgery 13.8 years), 11 were treated only pharmacologically. Average follow-up was 9 years (range 5–15). Post-surgical change in visual and verbal episodic memory, and semantic memory at follow-up were examined. Pre- and post-surgical T1-weighted MRI brain scans were analysed to extract hippocampal and resection volumes, and evaluate post-surgical temporal lobe integrity. Language lateralization indices were derived from functional magnetic resonance imaging. There were no significant pre- to postoperative decrements in memory associated with surgery. In contrast, gains in verbal episodic memory were seen after right temporal lobe surgery, and visual episodic memory improved after left temporal lobe surgery, indicating a functional release in the unoperated temporal lobe after seizure reduction or cessation. Pre- to post-surgical change in memory function was not associated with any indices of brain structure derived from MRI. However, better verbal memory at follow-up was linked to greater post-surgical residual hippocampal volumes, most robustly in left surgical participants. 
Better semantic memory at follow-up was associated with smaller resection volumes and greater temporal pole integrity after left temporal surgery. Results were independent of post-surgical intellectual function and language lateralization. Our findings indicate post-surgical, hemisphere-dependent material-specific improvement in memory functions in the intact temporal lobe. However, outcome was linked to the anatomical integrity of the temporal lobe memory system, indicating that compensatory mechanisms are constrained by the amount of tissue which remains in the operated temporal lobe. Careful tailoring of resections for children undergoing epilepsy surgery may enhance long-term memory outcome. PMID:25392199

  5. Modulation of auditory stimulus processing by visual spatial or temporal cue: an event-related potentials study.

    PubMed

    Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong

    2013-10-11

    Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following VSC were faster and more accurate than those following VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in the VSC condition was larger than that in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) in the VTC condition was larger than that in the VSC condition. These findings suggest that the modulation of auditory stimulus processing by visually induced spatial or temporal orienting of attention was different, but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. Unitary vs multiple semantics: PET studies of word and picture processing.

    PubMed

    Bright, P; Moss, H; Tyler, L K

    2004-06-01

    In this paper we examine a central issue in cognitive neuroscience: are there separate conceptual representations associated with different input modalities (e.g., Paivio, 1971, 1986; Warrington & Shallice, 1984) or do inputs from different modalities converge on to the same set of representations (e.g., Caramazza, Hillis, Rapp, & Romani, 1990; Lambon Ralph, Graham, Patterson, & Hodges, 1999; Rapp, Hillis, & Caramazza, 1993)? We present an analysis of four PET studies (three semantic categorisation tasks and one lexical decision task), two of which employ words as stimuli and two of which employ pictures. Using conjunction analyses, we found robust semantic activation, common to both input modalities in anterior and medial aspects of the left fusiform gyrus, left parahippocampal and perirhinal cortices, and left inferior frontal gyrus (BA 47). There were modality-specific activations in both temporal poles (words) and occipitotemporal cortices (pictures). We propose that the temporal poles are involved in processing both words and pictures, but their engagement might be primarily determined by the level of specificity at which an object is processed. Activation in posterior temporal regions associated with picture processing most likely reflects intermediate, pre-semantic stages of visual processing. Our data are most consistent with a hierarchically structured, unitary system of semantic representations for both verbal and visual modalities, subserved by anterior regions of the inferior temporal cortex.

  7. Reduction of Interhemispheric Functional Brain Connectivity in Early Blindness: A Resting-State fMRI Study

    PubMed Central

    2017-01-01

    Objective The purpose of this study was to investigate the resting-state interhemispheric functional connectivity in early blindness by using voxel-mirrored homotopic connectivity (VMHC). Materials and Methods Sixteen early blind patients (EB group) and sixteen age- and gender-matched sighted control volunteers (SC group) were recruited in this study. We used VMHC to identify brain areas with significant differences in functional connectivity between different groups and used voxel-based morphometry (VBM) to calculate the individual gray matter volume (GMV). Results VMHC analysis showed a significantly lower connectivity in primary visual cortex, visual association cortex, and somatosensory association cortex in EB group compared to sighted controls. Additionally, VBM analysis revealed that GMV was reduced in the left lateral calcarine cortices in EB group compared to sighted controls, while it was increased in the left lateral middle occipital gyri. Statistical analysis showed the duration of blindness negatively correlated with VMHC in the bilateral middle frontal gyri, middle temporal gyri, and inferior temporal gyri. Conclusions Our findings help elucidate the pathophysiological mechanisms of EB. The interhemispheric functional connectivity was impaired in EB patients. Additionally, the middle frontal gyri, middle temporal gyri, and inferior temporal gyri may be potential target regions for rehabilitation. PMID:28656145

  8. The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure

    PubMed Central

    Stacey, Paula C.; Kitterick, Pádraig T.; Morris, Saffron D.; Sumner, Christian J.

    2017-01-01

    Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues. PMID:27085797

  9. PRODIGEN: visualizing the probability landscape of stochastic gene regulatory networks in state and time space.

    PubMed

    Ma, Chihua; Luciani, Timothy; Terebus, Anna; Liang, Jie; Marai, G Elisabeta

    2017-02-15

    Visualizing the complex probability landscape of stochastic gene regulatory networks can further biologists' understanding of phenotypic behavior associated with specific genes. We present PRODIGEN (PRObability DIstribution of GEne Networks), a web-based visual analysis tool for the systematic exploration of probability distributions over simulation time and state space in such networks. PRODIGEN was designed in collaboration with bioinformaticians who research stochastic gene networks. The analysis tool combines in a novel way existing, expanded, and new visual encodings to capture the time-varying characteristics of probability distributions: spaghetti plots over one dimensional projection, heatmaps of distributions over 2D projections, enhanced with overlaid time curves to display temporal changes, and novel individual glyphs of state information corresponding to particular peaks. We demonstrate the effectiveness of the tool through two case studies on the computed probabilistic landscape of a gene regulatory network and of a toggle-switch network. Domain expert feedback indicates that our visual approach can help biologists: 1) visualize probabilities of stable states, 2) explore the temporal probability distributions, and 3) discover small peaks in the probability landscape that have potential relation to specific diseases.

  10. Visual event-related potentials to biological motion stimuli in autism spectrum disorders

    PubMed Central

    Bletsch, Anke; Krick, Christoph; Siniatchkin, Michael; Jarczok, Tomasz A.; Freitag, Christine M.; Bender, Stephan

    2014-01-01

    Atypical visual processing of biological motion contributes to social impairments in autism spectrum disorders (ASD). However, the exact temporal sequence of deficits of cortical biological motion processing in ASD has not been studied to date. We used 64-channel electroencephalography to study event-related potentials associated with human motion perception in 17 children and adolescents with ASD and 21 typical controls. A spatio-temporal source analysis was performed to assess the brain structures involved in these processes. We expected altered activity already during early stimulus processing and reduced activity during subsequent biological motion specific processes in ASD. In response to both random and biological motion, the P100 amplitude was decreased, suggesting unspecific deficits in visual processing, and the occipito-temporal N200 showed atypical lateralization in ASD, suggesting altered hemispheric specialization. A slow positive deflection after 400 ms, reflecting top-down processes, and human motion-specific dipole activation differed slightly between groups, with reduced and more diffuse activation in the ASD group. The latter could be an indicator of a disrupted neuronal network for biological motion processing in ASD. Furthermore, early visual processing (P100) seems to be correlated to biological motion-specific activation. This emphasizes the relevance of early sensory processing for higher order processing deficits in ASD. PMID:23887808

  11. Increases in the autistic trait of attention to detail are associated with decreased multisensory temporal adaptation.

    PubMed

    Stevenson, Ryan A; Toulmin, Jennifer K; Youm, Ariana; Besney, Richard M A; Schulz, Samantha E; Barense, Morgan D; Ferber, Susanne

    2017-10-30

    Recent empirical evidence suggests that autistic individuals perceive the world differently than their typically-developed peers. One theoretical account, the predictive coding hypothesis, posits that autistic individuals show a decreased reliance on previous perceptual experiences, which may relate to autism symptomatology. We tested this through a well-characterized, audiovisual statistical-learning paradigm in which typically-developed participants were first adapted to consistent temporal relationships between audiovisual stimulus pairs (audio-leading, synchronous, visual-leading) and then performed a simultaneity judgement task with audiovisual stimulus pairs varying in temporal offset from auditory-leading to visual-leading. Following exposure to the visual-leading adaptation phase, participants' perception of synchrony was biased towards visual-leading presentations, reflecting the statistical regularities of their previously experienced environment. Importantly, the strength of adaptation was significantly related to the level of autistic traits that the participant exhibited, measured by the Autism Quotient (AQ). This was specific to the Attention to Detail subscale of the AQ that assesses the perceptual propensity to focus on fine-grain aspects of sensory input at the expense of more integrative perceptions. More severe Attention to Detail was related to weaker adaptation. These results support the predictive coding framework, and suggest that changes in sensory perception commonly reported in autism may contribute to autistic symptomatology.
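    Recalibration strength in such a paradigm is typically quantified as a shift in the point of subjective simultaneity (PSS) after adaptation. A minimal sketch, estimating the PSS as the centroid of "synchronous" responses over stimulus onset asynchronies (SOAs); the data and the centroid estimator are illustrative assumptions, not the authors' fitting procedure:

    ```python
    # Estimate the point of subjective simultaneity (PSS) from a
    # simultaneity judgement task, before and after adaptation to
    # visual-leading audiovisual pairs. Data are hypothetical.
    # Convention: negative SOA = audio leading, positive = visual leading.

    def pss(soas, p_sync):
        """Centroid (ms) of the synchrony-response distribution."""
        return sum(s * p for s, p in zip(soas, p_sync)) / sum(p_sync)

    soas = [-200, -100, 0, 100, 200]  # audiovisual offsets (ms)

    baseline    = [0.2, 0.7, 0.9, 0.7, 0.2]  # proportion "synchronous"
    after_adapt = [0.1, 0.4, 0.8, 0.9, 0.5]  # after visual-leading exposure

    shift = pss(soas, after_adapt) - pss(soas, baseline)
    print(round(shift, 1))  # positive: synchrony biased toward visual-leading
    ```

    Concurrent, opposite recalibrations for two stimulus pairs would appear as PSS shifts of opposite sign when this analysis is run separately per pair.
    
    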

  12. On the cyclic nature of perception in vision versus audition

    PubMed Central

    VanRullen, Rufin; Zoefel, Benedikt; Ilhan, Barkin

    2014-01-01

    Does our perceptual awareness consist of a continuous stream, or a discrete sequence of perceptual cycles, possibly associated with the rhythmic structure of brain activity? This has been a long-standing question in neuroscience. We review recent psychophysical and electrophysiological studies indicating that part of our visual awareness proceeds in approximately 7–13 Hz cycles rather than continuously. On the other hand, experimental attempts at applying similar tools to demonstrate the discreteness of auditory awareness have been largely unsuccessful. We argue and demonstrate experimentally that visual and auditory perception are not equally affected by temporal subsampling of their respective input streams: video sequences remain intelligible at sampling rates of two to three frames per second, whereas audio inputs lose their fine temporal structure, and thus all significance, below 20–30 samples per second. This does not mean, however, that our auditory perception must proceed continuously. Instead, we propose that audition could still involve perceptual cycles, but the periodic sampling should happen only after the stage of auditory feature extraction. In addition, although visual perceptual cycles can follow one another at a spontaneous pace largely independent of the visual input, auditory cycles may need to sample the input stream more flexibly, by adapting to the temporal structure of the auditory inputs. PMID:24639585

  13. 3D Shape Perception in Posterior Cortical Atrophy: A Visual Neuroscience Perspective

    PubMed Central

    Gillebert, Céline R.; Schaeverbeke, Jolien; Bastin, Christine; Neyens, Veerle; Bruffaerts, Rose; De Weer, An-Sofie; Seghers, Alexandra; Sunaert, Stefan; Van Laere, Koen; Versijpt, Jan; Vandenbulcke, Mathieu; Salmon, Eric; Todd, James T.; Orban, Guy A.

    2015-01-01

    Posterior cortical atrophy (PCA) is a rare focal neurodegenerative syndrome characterized by progressive visuoperceptual and visuospatial deficits, most often due to atypical Alzheimer's disease (AD). We applied insights from basic visual neuroscience to analyze 3D shape perception in humans affected by PCA. Thirteen PCA patients and 30 matched healthy controls participated, together with two patient control groups with diffuse Lewy body dementia (DLBD) and an amnestic-dominant phenotype of AD, respectively. The hierarchical study design consisted of 3D shape processing for 4 cues (shading, motion, texture, and binocular disparity) with corresponding 2D and elementary feature extraction control conditions. PCA and DLBD exhibited severe 3D shape-processing deficits and AD to a lesser degree. In PCA, deficient 3D shape-from-shading was associated with volume loss in the right posterior inferior temporal cortex. This region coincided with a region of functional activation during 3D shape-from-shading in healthy controls. In PCA patients who performed the same fMRI paradigm, response amplitude during 3D shape-from-shading was reduced in this region. Gray matter volume in this region also correlated with 3D shape-from-shading in AD. 3D shape-from-disparity in PCA was associated with volume loss slightly more anteriorly in posterior inferior temporal cortex as well as in ventral premotor cortex. The findings in right posterior inferior temporal cortex and right premotor cortex are consistent with neurophysiologically based models of the functional anatomy of 3D shape processing. However, in DLBD, 3D shape deficits rely on mechanisms distinct from inferior temporal structural integrity. 
SIGNIFICANCE STATEMENT Posterior cortical atrophy (PCA) is a neurodegenerative syndrome characterized by progressive visuoperceptual dysfunction and most often an atypical presentation of Alzheimer's disease (AD) affecting the ventral and dorsal visual streams rather than the medial temporal system. We applied insights from fundamental visual neuroscience to analyze 3D shape perception in PCA. 3D shape processing was impaired beyond what could be accounted for by lower-order processing deficits. For shading and disparity, this was related to volume loss in regions previously implicated in 3D shape processing in the intact human and nonhuman primate brain. Typical amnestic-dominant AD patients also exhibited 3D shape deficits. Advanced visual neuroscience provides insight into the pathogenesis of PCA that also bears relevance for vision in typical AD. PMID:26377458

  14. Toward the influence of temporal attention on the selection of targets in a visual search task: An ERP study.

    PubMed

    Rolke, Bettina; Festl, Freya; Seibold, Verena C

    2016-11-01

    We used ERPs to investigate whether temporal attention interacts with spatial attention and feature-based attention to enhance visual processing. We presented a visual search display containing one singleton stimulus among a set of homogenous distractors. Participants were asked to respond only to target singletons of a particular color and shape that were presented in an attended spatial position. We manipulated temporal attention by presenting a warning signal before each search display and varying the foreperiod (FP) between the warning signal and the search display in a blocked manner. We observed distinctive ERP effects of both spatial and temporal attention. The amplitudes for the N2pc, SPCN, and P3 were enhanced by spatial attention indicating a processing benefit of relevant stimulus features at the attended side. Temporal attention accelerated stimulus processing; this was indexed by an earlier onset of the N2pc component and a reduction in reaction times to targets. Most importantly, temporal attention did not interact with spatial attention or stimulus features to influence visual processing. Taken together, the results suggest that temporal attention fosters visual perceptual processing in a visual search task independently from spatial attention and feature-based attention; this provides support for the nonspecific enhancement hypothesis of temporal attention. © 2016 Society for Psychophysiological Research.

  15. Spatial and temporal coherence in perceptual binding

    PubMed Central

    Blake, Randolph; Yang, Yuede

    1997-01-01

    Component visual features of objects are registered by distributed patterns of activity among neurons comprising multiple pathways and visual areas. How these distributed patterns of activity give rise to unified representations of objects remains unresolved, although one recent, controversial view posits temporal coherence of neural activity as a binding agent. Motivated by the possible role of temporal coherence in feature binding, we devised a novel psychophysical task that requires the detection of temporal coherence among features comprising complex visual images. Results show that human observers can more easily detect synchronized patterns of temporal contrast modulation within hybrid visual images composed of two components when those components are drawn from the same original picture. Evidently, time-varying changes within spatially coherent features produce more salient neural signals. PMID:9192701

  16. Evidence for Deficits in the Temporal Attention Span of Poor Readers

    PubMed Central

    Visser, Troy A. W.

    2014-01-01

Background While poor reading is often associated with phonological deficits, many studies suggest that visual processing might also be impaired. In particular, recent research has indicated that poor readers show impaired spatial visual attention spans in partial and whole report tasks. Given the similarities between competition-based accounts for reduced visual attention span and similar explanations for impairments in sequential object processing, the present work examined whether poor readers show deficits in their “temporal attention span” – that is, their ability to rapidly and accurately process sequences of consecutive target items. Methodology/Principal Findings Poor and normal readers monitored a sequential stream of visual items for two (TT condition) or three (TTT condition) consecutive target digits. Target identification was examined using both unconditional and conditional measures of accuracy in order to gauge the overall likelihood of identifying a target and the likelihood of identifying a target given successful identification of previous items. Compared to normal readers, poor readers showed small but consistent deficits in identification across targets whether unconditional or conditional accuracy was used. Additionally, in the TTT condition, final-target conditional accuracy was poorer than unconditional accuracy, particularly for poor readers, suggesting a substantial cost arising from processing the previous two targets that was not present in normal readers. Conclusions/Significance Mirroring the differences found between poor and normal readers in spatial visual attention span, the present findings suggest two principal differences between the temporal attention spans of poor and normal readers. First, the consistent pattern of reduced performance across targets suggests increased competition amongst items within the same span for poor readers. Second, the steeper decline in final target performance amongst poor readers in the TTT condition suggests a reduction in the extent of their temporal attention span. PMID:24651313

  17. Evidence for deficits in the temporal attention span of poor readers.

    PubMed

    Visser, Troy A W

    2014-01-01

While poor reading is often associated with phonological deficits, many studies suggest that visual processing might also be impaired. In particular, recent research has indicated that poor readers show impaired spatial visual attention spans in partial and whole report tasks. Given the similarities between competition-based accounts for reduced visual attention span and similar explanations for impairments in sequential object processing, the present work examined whether poor readers show deficits in their "temporal attention span"--that is, their ability to rapidly and accurately process sequences of consecutive target items. Poor and normal readers monitored a sequential stream of visual items for two (TT condition) or three (TTT condition) consecutive target digits. Target identification was examined using both unconditional and conditional measures of accuracy in order to gauge the overall likelihood of identifying a target and the likelihood of identifying a target given successful identification of previous items. Compared to normal readers, poor readers showed small but consistent deficits in identification across targets whether unconditional or conditional accuracy was used. Additionally, in the TTT condition, final-target conditional accuracy was poorer than unconditional accuracy, particularly for poor readers, suggesting a substantial cost arising from processing the previous two targets that was not present in normal readers. Mirroring the differences found between poor and normal readers in spatial visual attention span, the present findings suggest two principal differences between the temporal attention spans of poor and normal readers. First, the consistent pattern of reduced performance across targets suggests increased competition amongst items within the same span for poor readers. Second, the steeper decline in final target performance amongst poor readers in the TTT condition suggests a reduction in the extent of their temporal attention span.

  18. Brain signal complexity rises with repetition suppression in visual learning.

    PubMed

    Lafontaine, Marc Philippe; Lacourse, Karine; Lina, Jean-Marc; McIntosh, Anthony R; Gosselin, Frédéric; Théoret, Hugo; Lippé, Sarah

    2016-06-21

Neuronal activity associated with visual processing of an unfamiliar face gradually diminishes when it is viewed repeatedly. This process, known as repetition suppression (RS), is involved in the acquisition of familiarity. Current models suggest that RS results from interactions between visual information processing areas located in the occipito-temporal cortex and higher order areas, such as the dorsolateral prefrontal cortex (DLPFC). Brain signal complexity, which reflects information dynamics of cortical networks, has been shown to increase as unfamiliar faces become familiar. However, the complementarity of RS and increases in brain signal complexity has yet to be demonstrated within the same measurements. We hypothesized that RS and increases in brain signal complexity occur simultaneously during learning of unfamiliar faces. Further, we expected alteration of DLPFC function by transcranial direct current stimulation (tDCS) to modulate RS and brain signal complexity over the occipito-temporal cortex. Participants underwent three tDCS conditions in random order: right anodal/left cathodal, right cathodal/left anodal, and sham. Following tDCS, participants learned unfamiliar faces while an electroencephalogram (EEG) was recorded. Results revealed RS over occipito-temporal electrode sites during learning, reflected by a decrease in signal energy, a measure of amplitude. Simultaneously, as signal energy decreased, brain signal complexity, as estimated with multiscale entropy (MSE), increased. In addition, prefrontal tDCS modulated brain signal complexity over the right occipito-temporal cortex during the first presentation of faces. These results suggest that although RS may reflect a brain mechanism essential to learning, complementary processes, reflected by increases in brain signal complexity, may be instrumental in the acquisition of novel visual information. Such processes likely involve long-range coordinated activity between prefrontal and lower order visual areas. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
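The multiscale entropy (MSE) measure used in record 18 is a standard two-step procedure: coarse-grain the signal at successive timescales, then compute sample entropy at each scale. The sketch below is a minimal, generic formulation of that measure (not the authors' implementation); the parameter defaults m=2 and r = 0.2 × SD are conventional choices, not values taken from the study.

```python
import math
import random

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D series using Chebyshev distance.

    Counts pairs of length-m templates that match within tolerance r, and the
    fraction of those that still match when extended to length m+1.
    """
    n = len(x)
    if r is None:
        # Conventional default: 20% of the (population) standard deviation.
        mean = sum(x) / n
        r = 0.2 * math.sqrt(sum((v - mean) ** 2 for v in x) / n)

    def count_matches(length):
        templates = [x[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b = count_matches(m)       # matches at template length m
    a = count_matches(m + 1)   # matches at template length m + 1
    if a == 0 or b == 0:
        return float("inf")    # undefined; signal too short or too regular
    return -math.log(a / b)

def multiscale_entropy(x, max_scale=5, m=2):
    """MSE: coarse-grain x by non-overlapping averaging at each scale tau,
    then compute sample entropy of each coarse-grained series."""
    result = []
    for tau in range(1, max_scale + 1):
        coarse = [sum(x[i:i + tau]) / tau
                  for i in range(0, len(x) - tau + 1, tau)]
        result.append(sample_entropy(coarse, m=m))
    return result
```

A regular signal (e.g., a strict alternation) yields near-zero sample entropy, while noise yields a clearly higher value; rising MSE across learning, as reported in record 18, would indicate richer signal dynamics at multiple timescales. Note that the canonical formulation fixes r from the original series' SD for all scales; here r is recomputed per coarse-grained series when left unspecified, a simplification.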

  19. Region-Specific Slowing of Alpha Oscillations is Associated with Visual-Perceptual Abilities in Children Born Very Preterm

    PubMed Central

    Doesburg, Sam M.; Moiseev, Alexander; Herdman, Anthony T.; Ribary, Urs; Grunau, Ruth E.

    2013-01-01

Children born very preterm (≤32 weeks gestational age) without major intellectual or neurological impairments often express selective deficits in visual-perceptual abilities. The alterations in neurophysiological development underlying these problems, however, remain poorly understood. Recent research has indicated that spontaneous alpha oscillations are slowed in children born very preterm, and that atypical alpha-mediated functional network connectivity may underlie selective developmental difficulties in visual-perceptual ability in this group. The present study provides the first source-resolved analysis of slowing of spontaneous alpha oscillations in very preterm children, indicating alterations in a distributed set of brain regions concentrated in posterior parietal and inferior temporal regions associated with visual perception, as well as prefrontal cortical regions and thalamus. We also uniquely demonstrate that slowing of alpha oscillations is associated with selective difficulties in visual-perceptual ability in very preterm children. These results indicate that region-specific slowing of alpha oscillations contributes to selective developmental difficulties prevalent in this population. PMID:24298250

  20. Coastal On-line Assessment and Synthesis Tool 2.0

    NASA Technical Reports Server (NTRS)

    Brown, Richard; Navard, Andrew; Nguyen, Beth

    2011-01-01

    COAST (Coastal On-line Assessment and Synthesis Tool) is a 3D, open-source Earth data browser developed by leveraging and enhancing previous NASA open-source tools. These tools use satellite imagery and elevation data in a way that allows any user to zoom from orbit view down into any place on Earth, and enables the user to experience Earth terrain in a visually rich 3D view. The benefits associated with taking advantage of an open-source geo-browser are that it is free, extensible, and offers a worldwide developer community that is available to provide additional development and improvement potential. What makes COAST unique is that it simplifies the process of locating and accessing data sources, and allows a user to combine them into a multi-layered and/or multi-temporal visual analytical look into possible data interrelationships and coeffectors for coastal environment phenomenology. COAST provides users with new data visual analytic capabilities. COAST has been upgraded to maximize use of open-source data access, viewing, and data manipulation software tools. The COAST 2.0 toolset has been developed to increase access to a larger realm of the most commonly implemented data formats used by the coastal science community. New and enhanced functionalities that upgrade COAST to COAST 2.0 include the development of the Temporal Visualization Tool (TVT) plug-in, the Recursive Online Remote Data-Data Mapper (RECORD-DM) utility, the Import Data Tool (IDT), and the Add Points Tool (APT). With these improvements, users can integrate their own data with other data sources, and visualize the resulting layers of different data types (such as spatial and spectral, for simultaneous visual analysis), and visualize temporal changes in areas of interest.

  1. Representations of temporal information in short-term memory: Are they modality-specific?

    PubMed

    Bratzke, Daniel; Quinn, Katrina R; Ulrich, Rolf; Bausenhart, Karin M

    2016-10-01

Rattat and Picard (2012) reported that the coding of temporal information in short-term memory is modality-specific; that is, temporal information received via the visual (auditory) modality is stored as a visual (auditory) code. This conclusion was supported by modality-specific interference effects on visual and auditory duration discrimination, which were induced by secondary tasks (visual tracking or articulatory suppression) presented during a retention interval. The present study assessed the stability of these modality-specific interference effects. Our study did not replicate the selective interference pattern but rather indicated that articulatory suppression impairs short-term memory not only for auditory but also for visual durations. This result pattern supports a crossmodal or an abstract view of temporal encoding. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution

    PubMed Central

    Hertz, Uri; Amedi, Amir

    2015-01-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756

  3. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution.

    PubMed

    Hertz, Uri; Amedi, Amir

    2015-08-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. © The Author 2014. Published by Oxford University Press.

  4. Person perception involves functional integration between the extrastriate body area and temporal pole.

    PubMed

    Greven, Inez M; Ramsey, Richard

    2017-02-01

The majority of human neuroscience research has focussed on understanding functional organisation within segregated patches of cortex. The ventral visual stream has been associated with the detection of physical features such as faces and body parts, whereas the theory-of-mind network has been associated with making inferences about mental states and underlying character, such as whether someone is friendly, selfish, or generous. To date, however, it is largely unknown how such distinct processing components integrate neural signals. Using functional magnetic resonance imaging and connectivity analyses, we investigated the contribution of functional integration to social perception. During scanning, participants observed bodies that had previously been associated with trait-based or neutral information. Additionally, we independently localised the body perception and theory-of-mind networks. We demonstrate that when observing someone who cues the recall of stored social knowledge compared to non-social knowledge, a node in the ventral visual stream (extrastriate body area) shows greater coupling with part of the theory-of-mind network (temporal pole). These results show that functional connections provide an interface between perceptual and inferential processing components, thus providing neurobiological evidence that supports the view that understanding the visual environment involves interplay between conceptual knowledge and perceptual processing. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Words in Context: The Effects of Length, Frequency, and Predictability on Brain Responses During Natural Reading

    PubMed Central

    Schuster, Sarah; Hawelka, Stefan; Hutzler, Florian; Kronbichler, Martin; Richlan, Fabio

    2016-01-01

Word length, frequency, and predictability count among the most influential variables during reading. Their effects are well-documented in eye movement studies, but pertinent evidence from neuroimaging primarily stems from single-word presentations. We investigated the effects of these variables during reading of whole sentences with simultaneous eye-tracking and functional magnetic resonance imaging (fixation-related fMRI). Increasing word length was associated with increasing activation in occipital areas linked to visual analysis. Additionally, length elicited a U-shaped modulation (i.e., least activation for medium-length words) within a brain stem region presumably linked to eye movement control. These effects, however, were diminished when accounting for multiple fixation cases. Increasing frequency was associated with decreasing activation within left inferior frontal, superior parietal, and occipito-temporal regions. The function of the latter region—hosting the putative visual word form area—was originally considered limited to sublexical processing. An exploratory analysis revealed that increasing predictability was associated with decreasing activation within middle temporal and inferior frontal regions previously implicated in memory access and unification. The findings are discussed with regard to their correspondence with findings from single-word presentations and with regard to neurocognitive models of visual word recognition, semantic processing, and eye movement control during reading. PMID:27365297

  6. Imaging systems level consolidation of novel associate memories: A longitudinal neuroimaging study

    PubMed Central

    Smith, Jason F; Alexander, Gene E; Chen, Kewei; Husain, Fatima T; Kim, Jieun; Pajor, Nathan; Horwitz, Barry

    2010-01-01

Previously, a standard theory of systems level memory consolidation was developed to describe how memory recall becomes independent of the medial temporal memory system. More recently, an extended consolidation theory was proposed that predicts seven changes in regional neural activity and inter-regional functional connectivity. Using longitudinal event-related functional magnetic resonance imaging of an associate memory task, we simultaneously tested all predictions and additionally tested for consolidation-related changes in recall of associate memories at a sub-trial temporal resolution, analyzing cue, delay, and target periods of each trial separately. Results consistent with the theoretical predictions were observed, though two inconsistent results were also obtained. In particular, while recall-related delay period activity decreased with consolidation as predicted, visual cue activity increased for consolidated memories. Though the extended theory of memory consolidation is largely supported by our study, these results suggest the extended theory needs further refinement and the medial temporal memory system has multiple, temporally distinct roles in associate memory recall. Neuroimaging analysis at a sub-trial temporal resolution, as used here, may further clarify the role of the hippocampal complex in memory consolidation. PMID:19948227

  7. Audiovisual Modulation in Mouse Primary Visual Cortex Depends on Cross-Modal Stimulus Configuration and Congruency.

    PubMed

    Meijer, Guido T; Montijn, Jorrit S; Pennartz, Cyriel M A; Lansink, Carien S

    2017-09-06

The sensory neocortex is a highly connected associative network that integrates information from multiple senses, even at the level of the primary sensory areas. Although a growing body of empirical evidence supports this view, the neural mechanisms of cross-modal integration in primary sensory areas, such as the primary visual cortex (V1), are still largely unknown. Using two-photon calcium imaging in awake mice, we show that the encoding of audiovisual stimuli in V1 neuronal populations is highly dependent on the features of the stimulus constituents. When the visual and auditory stimulus features were modulated at the same rate (i.e., temporally congruent), neurons responded with either an enhancement or suppression compared with unisensory visual stimuli, and their prevalence was balanced. Temporally incongruent tones or white-noise bursts included in audiovisual stimulus pairs resulted in predominant response suppression across the neuronal population. Visual contrast did not influence multisensory processing when the audiovisual stimulus pairs were congruent; however, when white-noise bursts were used, neurons generally showed response suppression when the visual stimulus contrast was high whereas this effect was absent when the visual contrast was low. Furthermore, a small fraction of V1 neurons, predominantly those located near the lateral border of V1, responded to sound alone. These results show that V1 is involved in the encoding of cross-modal interactions in a more versatile way than previously thought. SIGNIFICANCE STATEMENT The neural substrate of cross-modal integration is not limited to specialized cortical association areas but extends to primary sensory areas. Using two-photon imaging of large groups of neurons, we show that multisensory modulation of V1 populations is strongly determined by the individual and shared features of cross-modal stimulus constituents, such as contrast, frequency, congruency, and temporal structure. Congruent audiovisual stimulation resulted in a balanced pattern of response enhancement and suppression compared with unisensory visual stimuli, whereas incongruent or dissimilar stimuli at full contrast gave rise to a population dominated by response-suppressing neurons. Our results indicate that V1 dynamically integrates nonvisual sources of information while still attributing most of its resources to coding visual information. Copyright © 2017 the authors.

  8. Dysfunctional visual word form processing in progressive alexia

    PubMed Central

    Rising, Kindle; Stib, Matthew T.; Rapcsak, Steven Z.; Beeson, Pélagie M.

    2013-01-01

Progressive alexia is an acquired reading deficit caused by degeneration of brain regions that are essential for written word processing. Functional imaging studies have shown that early processing of the visual word form depends on a hierarchical posterior-to-anterior processing stream in occipito-temporal cortex, whereby successive areas code increasingly larger and more complex perceptual attributes of the letter string. A region located in the left lateral occipito-temporal sulcus and adjacent fusiform gyrus shows maximal selectivity for words and has been dubbed the ‘visual word form area’. We studied two patients with progressive alexia in order to determine whether their reading deficits were associated with structural and/or functional abnormalities in this visual word form system. Voxel-based morphometry showed left-lateralized occipito-temporal atrophy in both patients, very mild in one, but moderate to severe in the other. The two patients, along with 10 control subjects, were scanned with functional magnetic resonance imaging as they viewed rapidly presented words, false font strings, or a fixation crosshair. This paradigm was optimized to reliably map brain regions involved in orthographic processing in individual subjects. All 10 control subjects showed a posterior-to-anterior gradient of selectivity for words, and all 10 showed a functionally defined visual word form area in the left hemisphere that was activated for words relative to false font strings. In contrast, neither of the two patients with progressive alexia showed any evidence for a selectivity gradient or for word-specific activation of the visual word form area. The patient with mild atrophy showed normal responses to both words and false font strings in the posterior part of the visual word form system, but a failure to develop selectivity for words in the more anterior part of the system. In contrast, the patient with moderate to severe atrophy showed minimal activation of any part of the visual word form system for either words or false font strings. Our results suggest that progressive alexia is associated with a dysfunctional visual word form system, with or without substantial cortical atrophy. Furthermore, these findings demonstrate that functional MRI has the potential to reveal the neural bases of cognitive deficits in neurodegenerative patients at very early stages, in some cases before the development of extensive atrophy. PMID:23471694

  9. Dysfunctional visual word form processing in progressive alexia.

    PubMed

    Wilson, Stephen M; Rising, Kindle; Stib, Matthew T; Rapcsak, Steven Z; Beeson, Pélagie M

    2013-04-01

Progressive alexia is an acquired reading deficit caused by degeneration of brain regions that are essential for written word processing. Functional imaging studies have shown that early processing of the visual word form depends on a hierarchical posterior-to-anterior processing stream in occipito-temporal cortex, whereby successive areas code increasingly larger and more complex perceptual attributes of the letter string. A region located in the left lateral occipito-temporal sulcus and adjacent fusiform gyrus shows maximal selectivity for words and has been dubbed the 'visual word form area'. We studied two patients with progressive alexia in order to determine whether their reading deficits were associated with structural and/or functional abnormalities in this visual word form system. Voxel-based morphometry showed left-lateralized occipito-temporal atrophy in both patients, very mild in one, but moderate to severe in the other. The two patients, along with 10 control subjects, were scanned with functional magnetic resonance imaging as they viewed rapidly presented words, false font strings, or a fixation crosshair. This paradigm was optimized to reliably map brain regions involved in orthographic processing in individual subjects. All 10 control subjects showed a posterior-to-anterior gradient of selectivity for words, and all 10 showed a functionally defined visual word form area in the left hemisphere that was activated for words relative to false font strings. In contrast, neither of the two patients with progressive alexia showed any evidence for a selectivity gradient or for word-specific activation of the visual word form area. The patient with mild atrophy showed normal responses to both words and false font strings in the posterior part of the visual word form system, but a failure to develop selectivity for words in the more anterior part of the system. In contrast, the patient with moderate to severe atrophy showed minimal activation of any part of the visual word form system for either words or false font strings. Our results suggest that progressive alexia is associated with a dysfunctional visual word form system, with or without substantial cortical atrophy. Furthermore, these findings demonstrate that functional MRI has the potential to reveal the neural bases of cognitive deficits in neurodegenerative patients at very early stages, in some cases before the development of extensive atrophy.

  10. Visualization of Spatio-Temporal Relations in Movement Event Using Multi-View

    NASA Astrophysics Data System (ADS)

    Zheng, K.; Gu, D.; Fang, F.; Wang, Y.; Liu, H.; Zhao, W.; Zhang, M.; Li, Q.

    2017-09-01

    Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.

  11. Dynamic functional connectivity shapes individual differences in associative learning.

    PubMed

    Fatima, Zainab; Kovacevic, Natasha; Misic, Bratislav; McIntosh, Anthony Randal

    2016-11-01

Current neuroscientific research has shown that the brain reconfigures its functional interactions at multiple timescales. Here, we sought to link transient changes in functional brain networks to individual differences in behavioral and cognitive performance by using an active learning paradigm. Participants learned associations between pairs of unrelated visual stimuli by using feedback. Interindividual behavioral variability was quantified with a learning rate measure. By using a multivariate statistical framework (partial least squares), we identified patterns of network organization across multiple temporal scales (within a trial, millisecond; across a learning session, minute) and linked these to the rate of change in behavioral performance (fast and slow). Results indicated that posterior network connectivity was present early in the trial for fast, and later in the trial for slow performers. In contrast, connectivity in an associative memory network (frontal, striatal, and medial temporal regions) occurred later in the trial for fast, and earlier for slow performers. Time-dependent changes in the posterior network were correlated with visual/spatial scores obtained from independent neuropsychological assessments, with fast learners performing better on visual/spatial subtests. No relationship was found between functional connectivity dynamics in the memory network and visual/spatial test scores indicative of cognitive skill. By using a comprehensive set of measures (behavioral, cognitive, and neurophysiological), we report that individual variations in learning-related performance change are supported by differences in cognitive ability and time-sensitive connectivity in functional neural networks. Hum Brain Mapp 37:3911-3928, 2016. © 2016 Wiley Periodicals, Inc.

  12. Structural and functional analyses of human cerebral cortex using a surface-based atlas

    NASA Technical Reports Server (NTRS)

    Van Essen, D. C.; Drury, H. A.

    1997-01-01

    We have analyzed the geometry, geography, and functional organization of human cerebral cortex using surface reconstructions and cortical flat maps of the left and right hemispheres generated from a digital atlas (the Visible Man). The total surface area of the reconstructed Visible Man neocortex is 1570 cm2 (both hemispheres), approximately 70% of which is buried in sulci. By linking the Visible Man cerebrum to the Talairach stereotaxic coordinate space, the locations of activation foci reported in neuroimaging studies can be readily visualized in relation to the cortical surface. The associated spatial uncertainty was empirically shown to have a radius in three dimensions of approximately 10 mm. Application of this approach to studies of visual cortex reveals the overall patterns of activation associated with different aspects of visual function and the relationship of these patterns to topographically organized visual areas. Our analysis supports a distinction between an anterior region in ventral occipito-temporal cortex that is selectively involved in form processing and a more posterior region (in or near areas VP and V4v) involved in both form and color processing. Foci associated with motion processing are mainly concentrated in a region along the occipito-temporal junction, the ventral portion of which overlaps with foci also implicated in form processing. Comparisons between flat maps of human and macaque monkey cerebral cortex indicate significant differences as well as many similarities in the relative sizes and positions of cortical regions known or suspected to be homologous in the two species.

  13. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compression ratios (30-40:1) that exceed those of all other technologies. These capabilities are associated with unprecedented coding efficiency; coding and decoding operations are entirely linear with respect to image size and run 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding considered here exploits all the advantages of static VPIC while also reducing information along the additional, temporal dimension, achieving unprecedented image-sequence coding performance.

  14. Verbal creativity in semantic variant primary progressive aphasia.

    PubMed

    Wu, Teresa Q; Miller, Zachary A; Adhimoolam, Babu; Zackey, Diana D; Khan, Baber K; Ketelle, Robin; Rankin, Katherine P; Miller, Bruce L

    2015-02-01

    Emergence of visual and musical creativity in the setting of neurologic disease has been reported in patients with semantic variant primary progressive aphasia (svPPA), also called semantic dementia (SD). It is hypothesized that loss of left anterior frontotemporal function facilitates activity of the right posterior hemispheric structures, leading to de novo creativity observed in visual artistic representation. We describe creativity in the verbal domain, for the first time, in three patients with svPPA. Clinical presentations are carefully described in three svPPA patients exhibiting verbal creativity, including neuropsychology, neurologic exam, and structural magnetic resonance imaging (MRI). Voxel-based morphometry (VBM) was performed to quantify brain atrophy patterns in these patients against age-matched healthy controls. All three patients displayed new-onset creative writing behavior and produced extensive original work during the course of disease. Patient A developed interest in wordplay and generated a large volume of poetry. Patient B became fascinated with rhyming and punning. Patient C wrote and published a lifestyle guidebook. An overlap of their structural MR scans showed uniform sparing in the lateral portions of the language-dominant temporal lobe (superior and middle gyri) and atrophy in the medial temporal cortex (amygdala, limbic cortex). New-onset creativity in svPPA may represent a paradoxical functional facilitation. A similar drive for production is found in visually artistic and verbally creative patients. Mirroring the imaging findings in visually artistic patients, verbal preoccupation and creativity may be associated with medial atrophy in the language-dominant temporal lobe, but sparing of lateral dominant temporal and non-dominant posterior cortices.

  15. Selective verbal recognition memory impairments are associated with atrophy of the language network in non-semantic variants of primary progressive aphasia.

    PubMed

    Nilakantan, Aneesha S; Voss, Joel L; Weintraub, Sandra; Mesulam, M-Marsel; Rogalski, Emily J

    2017-06-01

    Primary progressive aphasia (PPA) is clinically defined by an initial loss of language function and preservation of other cognitive abilities, including episodic memory. While PPA primarily affects the left-lateralized perisylvian language network, some clinical neuropsychological tests suggest concurrent initial memory loss. The goal of this study was to test recognition memory of objects and words in the visual and auditory modality to separate language-processing impairments from retentive memory in PPA. Individuals with non-semantic PPA had longer reaction times and higher false alarms for auditory word stimuli compared to visual object stimuli. Moreover, false alarms for auditory word recognition memory were related to cortical thickness within the left inferior frontal gyrus and left temporal pole, while false alarms for visual object recognition memory were related to cortical thickness within the right temporal pole. This pattern of results suggests that a specific vulnerability in processing verbal stimuli can hinder episodic memory in PPA, and provides evidence for differential contributions of the left and right temporal poles in word and object recognition memory. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Retinal thinning is uniquely associated with medial temporal lobe atrophy in neurologically normal older adults

    PubMed Central

    Casaletto, Kaitlin B.; Ward, Michael E.; Baker, Nicholas S.; Bettcher, Brianne M.; Gelfand, Jeffrey M.; Li, Yaqiao; Chen, Robert; Dutt, Shubir; Miller, Bruce; Kramer, Joel H.; Green, Ari J.

    2017-01-01

    Given the converging pathologic and epidemiologic data indicating a relationship between retinal integrity and neurodegeneration, including Alzheimer’s disease (AD), we aimed to determine if retinal structure correlates with medial temporal lobe (MTL) structure and function in neurologically normal older adults. Spectral-domain optical coherence tomography, verbal and visual memory testing, and 3T-magnetic resonance imaging of the brain were performed in 79 neurologically normal adults enrolled in a healthy aging cohort study. Retinal nerve fiber thinning and reduced total macular and macular ganglion cell volumes were each associated with smaller MTL volumes (ps < 0.04). Notably, these markers of retinal structure were not associated with primary motor cortex or basal ganglia volumes (regions relatively unaffected in AD; ps > 0.70), or frontal, precuneus, or temporoparietal volumes (regions affected in later AD Braak stages; ps > 0.20). Retinal structure was not significantly associated with verbal or visual memory consolidation performances (ps > 0.14). Retinal structure was associated with MTL volumes, but not memory performances, in otherwise neurologically normal older adults. Given that MTL atrophy is a neuropathological hallmark of AD, retinal integrity may be an early marker of ongoing AD-related brain health. PMID:28068565

  17. Navigation ability dependent neural activation in the human brain: an fMRI study.

    PubMed

    Ohnishi, Takashi; Matsuda, Hiroshi; Hirakata, Makiko; Ugawa, Yoshikazu

    2006-08-01

    Visual-spatial navigation in familiar and unfamiliar environments is an essential requirement of daily life. Animal studies have indicated the importance of the hippocampus for navigation. Neuroimaging studies have demonstrated gender differences and strategy-dependent differences in the neural substrates of navigation. Using functional magnetic resonance imaging, we measured brain activity related to navigation in four groups of normal volunteers: good navigators (males and females) and poor navigators (males and females). In a whole group analysis, task-related activity was noted in the hippocampus, parahippocampal gyrus, posterior cingulate cortex, precuneus, parietal association areas, and the visual association areas. In group comparisons, good navigators showed stronger activation in the medial temporal area and precuneus than poor navigators. There was neither a sex effect nor an interaction between sex and navigation ability. The activity in the left medial temporal areas was positively correlated with task performance, whereas activity in the right parietal area was negatively correlated with task performance. Furthermore, the activity in the bilateral medial temporal areas was positively correlated with scores reflecting preferred navigation strategies, whereas activity in the bilateral superior parietal lobules was negatively correlated with them. Our data suggest that differences in navigation-related brain activity reflect navigation skill and strategy.

  18. Pacing Visual Attention: Temporal Structure Effects

    DTIC Science & Technology

    1993-06-01

    Dissertation, Jun 89 - Jun 93. Only fragments of this record survived extraction from the report documentation page. The recoverable text indicates a finding that persisting temporal relationships may be an important factor, at least to some extent, in the external (exogenous) control of visual attention.

  19. An exploratory study of temporal integration in the peripheral retina of myopes

    NASA Astrophysics Data System (ADS)

    Macedo, Antonio F.; Encarnação, Tito J.; Vilarinho, Daniel; Baptista, António M. G.

    2017-08-01

    The visual system takes time to respond to visual stimuli: neurons need to accumulate information over a time span in order to fire. Visual information perceived by the peripheral retina might be impaired by imperfect peripheral optics, leading to myopia development. This study explored the effect of eccentricity, moderate myopia, and peripheral refraction on temporal visual integration. Myopes and emmetropes showed similar performance at detecting briefly flashed stimuli in different retinal locations. Our results show evidence that moderate myopes have normal visual integration when refractive errors are corrected with contact lenses; however, the tendency toward increased temporal integration thresholds observed in myopes deserves further investigation.

  20. Crossmodal attention switching: auditory dominance in temporal discrimination tasks.

    PubMed

    Lukas, Sarah; Philipp, Andrea M; Koch, Iring

    2014-11-01

    Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are processed with differing effectiveness depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing of auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Cannabis cue-induced brain activation correlates with drug craving in limbic and visual salience regions: Preliminary results

    PubMed Central

    Charboneau, Evonne J.; Dietrich, Mary S.; Park, Sohee; Cao, Aize; Watkins, Tristan J; Blackford, Jennifer U; Benningfield, Margaret M.; Martin, Peter R.; Buchowski, Maciej S.; Cowan, Ronald L.

    2013-01-01

    Craving is a major motivator underlying drug use and relapse but the neural correlates of cannabis craving are not well understood. This study sought to determine whether visual cannabis cues increase cannabis craving and whether cue-induced craving is associated with regional brain activation in cannabis-dependent individuals. Cannabis craving was assessed in 16 cannabis-dependent adult volunteers while they viewed cannabis cues during a functional MRI (fMRI) scan. The Marijuana Craving Questionnaire was administered immediately before and after each of three cannabis cue-exposure fMRI runs. FMRI blood-oxygenation-level-dependent (BOLD) signal intensity was determined in regions activated by cannabis cues to examine the relationship of regional brain activation to cannabis craving. Craving scores increased significantly following exposure to visual cannabis cues. Visual cues activated multiple brain regions, including inferior orbital frontal cortex, posterior cingulate gyrus, parahippocampal gyrus, hippocampus, amygdala, superior temporal pole, and occipital cortex. Craving scores at baseline and at the end of all three runs were significantly correlated with brain activation during the first fMRI run only, in the limbic system (including amygdala and hippocampus) and paralimbic system (superior temporal pole), and visual regions (occipital cortex). Cannabis cues increased craving in cannabis-dependent individuals and this increase was associated with activation in the limbic, paralimbic, and visual systems during the first fMRI run, but not subsequent fMRI runs. These results suggest that these regions may mediate visually cued aspects of drug craving. This study provides preliminary evidence for the neural basis of cue-induced cannabis craving and suggests possible neural targets for interventions targeted at treating cannabis dependence. PMID:24035535

  2. The timing of associative memory formation: frontal lobe and anterior medial temporal lobe activity at associative binding predicts memory

    PubMed Central

    Hales, J. B.

    2011-01-01

    The process of associating items encountered over time and across variable time delays is fundamental for creating memories in daily life, such as for stories and episodes. Forming associative memory for temporally discontiguous items involves medial temporal lobe structures and additional neocortical processing regions, including prefrontal cortex, parietal lobe, and lateral occipital regions. However, most prior memory studies, using concurrently presented stimuli, have failed to examine the temporal aspect of successful associative memory formation to identify when activity in these brain regions is predictive of associative memory formation. In the current study, functional MRI data were acquired while subjects were shown pairs of sequentially presented visual images with a fixed interitem delay within pairs. This design allowed the entire time course of the trial to be analyzed, starting from onset of the first item, across the 5.5-s delay period, and through offset of the second item. Subjects then completed a postscan recognition test for the items and associations they encoded during the scan and their confidence for each. After controlling for item-memory strength, we isolated brain regions selectively involved in associative encoding. Consistent with prior findings, increased regional activity predicting subsequent associative memory success was found in anterior medial temporal lobe regions of left perirhinal and entorhinal cortices and in left prefrontal cortex and lateral occipital regions. The temporal separation within each pair, however, allowed extension of these findings by isolating the timing of regional involvement, showing that increased response in these regions occurs during binding but not during maintenance. PMID:21248058

  3. Natural image sequences constrain dynamic receptive fields and imply a sparse code.

    PubMed

    Häusler, Chris; Susemihl, Alex; Nawrot, Martin P

    2013-11-06

    In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multivariate benchmark dataset, this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
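
The sparse-coding constraint this abstract builds on can be made concrete with a toy example. The paper itself uses temporal restricted Boltzmann machines with a temporal autoencoding procedure; the sketch below shows only the generic idea of a sparse code, recovering a few active coefficients under an L1 penalty via iterative soft thresholding (ISTA), on a random dictionary rather than learned receptive fields.

```python
import numpy as np

# Hedged sketch of sparse coding: represent an input with few active units by
# penalizing activity (L1), solved with ISTA. Dictionary and data are random
# stand-ins, not features learned from natural movies.
rng = np.random.default_rng(1)
n_pixels, n_units = 64, 128
D = rng.standard_normal((n_pixels, n_units))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms

x = D[:, 5] + 0.5 * D[:, 40]              # input built from two atoms
lam, step = 0.1, 0.1                      # L1 weight and gradient step size

a = np.zeros(n_units)
for _ in range(200):
    grad = D.T @ (D @ a - x)              # gradient of reconstruction error
    a = a - step * grad
    a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft threshold

active = np.flatnonzero(np.abs(a) > 1e-3)  # the few units that stay active
```

The soft-threshold step is what enforces sparseness: only atoms that consistently reduce reconstruction error keep nonzero coefficients, so the code for this input concentrates on a small subset of the 128 units.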

  4. Realigning thunder and lightning: temporal adaptation to spatiotemporally distant events.

    PubMed

    Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel

    2013-01-01

    The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiment 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).

  5. Neural pathways for visual speech perception

    PubMed Central

    Bernstein, Lynne E.; Liebenthal, Einat

    2014-01-01

    This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA. PMID:25520611

  6. Multifocal Visual Evoked Potential in Eyes With Temporal Hemianopia From Chiasmal Compression: Correlation With Standard Automated Perimetry and OCT Findings.

    PubMed

    Sousa, Rafael M; Oyamada, Maria K; Cunha, Leonardo P; Monteiro, Mário L R

    2017-09-01

    To verify whether multifocal visual evoked potential (mfVEP) can differentiate eyes with temporal hemianopia due to chiasmal compression from healthy controls. To assess the relationship between mfVEP, standard automated perimetry (SAP), and Fourier domain-optical coherence tomography (FD-OCT) macular and peripapillary retinal nerve fiber layer (RNFL) thickness measurements. Twenty-seven eyes with permanent temporal visual field (VF) defects from chiasmal compression on SAP and 43 eyes of healthy controls were submitted to mfVEP and FD-OCT scanning. Multifocal visual evoked potential was elicited using a stimulus pattern of 60 sectors and the responses were averaged for the four quadrants and two hemifields. Optical coherence tomography macular measurements were averaged in quadrants and halves, while peripapillary RNFL thickness was averaged in four sectors around the disc. Visual field loss was estimated in four quadrants and each half of the 24-2 strategy test points. Multifocal visual evoked potential measurements in the two groups were compared using generalized estimated equations, and the correlations between mfVEP, VF, and OCT findings were quantified. Multifocal visual evoked potential-measured temporal P1 and N2 amplitudes were significantly smaller in patients than in controls. No significant difference in amplitude was observed for nasal parameters. A significant correlation was found between mfVEP amplitudes and temporal VF loss, and between mfVEP amplitudes and the corresponding OCT-measured macular and RNFL thickness parameters. Multifocal visual evoked potential amplitude parameters were able to differentiate eyes with temporal hemianopia from controls and were significantly correlated with VF and OCT findings, suggesting mfVEP is a useful tool for the detection of visual abnormalities in patients with chiasmal compression.

  7. A matter of time: improvement of visual temporal processing during training-induced restoration of light detection performance

    PubMed Central

    Poggel, Dorothe A.; Treutwein, Bernhard; Sabel, Bernhard A.; Strasburger, Hans

    2015-01-01

    The issue of how basic sensory and temporal processing are related is still unresolved. We studied temporal processing, as assessed by simple visual reaction times (RT) and double-pulse resolution (DPR), in patients with partial vision loss after visual pathway lesions and investigated whether vision restoration training (VRT), a training program designed to improve light detection performance, would also affect temporal processing. Perimetric and campimetric visual field tests as well as maps of DPR thresholds and RT were acquired before and after a 3 months training period with VRT. Patient performance was compared to that of age-matched healthy subjects. Intact visual field size increased during training. Averaged across the entire visual field, DPR remained constant while RT improved slightly. However, in transition zones between the blind and intact areas (areas of residual vision) where patients had shown between 20 and 80% of stimulus detection probability in pre-training visual field tests, both DPR and RT improved markedly. The magnitude of improvement depended on the defect depth (or degree of intactness) of the respective region at baseline. Inter-individual training outcome variability was very high, with some patients showing little change and others showing performance approaching that of healthy controls. Training-induced improvement of light detection in patients with visual field loss thus generalized to dynamic visual functions. The findings suggest that similar neural mechanisms may underlie the impairment and subsequent training-induced functional recovery of both light detection and temporal processing. PMID:25717307

  8. More is still not better: testing the perturbation model of temporal reference memory across different modalities and tasks.

    PubMed

    Ogden, Ruth S; Jones, Luke A

    2009-05-01

    The ability of the perturbation model (Jones & Wearden, 2003) to account for reference memory function in a visual temporal generalization task and auditory and visual reproduction tasks was examined. In all tasks the number of presentations of the standard was manipulated (1, 3, or 5), and its effect on performance was compared. In visual temporal generalization, the number of presentations of the standard did not affect the number of times the standard was correctly identified, nor did it affect the overall temporal generalization gradient. In auditory reproduction there was no effect of the number of times the standard was presented on mean reproductions. In visual reproduction, mean reproductions were shorter when the standard was only presented once; however, this effect was reduced when a visual cue was provided before the first presentation of the standard. Whilst the results of all experiments are best accounted for by the perturbation model, there appears to be some attentional benefit to multiple presentations of the standard in visual reproduction.

  9. Levetiracetam reduces abnormal network activations in temporal lobe epilepsy.

    PubMed

    Wandschneider, Britta; Stretton, Jason; Sidhu, Meneka; Centeno, Maria; Kozák, Lajos R; Symms, Mark; Thompson, Pamela J; Duncan, John S; Koepp, Matthias J

    2014-10-21

    We used functional MRI (fMRI) and a left-lateralizing verbal and a right-lateralizing visual-spatial working memory (WM) paradigm to investigate the effects of levetiracetam (LEV) on cognitive network activations in patients with drug-resistant temporal lobe epilepsy (TLE). In a retrospective study, we compared task-related fMRI activations and deactivations in 53 patients with left and 54 patients with right TLE treated with (59) or without (48) LEV. In patients on LEV, activation patterns were correlated with the daily LEV dose. We isolated task- and syndrome-specific effects. Patients on LEV showed normalization of functional network deactivations in the right temporal lobe in right TLE during the right-lateralizing visual-spatial task and in the left temporal lobe in left TLE during the verbal task. In a post hoc analysis, a significant dose-dependent effect was demonstrated in right TLE during the visual-spatial WM task: the lower the LEV dose, the greater the abnormal right hippocampal activation. At a less stringent threshold (p < 0.05, uncorrected for multiple comparisons), a similar dose effect was observed in left TLE during the verbal task: both hippocampi were more abnormally activated in patients with lower doses, but more prominently on the left. Our findings suggest that LEV is associated with restoration of normal activation patterns. Longitudinal studies are necessary to establish whether the neural patterns translate to drug response. This study provides Class III evidence that in patients with drug-resistant TLE, levetiracetam has a dose-dependent facilitation of deactivation of mesial temporal structures. © 2014 American Academy of Neurology.

  10. Magnocellular-dorsal pathway and sub-lexical route in developmental dyslexia

    PubMed Central

    Gori, Simone; Cecchini, Paolo; Bigoni, Anna; Molteni, Massimo; Facoetti, Andrea

    2014-01-01

    Although developmental dyslexia (DD) is frequently associated with a phonological deficit, the underlying neurobiological cause remains undetermined. Recently, a new model, called the “temporal sampling framework” (TSF), provided an innovative perspective for the study of DD. TSF suggests that deficits in syllabic perception at specific temporal frequencies are the critical basis for the poor reading performance in DD. This approach was presented as a possible neurobiological substrate of the phonological deficit of DD, but the TSF can also easily be applied to visual modality deficits. The deficit in the magnocellular-dorsal (M-D) pathway - often found in individuals with DD - fits well with a temporal oscillatory deficit specifically related to this visual pathway. This study investigated the visual M-D and parvocellular-ventral (P-V) pathways in dyslexic children and in chronological-age- and IQ-matched normally reading children by measuring sensitivity to temporal (frequency doubling illusion) and static stimuli, respectively. A specific deficit in M-D temporal oscillation was found. Importantly, the M-D deficit was selectively shown in poor phonological decoders. The M-D deficit appears to be frequent, because 75% of poor pseudo-word readers were at least 1 SD below the mean of the controls. Finally, a replication study using a new group of poor phonological decoders and reading-level controls suggested a crucial role of the M-D deficit in DD. These results showed that an M-D deficit might impair the sub-lexical mechanisms that are critical for reading development. The possible link between these findings and the TSF is discussed. PMID:25009484

  11. A computational theory of visual receptive fields.

    PubMed

    Lindeberg, Tony

    2013-12-01

    A receptive field constitutes a region in the visual field where a visual cell or a visual operator responds to visual stimuli. This paper presents a theory for what types of receptive field profiles can be regarded as natural for an idealized vision system, given a set of structural requirements on the first stages of visual processing that reflect symmetry properties of the surrounding world. These symmetry properties include (i) covariance properties under scale changes, affine image deformations, and Galilean transformations of space-time as occur for real-world image data as well as specific requirements of (ii) temporal causality implying that the future cannot be accessed and (iii) a time-recursive updating mechanism of a limited temporal buffer of the past as is necessary for a genuine real-time system. Fundamental structural requirements are also imposed to ensure (iv) mutual consistency and a proper handling of internal representations at different spatial and temporal scales. It is shown how a set of families of idealized receptive field profiles can be derived by necessity regarding spatial, spatio-chromatic, and spatio-temporal receptive fields in terms of Gaussian kernels, Gaussian derivatives, or closely related operators. Such image filters have been successfully used as a basis for expressing a large number of visual operations in computer vision, regarding feature detection, feature classification, motion estimation, object recognition, spatio-temporal recognition, and shape estimation. Hence, the associated so-called scale-space theory constitutes a both theoretically well-founded and general framework for expressing visual operations. There are very close similarities between receptive field profiles predicted from this scale-space theory and receptive field profiles found by cell recordings in biological vision. 
Among the family of receptive field profiles derived by necessity from the assumptions, idealized models with very good qualitative agreement are obtained for (i) spatial on-center/off-surround and off-center/on-surround receptive fields in the fovea and the LGN, (ii) simple cells with spatial directional preference in V1, (iii) spatio-chromatic double-opponent neurons in V1, (iv) space-time separable spatio-temporal receptive fields in the LGN and V1, and (v) non-separable space-time tilted receptive fields in V1, all within the same unified theory. In addition, the paper presents a more general framework for relating and interpreting these receptive fields conceptually and possibly predicting new receptive field profiles as well as for pre-wiring covariance under scaling, affine, and Galilean transformations into the representations of visual stimuli. This paper describes the basic structure of the necessity results concerning receptive field profiles regarding the mathematical foundation of the theory and outlines how the proposed theory could be used in further studies and modelling of biological vision. It is also shown how receptive field responses can be interpreted physically, as the superposition of relative variations of surface structure and illumination variations, given a logarithmic brightness scale, and how receptive field measurements will be invariant under multiplicative illumination variations and exposure control mechanisms.
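    The Gaussian-derivative receptive-field models discussed in this record can be illustrated with a short sketch. The code below is not from the paper; it is a minimal plain-Python illustration (function names and parameters invented for the sketch) of how a sampled first-derivative-of-Gaussian kernel, an idealized odd-symmetric simple-cell profile in this theory, responds most strongly at a luminance edge.

```python
import math

def gaussian_kernel(sigma, order=0, radius=None):
    """Sampled 1-D Gaussian (order=0) or its first derivative (order=1)."""
    if radius is None:
        radius = int(math.ceil(3 * sigma))
    xs = list(range(-radius, radius + 1))
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    g = [norm * math.exp(-x * x / (2.0 * sigma * sigma)) for x in xs]
    if order == 0:
        return g
    # d/dx G(x; sigma) = -(x / sigma^2) * G(x; sigma)
    return [-(x / (sigma * sigma)) * gx for x, gx in zip(xs, g)]

def correlate(signal, kernel):
    """'Same'-size cross-correlation with zero padding at the borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + (k - r)
            if 0 <= j < len(signal):
                acc += w * signal[j]
        out.append(acc)
    return out

# A luminance step edge: the derivative-of-Gaussian filter responds most
# strongly right at the edge, mimicking an odd-symmetric receptive field.
edge = [0.0] * 10 + [1.0] * 10
resp = correlate(edge, gaussian_kernel(sigma=1.5, order=1))
```

    In practice such filters are applied at multiple values of sigma, which is the scale-selection idea at the heart of scale-space theory.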

  12. Distinct patterns of brain atrophy in Genetic Frontotemporal Dementia Initiative (GENFI) cohort revealed by visual rating scales.

    PubMed

    Fumagalli, Giorgio G; Basilico, Paola; Arighi, Andrea; Bocchetta, Martina; Dick, Katrina M; Cash, David M; Harding, Sophie; Mercurio, Matteo; Fenoglio, Chiara; Pietroboni, Anna M; Ghezzi, Laura; van Swieten, John; Borroni, Barbara; de Mendonça, Alexandre; Masellis, Mario; Tartaglia, Maria C; Rowe, James B; Graff, Caroline; Tagliavini, Fabrizio; Frisoni, Giovanni B; Laforce, Robert; Finger, Elizabeth; Sorbi, Sandro; Scarpini, Elio; Rohrer, Jonathan D; Galimberti, Daniela

    2018-05-24

    In patients with frontotemporal dementia, it has been shown that brain atrophy occurs earliest in the anterior cingulate, insula and frontal lobes. We used visual rating scales to investigate whether identifying atrophy in these areas may be helpful in distinguishing symptomatic patients carrying different causal mutations in the microtubule-associated protein tau (MAPT), progranulin (GRN) and chromosome 9 open reading frame (C9ORF72) genes. We also analysed asymptomatic carriers to see whether it was possible to visually identify brain atrophy before the appearance of symptoms. Magnetic resonance images of 343 subjects (63 symptomatic mutation carriers, 132 presymptomatic mutation carriers and 148 control subjects) from the Genetic Frontotemporal Dementia Initiative study were analysed by two trained raters using a protocol of six visual rating scales that identified atrophy in key regions of the brain (orbitofrontal, anterior cingulate, frontoinsula, anterior and medial temporal lobes and posterior cortical areas). Intra- and interrater agreement were greater than 0.73 for all the scales. Voxel-based morphometric analysis demonstrated a strong correlation between the visual rating scale scores and grey matter atrophy in the same region for each of the scales. Typical patterns of atrophy were identified: symmetric anterior and medial temporal lobe involvement for MAPT, asymmetric frontal and parietal loss for GRN, and a more widespread pattern for C9ORF72. Presymptomatic MAPT carriers showed greater atrophy in the medial temporal region than control subjects, but the visual rating scales could not identify presymptomatic atrophy in GRN or C9ORF72 carriers. These simple-to-use and reproducible scales may be useful tools in the clinical setting for the discrimination of different mutations of frontotemporal dementia, and they may even help to identify atrophy prior to onset in those with MAPT mutations.

  13. Brain activation for reading and listening comprehension: An fMRI study of modality effects and individual differences in language comprehension

    PubMed Central

    Buchweitz, Augusto; Mason, Robert A.; Tomitch, Lêda M. B.; Just, Marcel Adam

    2010-01-01

    The study compared the brain activation patterns associated with the comprehension of written and spoken Portuguese sentences. An fMRI study measured brain activity while participants read and listened to sentences about general world knowledge. Participants had to decide if the sentences were true or false. To mirror the transient nature of spoken sentences, visual input was presented in rapid serial visual presentation format. The results showed a common core of amodal left inferior frontal and middle temporal gyri activation, as well as modality-specific brain activation associated with listening and reading comprehension. Reading comprehension was associated with more left-lateralized activation and with left inferior occipital cortex (including fusiform gyrus) activation. Listening comprehension was associated with extensive bilateral temporal cortex activation and more overall activation of the whole cortex. Results also showed individual differences in brain activation for reading comprehension. Readers with lower working memory capacity showed more activation of right-hemisphere areas (spillover of activation) and more activation in the prefrontal cortex, potentially associated with more demand placed on executive control processes. Readers with higher working memory capacity showed more activation in a frontal-posterior network of areas (left angular and precentral gyri, and right inferior frontal gyrus). The activation of this network may be associated with phonological rehearsal of linguistic information when reading text presented in rapid serial visual format. The study demonstrates modality fingerprints for language comprehension and indicates how readers with low and high working memory capacity deal with text presented in rapid serial visual format. PMID:21526132

  14. Does the Sound of a Barking Dog Activate its Corresponding Visual Form? An fMRI Investigation of Modality-Specific Semantic Access

    PubMed Central

    Reilly, Jamie; Garcia, Amanda; Binney, Richard J.

    2016-01-01

    Much remains to be learned about the neural architecture underlying word meaning. Fully distributed models of semantic memory predict that the sound of a barking dog will conjointly engage a network of distributed sensorimotor spokes. An alternative framework holds that modality-specific features additionally converge within transmodal hubs. Participants underwent functional MRI while covertly naming familiar objects versus newly learned novel objects from only one of their constituent semantic features (visual form, characteristic sound, or point-light motion representation). Relative to the novel object baseline, familiar concepts elicited greater activation within association regions specific to that presentation modality. Furthermore, visual form elicited activation within high-level auditory association cortex. Conversely, environmental sounds elicited activation in regions proximal to visual association cortex. Both conditions commonly engaged a putative hub region within lateral anterior temporal cortex. These results support hybrid semantic models in which local hubs and distributed spokes are dually engaged in service of semantic memory. PMID:27289210

  15. 3D Shape Perception in Posterior Cortical Atrophy: A Visual Neuroscience Perspective.

    PubMed

    Gillebert, Céline R; Schaeverbeke, Jolien; Bastin, Christine; Neyens, Veerle; Bruffaerts, Rose; De Weer, An-Sofie; Seghers, Alexandra; Sunaert, Stefan; Van Laere, Koen; Versijpt, Jan; Vandenbulcke, Mathieu; Salmon, Eric; Todd, James T; Orban, Guy A; Vandenberghe, Rik

    2015-09-16

    Posterior cortical atrophy (PCA) is a rare focal neurodegenerative syndrome characterized by progressive visuoperceptual and visuospatial deficits, most often due to atypical Alzheimer's disease (AD). We applied insights from basic visual neuroscience to analyze 3D shape perception in humans affected by PCA. Thirteen PCA patients and 30 matched healthy controls participated, together with two patient control groups with diffuse Lewy body dementia (DLBD) and an amnestic-dominant phenotype of AD, respectively. The hierarchical study design consisted of 3D shape processing for 4 cues (shading, motion, texture, and binocular disparity) with corresponding 2D and elementary feature extraction control conditions. PCA and DLBD patients exhibited severe 3D shape-processing deficits, and AD patients were impaired to a lesser degree. In PCA, deficient 3D shape-from-shading was associated with volume loss in the right posterior inferior temporal cortex. This region coincided with a region of functional activation during 3D shape-from-shading in healthy controls. In PCA patients who performed the same fMRI paradigm, response amplitude during 3D shape-from-shading was reduced in this region. Gray matter volume in this region also correlated with 3D shape-from-shading in AD. 3D shape-from-disparity in PCA was associated with volume loss slightly more anteriorly in posterior inferior temporal cortex as well as in ventral premotor cortex. The findings in right posterior inferior temporal cortex and right premotor cortex are consistent with neurophysiologically based models of the functional anatomy of 3D shape processing. However, in DLBD, 3D shape deficits rely on mechanisms distinct from inferior temporal structural integrity. Posterior cortical atrophy (PCA) is a neurodegenerative syndrome characterized by progressive visuoperceptual dysfunction and most often an atypical presentation of Alzheimer's disease (AD) affecting the ventral and dorsal visual streams rather than the medial temporal system.
    We applied insights from fundamental visual neuroscience to analyze 3D shape perception in PCA. 3D shape processing was impaired beyond what could be accounted for by lower-order processing deficits. For shading and disparity, this was related to volume loss in regions previously implicated in 3D shape processing in the intact human and nonhuman primate brain. Typical amnestic-dominant AD patients also exhibited 3D shape deficits. Advanced visual neuroscience provides insight into the pathogenesis of PCA that also bears relevance for vision in typical AD. Copyright © 2015 Gillebert, Schaeverbeke et al.

  16. Comparative visual ecophysiology of mid-Atlantic temperate reef fishes

    PubMed Central

    Horodysky, Andrij Z.; Brill, Richard W.; Crawford, Kendyl C.; Seagroves, Elizabeth S.; Johnson, Andrea K.

    2013-01-01

    The absolute light sensitivities, temporal properties, and spectral sensitivities of the visual systems of three mid-Atlantic temperate reef fishes (Atlantic spadefish [Ephippidae: Chaetodipterus faber], tautog [Labridae: Tautoga onitis], and black sea bass [Serranidae: Centropristis striata]) were studied via electroretinography (ERG). Pelagic Atlantic spadefish exhibited higher temporal resolution but a narrower dynamic range than the two more demersal foragers. The higher luminous sensitivities of tautog and black sea bass were similar to other benthic and demersal coastal mid-Atlantic fishes. Flicker fusion frequency experiments revealed significant interspecific differences at maximum intensities that correlated with lifestyle and habitat. Spectral responses of the three species spanned 400–610 nm, with high likelihood of cone dichromacy providing the basis for color and contrast discrimination. Significant day-night differences in spectral responses were evident in spadefish and black sea bass but not tautog, a labrid with characteristic structure-associated nocturnal torpor. Atlantic spadefish responded to a wider range of wavelengths than did deeper-dwelling tautog or black sea bass. Collectively, these results suggest that temperate reef-associated fishes are well-adapted to their gradient of brighter to dimmer photoclimates, representative of their unique ecologies and life histories. Continuing anthropogenic degradation of water quality in coastal environments, at a pace faster than the evolution of visual systems, may however impede visual foraging and reproductive signaling in temperate reef fishes. PMID:24285711

  17. Temporal order and processing acuity of visual, auditory, and tactile perception in developmentally dyslexic young adults.

    PubMed

    Laasonen, M; Service, E; Virsu, V

    2001-12-01

    We studied the temporal acuity of 16 developmentally dyslexic young adults in three perceptual modalities. The control group consisted of 16 age- and IQ-matched normal readers. Two methods were used. In the temporal order judgment (TOJ) method, the stimuli were spatially separate fingertip indentations in the tactile system, tone bursts of different pitches in audition, and light flashes in vision. Participants indicated which one of two stimuli appeared first. To test temporal processing acuity (TPA), the same 8-msec nonspeech stimuli were presented as two parallel sequences of three stimulus pulses. Participants indicated, without order judgments, whether the pulses of the two sequences were simultaneous or nonsimultaneous. The dyslexic readers were somewhat inferior to the normal readers in all six temporal acuity tasks on average. Thus, our results agreed with the existence of a pansensory temporal processing deficit associated with dyslexia in a language with shallow orthography (Finnish) and in well-educated adults. The dyslexic and normal readers' temporal acuities overlapped so much, however, that acuity deficits alone would not allow dyslexia diagnoses. It was irrelevant whether or not the acuity task required order judgments. The groups did not differ in the nontemporal aspects of our experiments. Correlations between temporal acuity and reading-related tasks suggested that temporal acuity is associated with phonological awareness.

  18. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs

    PubMed Central

    ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110

  19. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs.

    PubMed

    Ten Oever, Sanne; Sack, Alexander T; Wheat, Katherine L; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.

  20. Effects of Hand Proximity and Movement Direction in Spatial and Temporal Gap Discrimination.

    PubMed

    Wiemers, Michael; Fischer, Martin H

    2016-01-01

    Previous research on the interplay between static manual postures and visual attention revealed enhanced visual selection near the hands (near-hand effect). During active movements there is also superior visual performance when moving toward rather than away from the stimulus (direction effect). The "modulated visual pathways" hypothesis argues that differential involvement of magno- and parvocellular visual processing streams causes the near-hand effect. The key finding supporting this hypothesis is an increase in temporal and a reduction in spatial processing in near-hand space (Gozli et al., 2012). Since this hypothesis has, so far, only been tested with static hand postures, we provide a conceptual replication of Gozli et al.'s (2012) result with moving hands, thus also probing the generality of the direction effect. Participants performed temporal or spatial gap discriminations while their right hand was moving below the display. In contrast to Gozli et al. (2012), temporal gap discrimination was superior at intermediate, rather than near, hand proximity. In spatial gap discrimination, a direction effect without a hand proximity effect suggests that pragmatic attentional maps overshadowed temporal/spatial processing biases for far/near-hand space.

  1. Electrophysiological evidence for Audio-visuo-lingual speech integration.

    PubMed

    Treille, Avril; Vilain, Coriandre; Schwartz, Jean-Luc; Hueber, Thomas; Sato, Marc

    2018-01-31

    Recent neurophysiological studies demonstrate that audio-visual speech integration partly operates through temporal expectations and speech-specific predictions. From these results, one common view is that the binding of auditory and visual (lipread) speech cues relies on their joint probability and prior associative audio-visual experience. The present EEG study examined whether visual tongue movements integrate with relevant speech sounds, despite little associative audio-visual experience between the two modalities. A second objective was to determine possible similarities and differences of audio-visual speech integration between unusual audio-visuo-lingual and classical audio-visuo-labial modalities. To this aim, participants were presented with auditory, visual, and audio-visual isolated syllables, with the visual presentation related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, with lingual and facial movements previously recorded by an ultrasound imaging system and a video camera. In line with previous EEG studies, our results revealed an amplitude decrease and a latency facilitation of P2 auditory evoked potentials in both audio-visuo-lingual and audio-visuo-labial conditions compared to the sum of unimodal conditions. These results argue against the view that auditory and visual speech cues solely integrate based on prior associative audio-visual perceptual experience. Rather, they suggest that dynamic and phonetic informational cues are sharable across sensory modalities, possibly through a cross-modal transfer of implicit articulatory motor knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Temporal and Spatial Predictability of an Irrelevant Event Differently Affect Detection and Memory of Items in a Visual Sequence

    PubMed Central

    Ohyama, Junji; Watanabe, Katsumi

    2016-01-01

    We examined how the temporal and spatial predictability of a task-irrelevant visual event affects the detection and memory of a visual item embedded in a continuously changing sequence. Participants observed 11 sequentially presented letters, during which a task-irrelevant visual event was either present or absent. Predictabilities of spatial location and temporal position of the event were controlled in a 2 × 2 design. In the spatially predictable conditions, the event occurred at the same location within the stimulus sequence or at another location, while, in the spatially unpredictable conditions, it occurred at random locations. In the temporally predictable conditions, the event timing was fixed relative to the order of the letters, while in the temporally unpredictable condition, it could not be predicted from the letter order. Participants performed a working memory task and a target detection reaction time (RT) task. Memory accuracy was higher for a letter simultaneously presented at the same location as the event in the temporally unpredictable conditions, irrespective of the spatial predictability of the event. On the other hand, the detection RTs were only faster for a letter simultaneously presented at the same location as the event when the event was both temporally and spatially predictable. Thus, to facilitate ongoing detection processes, an event must be predictable both in space and time, while memory processes are enhanced by temporally unpredictable (i.e., surprising) events. Evidently, temporal predictability has differential effects on detection and memory of a visual item embedded in a sequence of images. PMID:26869966

  3. Temporal and Spatial Predictability of an Irrelevant Event Differently Affect Detection and Memory of Items in a Visual Sequence.

    PubMed

    Ohyama, Junji; Watanabe, Katsumi

    2016-01-01

    We examined how the temporal and spatial predictability of a task-irrelevant visual event affects the detection and memory of a visual item embedded in a continuously changing sequence. Participants observed 11 sequentially presented letters, during which a task-irrelevant visual event was either present or absent. Predictabilities of spatial location and temporal position of the event were controlled in a 2 × 2 design. In the spatially predictable conditions, the event occurred at the same location within the stimulus sequence or at another location, while, in the spatially unpredictable conditions, it occurred at random locations. In the temporally predictable conditions, the event timing was fixed relative to the order of the letters, while in the temporally unpredictable condition, it could not be predicted from the letter order. Participants performed a working memory task and a target detection reaction time (RT) task. Memory accuracy was higher for a letter simultaneously presented at the same location as the event in the temporally unpredictable conditions, irrespective of the spatial predictability of the event. On the other hand, the detection RTs were only faster for a letter simultaneously presented at the same location as the event when the event was both temporally and spatially predictable. Thus, to facilitate ongoing detection processes, an event must be predictable both in space and time, while memory processes are enhanced by temporally unpredictable (i.e., surprising) events. Evidently, temporal predictability has differential effects on detection and memory of a visual item embedded in a sequence of images.

  4. Realigning Thunder and Lightning: Temporal Adaptation to Spatiotemporally Distant Events

    PubMed Central

    Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel

    2013-01-01

    The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment (SJ) task was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiments 1 and 2, when comparing the participants’ SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events). PMID:24391928

  5. The neural response in short-term visual recognition memory for perceptual conjunctions.

    PubMed

    Elliott, R; Dolan, R J

    1998-01-01

    Short-term visual memory has been widely studied in humans and animals using delayed matching paradigms. The present study used positron emission tomography (PET) to determine the neural substrates of delayed matching to sample for complex abstract patterns over a 5-s delay. More specifically, the study assessed any differential neural response associated with remembering individual perceptual properties (color only and shape only) compared to conjunctions of these properties. Significant activations associated with short-term visual memory (all memory conditions compared to perceptuomotor control) were observed in extrastriate cortex, medial and lateral parietal cortex, anterior cingulate, inferior frontal gyrus, and the thalamus. Significant deactivations were observed throughout the temporal cortex. Although the requirement to remember color compared to shape was associated with subtly different patterns of blood flow, the requirement to remember perceptual conjunctions between these features was not associated with additional specific activations. These data suggest that visual memory over a delay of the order of 5 s is mainly dependent on posterior perceptual regions of the cortex, with the exact regions depending on the perceptual aspect of the stimuli to be remembered.

  6. Early, but not late visual distractors affect movement synchronization to a temporal-spatial visual cue.

    PubMed

    Booth, Ashley J; Elliott, Mark T

    2015-01-01

    The ease of synchronizing movements to a rhythmic cue is dependent on the modality of the cue presentation: timing accuracy is much higher when synchronizing with discrete auditory rhythms than with an equivalent visual stimulus presented through flashes. However, timing accuracy is improved if the visual cue presents spatial as well as temporal information (e.g., a dot following an oscillatory trajectory). Similarly, when synchronizing with an auditory target metronome in the presence of a second visual distracting metronome, the distraction is stronger when the visual cue contains temporal-spatial information rather than temporal only. The present study investigates individuals' ability to synchronize movements to a temporal-spatial visual cue in the presence of same-modality temporal-spatial distractors. Moreover, we investigated how increasing the number of distractor stimuli impacted on maintaining synchrony with the target cue. Participants made oscillatory vertical arm movements in time with a vertically oscillating white target dot centered on a large projection screen. The target dot was surrounded by 2, 8, or 14 distractor dots, which had an identical trajectory to the target but at a phase lead or lag of 0, 100, or 200 ms. We found participants' timing performance was only affected in the phase-lead conditions and when there were large numbers of distractors present (8 and 14). This asymmetry suggests participants still rely on salient events in the stimulus trajectory to synchronize movements. Subsequently, distractions occurring in the window of attention surrounding those events have the maximum impact on timing performance.

  7. Precortical dysfunction of spatial and temporal visual processing in migraine.

    PubMed Central

    Coleston, D M; Chronicle, E; Ruddock, K H; Kennard, C

    1994-01-01

    This paper examines spatial and temporal processing in migraineurs (diagnosed according to International Headache Society criteria, 1988), using psychophysical tests that measure spatial and temporal responses. These tests are considered to specifically assess precortical mechanisms. Results suggest precortical dysfunction for processing of spatial and temporal visual stimuli in 11 migraineurs with visual aura and 13 migraineurs without aura; the two groups could not be distinguished. As precortical dysfunction seems to be common to both groups of patients, it is suggested that symptoms that are experienced by both groups, such as blurring of vision and photophobia, may have their basis at a precortical level. PMID:7931382

  8. Functional Connectivity of the Amygdala Is Disrupted in Preschool-Aged Children With Autism Spectrum Disorder.

    PubMed

    Shen, Mark D; Li, Deana D; Keown, Christopher L; Lee, Aaron; Johnson, Ryan T; Angkustsiri, Kathleen; Rogers, Sally J; Müller, Ralph-Axel; Amaral, David G; Nordahl, Christine Wu

    2016-09-01

    The objective of this study was to determine whether functional connectivity of the amygdala is altered in preschool-age children with autism spectrum disorder (ASD) and to assess the clinical relevance of observed alterations in amygdala connectivity. A resting-state functional connectivity magnetic resonance imaging study of the amygdala (and a parallel study of primary visual cortex) was conducted in 72 boys (mean age 3.5 years; n = 43 with ASD; n = 29 age-matched controls). The ASD group showed significantly weaker connectivity between the amygdala and several brain regions involved in social communication and repetitive behaviors, including bilateral medial prefrontal cortex, temporal lobes, and striatum (p < .05, corrected). Weaker connectivity between the amygdala and frontal and temporal lobes was significantly correlated with increased autism severity in the ASD group (p < .05). In a parallel analysis examining the functional connectivity of primary visual cortex, the ASD group showed significantly weaker connectivity between visual cortex and sensorimotor regions (p < .05, corrected). Weaker connectivity between visual cortex and sensorimotor regions was not correlated with core autism symptoms, but instead was correlated with increased sensory hypersensitivity in the visual/auditory domain (p < .05). These findings indicate that preschool-age children with ASD have disrupted functional connectivity between the amygdala and regions of the brain important for social communication and language, which might be clinically relevant because weaker connectivity was associated with increased autism severity. Moreover, although amygdala connectivity was associated with behavioral domains that are diagnostic of ASD, altered connectivity of primary visual cortex was related to sensory hypersensitivity. Copyright © 2016 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  9. VAUD: A Visual Analysis Approach for Exploring Spatio-Temporal Urban Data.

    PubMed

    Chen, Wei; Huang, Zhaosong; Wu, Feiran; Zhu, Minfeng; Guan, Huihua; Maciejewski, Ross

    2017-10-02

    Urban data is massive, heterogeneous, and spatio-temporal, posing a substantial challenge for visualization and analysis. In this paper, we design and implement a novel visual analytics approach, Visual Analyzer for Urban Data (VAUD), that supports the visualization, querying, and exploration of urban data. Our approach allows for cross-domain correlation from multiple data sources by leveraging spatio-temporal and social interconnectedness features. Through our approach, the analyst is able to select, filter, and aggregate across multiple data sources and extract information that would be hidden in any single data subset. To illustrate the effectiveness of our approach, we provide case studies on a real urban dataset that contains the cyber-, physical-, and social information of 14 million citizens over 22 days.
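
    The cross-domain selection and aggregation described above can be sketched as a join over shared spatio-temporal keys. The data sources and field names below are hypothetical, not taken from the paper:

```python
# Hypothetical records from two urban data sources, keyed by
# space (grid cell) and time (hour). Names are illustrative only.
taxi_trips = [
    {"cell": (12, 7), "hour": 8, "count": 40},
    {"cell": (12, 7), "hour": 9, "count": 55},
    {"cell": (3, 4), "hour": 8, "count": 10},
]
social_posts = [
    {"cell": (12, 7), "hour": 8, "posts": 120},
    {"cell": (3, 4), "hour": 8, "posts": 5},
]

def cross_domain_aggregate(a, b, key=("cell", "hour")):
    """Join two data sources on shared spatio-temporal keys and merge
    their measures, mimicking a cross-domain correlation query."""
    index = {tuple(r[k] for k in key): r for r in b}
    joined = []
    for r in a:
        k = tuple(r[field] for field in key)
        if k in index:
            merged = dict(r)
            merged.update({m: v for m, v in index[k].items() if m not in key})
            joined.append(merged)
    return joined

result = cross_domain_aggregate(taxi_trips, social_posts)
```

    Records that share a cell and hour across both sources are merged, exposing co-occurrences (here, taxi volume and social activity) that neither subset reveals alone.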

  10. Uncovering neurodevelopmental windows of susceptibility to manganese exposure using dentine microspatial analyses.

    PubMed

    Claus Henn, Birgit; Austin, Christine; Coull, Brent A; Schnaas, Lourdes; Gennings, Chris; Horton, Megan K; Hernández-Ávila, Mauricio; Hu, Howard; Téllez-Rojo, Martha Maria; Wright, Robert O; Arora, Manish

    2018-02-01

    Associations between manganese (Mn) and neurodevelopment may depend on dose and exposure timing, but most studies cannot adequately measure how exposure varies over time. We apply temporally informative tooth-matrix biomarkers to uncover windows of susceptibility in early life when Mn is associated with visual motor ability in childhood. We also explore effect modification by lead (Pb) and child sex. Participants were drawn from the ELEMENT (Early Life Exposures in MExico and NeuroToxicology) longitudinal birth cohort studies. We reconstructed the dose and timing of prenatal and early postnatal Mn and Pb exposures for 138 children by analyzing deciduous teeth using laser ablation-inductively coupled plasma-mass spectrometry. Neurodevelopment was assessed between 6 and 16 years of age using the Wide Range Assessment of Visual Motor Abilities (WRAVMA). Mn associations with total WRAVMA scores and subscales were estimated with multivariable generalized additive mixed models. We examined Mn interactions with Pb and child sex in stratified models. Levels of dentine Mn were highest in the second trimester and declined steeply over the prenatal period, with a slower rate of decline after birth. Mn was positively associated with visual spatial and total WRAVMA scores in the second trimester among children with lower (< median) tooth Pb levels: a one standard deviation (SD) increase in ln-transformed dentine Mn at 150 days before birth was associated with a 0.15 [95% CI: 0.04, 0.26] SD increase in total score. This positive association was not observed at high Pb levels. In contrast to the prenatal period, significant negative associations were found in the postnatal period from ~6 to 12 months of age, among boys only: a one SD increase in ln-transformed dentine Mn was associated with a 0.11 [95% CI: -0.001, -0.22] to 0.16 [95% CI: -0.04, -0.28] SD decrease in visual spatial score. 
Using tooth-matrix biomarkers with fine scale temporal profiles of exposure, we found discrete developmental windows in which Mn was associated with visual-spatial abilities. Our results suggest that Mn associations are driven in large part by exposure timing, with beneficial effects found for prenatal levels and toxic effects found for postnatal levels. Copyright © 2017 Elsevier Inc. All rights reserved.
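
    The effect sizes reported above (SD change in score per SD increase in ln-transformed Mn) are standardized regression slopes. A minimal sketch with simulated data follows; the simple bivariate model stands in for the paper's generalized additive mixed models:

```python
import numpy as np

# Standardized slope: regress a z-scored outcome on a z-scored
# exposure. For a bivariate model this equals the Pearson r.
# The sample below is simulated, not study data.
rng = np.random.default_rng(0)
n = 138
ln_mn = rng.normal(size=n)                        # ln-transformed dentine Mn
score = 0.15 * ln_mn + rng.normal(scale=1.0, size=n)  # outcome with a small true effect

def standardized_slope(x, y):
    """Slope of z-scored y on z-scored x (equals Pearson r)."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    return float((zx * zy).mean())

beta = standardized_slope(ln_mn, score)   # "SD increase in score per SD of ln(Mn)"
```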

  11. Reduced sensitivity of the N400 and late positive component to semantic congruity and word repetition in left temporal lobe epilepsy.

    PubMed

    Olichney, John M; Riggins, Brock R; Hillert, Dieter G; Nowacki, Ralph; Tecoma, Evelyn; Kutas, Marta; Iragui, Vicente J

    2002-07-01

    We studied 14 patients with well-characterized refractory temporal lobe epilepsy (TLE), 7 with right temporal lobe epilepsy (RTE) and 7 with left temporal lobe epilepsy (LTE), in a word repetition ERP experiment. Much prior literature supports the view that patients with left TLE are more likely to develop verbal memory deficits, often attributable to left hippocampal sclerosis. Our main objectives were to test whether abnormalities of the N400 or Late Positive Component (LPC, P600) were associated with a left temporal seizure focus or left temporal lobe dysfunction. A minimum of 19 channels of EEG/EOG data were collected while subjects performed a semantic categorization task. Auditory category statements were followed by visual target words, 50% of which were "congruous" (category exemplars) and 50% "incongruous" (non-category exemplars) with the preceding semantic context. These auditory-visual pairings were repeated pseudo-randomly at intervals ranging from approximately 10 to 140 seconds later. The ERP data were submitted to repeated-measures ANOVAs, which showed that the RTE group had generally normal effects of word repetition on the LPC and the N400. Also, the N400 component was larger to incongruous than congruous new words, as is normally the case. In contrast, the LTE group showed no statistically significant effects of either word repetition or congruity on their ERPs (N400 or LPC), suggesting that this ERP semantic categorization paradigm is sensitive to left temporal lobe dysfunction. Further studies are ongoing to determine whether these ERP abnormalities predict hippocampal sclerosis on histopathology, or outcome after anterior temporal lobectomy.

  12. Preserving information in neural transmission.

    PubMed

    Sincich, Lawrence C; Horton, Jonathan C; Sharpee, Tatyana O

    2009-05-13

    Along most neural pathways, the spike trains transmitted from one neuron to the next are altered. In the process, neurons can either achieve a more efficient stimulus representation, or extract some biologically important stimulus parameter, or succeed at both. We recorded the inputs from single retinal ganglion cells and the outputs from connected lateral geniculate neurons in the macaque to examine how visual signals are relayed from retina to cortex. We found that geniculate neurons re-encoded multiple temporal stimulus features to yield output spikes that carried more information about stimuli than was available in each input spike. The coding transformation of some relay neurons occurred with no decrement in information rate, despite output spike rates that averaged half the input spike rates. This preservation of transmitted information was achieved by the short-term summation of inputs that geniculate neurons require to spike. A reduced model of the retinal and geniculate visual responses, based on two stimulus features and their associated nonlinearities, could account for >85% of the total information available in the spike trains and the preserved information transmission. These results apply to neurons operating on a single time-varying input, suggesting that synaptic temporal integration can alter temporal receptive field properties to create a more efficient representation of visual signals in the thalamus than in the retina.

  13. Storyline Visualizations of Eye Tracking of Movie Viewing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balint, John T.; Arendt, Dustin L.; Blaha, Leslie M.

    Storyline visualizations offer an approach that promises to capture the spatio-temporal characteristics of individual observers and simultaneously illustrate emerging group behaviors. We develop a visual analytics approach for parsing, aligning, and clustering fixation sequences from eye tracking data. Visualization of the results captures the similarities and differences across a group of observers performing a common task. We apply our storyline approach to visualize gaze patterns of people watching dynamic movie clips. Storylines mitigate some of the shortcomings of existing spatio-temporal visualization techniques and, importantly, continue to highlight individual observer behavioral dynamics.
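
    The parsing-aligning-clustering step might look like the sketch below, where each observer's scanpath is reduced to a sequence of area-of-interest (AOI) labels per time bin and observers are greedily grouped by sequence similarity. The sequences, AOI labels, and threshold are invented, not from the paper:

```python
# Each observer's fixation sequence: one AOI label per time bin.
sequences = {
    "obs1": "AABBC",
    "obs2": "AABBC",
    "obs3": "AABCC",
    "obs4": "CCDDA",
}

def similarity(s, t):
    """Fraction of time bins in which two observers fixate the same AOI."""
    return sum(a == b for a, b in zip(s, t)) / max(len(s), len(t))

def cluster(seqs, threshold=0.6):
    """Greedy clustering: join an observer to the first cluster whose
    representative sequence is similar enough, else start a new one."""
    clusters = []
    for name, seq in seqs.items():
        for c in clusters:
            if similarity(seq, seqs[c[0]]) >= threshold:
                c.append(name)
                break
        else:
            clusters.append([name])
    return clusters

groups = cluster(sequences)
```

    Each resulting group becomes one bundle of lines in the storyline layout; observers whose gaze diverges from the group split off into their own line.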

  14. Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review

    PubMed Central

    Hidaka, Souta; Teramoto, Wataru; Sugita, Yoichi

    2015-01-01

    Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that multisensory information (vision, audition, tactile sensation, and so on) can perceptually interact in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view regarding crossmodal interactions holds that vision is superior to audition in spatial processing, but audition is dominant over vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound could have a driving effect on visual motion perception. Moreover, studies regarding perceptual associative learning reported that, after association is established between a sound sequence without spatial information and visual motion information, the sound sequence could trigger visual motion perception. Other sensory information, such as motor action or smell, has also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns could be observed in several brain areas, including the motion processing areas, between spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information could mutually interact in spatiotemporal processing in the perception of the external world and that common underlying perceptual and neural mechanisms may exist for spatiotemporal processing. PMID:26733827

  15. The Tölz Temporal Topography Study: mapping the visual field across the life span. Part II: cognitive factors shaping visual field maps.

    PubMed

    Poggel, Dorothe A; Treutwein, Bernhard; Calmanti, Claudia; Strasburger, Hans

    2012-08-01

    Part I described the topography of visual performance over the life span. Performance decline was explained only partly by deterioration of the optical apparatus. Part II therefore examines the influence of higher visual and cognitive functions. Visual field maps of static perimetry, double-pulse resolution (DPR), reaction times, and contrast thresholds for 95 healthy observers were correlated with measures of visual attention (alertness, divided attention, spatial cueing), visual search, and the size of the attention focus. Correlations with the attentional variables were substantial, particularly for variables of temporal processing. DPR thresholds depended on the size of the attention focus. Extracting the cognitive variables from the correlations between topographical variables and participant age substantially reduced those correlations. There is a systematic top-down influence on the aging of visual functions, particularly of temporal variables, that largely explains performance decline and the change in topography over the life span.

  16. A new visually evoked cerebral blood flow response analysis using a low-frequency estimation.

    PubMed

    Rey, Beatriz; Naranjo, Valery; Parkhutik, Vera; Tembl, José; Alcañiz, Mariano

    2010-03-01

    Transcranial Doppler (TCD) has been widely used to monitor cerebral blood flow velocity (BFV) during the performance of cognitive tasks compared with rest periods. Although one of its main advantages is its high temporal resolution, only some previous functional TCD studies have focused on the temporal evolution of the BFV signal, and none has performed a spectral analysis of the signal. In this study, maximum BFV in both posterior cerebral arteries was monitored during a visual perception task (10 cycles of alternating darkness and illumination) in 23 subjects. A peak was located in the low-frequency band of the spectrum of each subject's maximum BFV during both visual stimulation and rest periods. The frequency of this peak ranged between 0.037 and 0.098 Hz, depending on the subject, the vessel, and the experimental condition. The component of the signal at this frequency, which is associated with the slow variations caused by the visual stimuli, was estimated. In this way, the variations in BFV caused by the experimental stimuli were isolated from variations caused by other factors. This low-frequency estimation signal was used to obtain reliable parameters describing the temporal evolution and magnitude of BFV variations, thus characterizing the neurovascular coupling of the participants. Copyright 2010 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
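
    A minimal sketch of this kind of analysis follows, assuming a synthetic BFV trace, sampling rate, and stimulation frequency (the abstract does not specify the signal processing details): locate the dominant spectral peak inside the 0.037-0.098 Hz band, then reconstruct that single component to isolate stimulus-driven variation.

```python
import numpy as np

# Synthetic BFV trace: baseline + slow stimulus-locked oscillation + noise.
rng = np.random.default_rng(0)
fs = 10.0                       # sampling rate (Hz), an assumption
t = np.arange(0, 200, 1 / fs)   # 200 s recording
f_stim = 0.05                   # darkness/illumination cycle frequency (Hz)
bfv = 60 + 3 * np.sin(2 * np.pi * f_stim * t) + 0.5 * rng.standard_normal(t.size)

# Find the dominant peak within the low-frequency band.
spectrum = np.fft.rfft(bfv - bfv.mean())
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
band = (freqs >= 0.037) & (freqs <= 0.098)
peak_idx = np.flatnonzero(band)[np.argmax(np.abs(spectrum[band]))]
peak_freq = float(freqs[peak_idx])

# Keep only the peak component: the "low-frequency estimation" signal.
filtered = np.zeros_like(spectrum)
filtered[peak_idx] = spectrum[peak_idx]
low_freq_estimate = np.fft.irfft(filtered, n=t.size) + bfv.mean()
```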

  17. Anatomy of the Temporal Lobe

    PubMed Central

    Kiernan, J. A.

    2012-01-01

    Only primates have temporal lobes, which are largest in man, accommodating 17% of the cerebral cortex and including areas with auditory, olfactory, vestibular, visual and linguistic functions. The hippocampal formation, on the medial side of the lobe, includes the parahippocampal gyrus, subiculum, hippocampus, dentate gyrus, and associated white matter, notably the fimbria, whose fibres continue into the fornix. The hippocampus is an inrolled gyrus that bulges into the temporal horn of the lateral ventricle. Association fibres connect all parts of the cerebral cortex with the parahippocampal gyrus and subiculum, which in turn project to the dentate gyrus. The largest efferent projection of the subiculum and hippocampus is through the fornix to the hypothalamus. The choroid fissure, alongside the fimbria, separates the temporal lobe from the optic tract, hypothalamus and midbrain. The amygdala comprises several nuclei on the medial aspect of the temporal lobe, mostly anterior to the hippocampus and indenting the tip of the temporal horn. The amygdala receives input from the olfactory bulb and from association cortex for other modalities of sensation. Its major projections are to the septal area and prefrontal cortex, mediating emotional responses to sensory stimuli. The temporal lobe contains much subcortical white matter, with such named bundles as the anterior commissure, arcuate fasciculus, inferior longitudinal fasciculus and uncinate fasciculus, and Meyer's loop of the geniculocalcarine tract. This article also reviews arterial supply, venous drainage, and anatomical relations of the temporal lobe to adjacent intracranial and tympanic structures. PMID:22934160

  18. Cross-Modal Matching of Audio-Visual German and French Fluent Speech in Infancy

    PubMed Central

    Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun

    2014-01-01

    The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants’ audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore, providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech indicating facilitation of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life. PMID:24586651

  19. Alterations in audiovisual simultaneity perception in amblyopia

    PubMed Central

    2017-01-01

    Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged the simultaneity of a flash and a click presented with both eyes viewing. The stimulus onset asynchrony (SOA) varied from 0 ms to 450 ms for auditory-lead and visual-lead conditions. A subset of participants with amblyopia (n = 6) was tested monocularly. Compared to the control group, the auditory-lead side of the AV simultaneity window was widened by 48 ms (36%; p = 0.002), whereas that of the visual-lead side was widened by 86 ms (37%; p = 0.02). The overall mean window width was 500 ms, compared to 366 ms among controls (37% wider; p = 0.002). Among participants with amblyopia, the simultaneity window parameters were unchanged by viewing condition, but subgroup analysis revealed differential effects on the parameters by amblyopia severity, etiology, and foveal suppression status. Possible mechanisms to explain these findings include visual temporal uncertainty, interocular perceptual latency asymmetry, and disruption of normal developmental tuning of sensitivity to audiovisual asynchrony. PMID:28598996

  20. Alterations in audiovisual simultaneity perception in amblyopia.

    PubMed

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2017-01-01

    Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged the simultaneity of a flash and a click presented with both eyes viewing. The stimulus onset asynchrony (SOA) varied from 0 ms to 450 ms for auditory-lead and visual-lead conditions. A subset of participants with amblyopia (n = 6) was tested monocularly. Compared to the control group, the auditory-lead side of the AV simultaneity window was widened by 48 ms (36%; p = 0.002), whereas that of the visual-lead side was widened by 86 ms (37%; p = 0.02). The overall mean window width was 500 ms, compared to 366 ms among controls (37% wider; p = 0.002). Among participants with amblyopia, the simultaneity window parameters were unchanged by viewing condition, but subgroup analysis revealed differential effects on the parameters by amblyopia severity, etiology, and foveal suppression status. Possible mechanisms to explain these findings include visual temporal uncertainty, interocular perceptual latency asymmetry, and disruption of normal developmental tuning of sensitivity to audiovisual asynchrony.
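
    The window edges described above are the SOAs at which the proportion of "simultaneous" responses crosses 50% on the auditory-lead and visual-lead sides, and their span is the window width. A minimal sketch with invented response data (real analyses would fit a psychometric function rather than interpolate):

```python
import numpy as np

# SOAs in ms (negative = auditory lead) and the invented proportion of
# trials judged "simultaneous" at each SOA.
soas = np.array([-450, -300, -150, -50, 0, 50, 150, 300, 450])
p_simul = np.array([0.05, 0.20, 0.60, 0.90, 0.95, 0.92, 0.70, 0.25, 0.08])

def crossing(x, y, level=0.5, rising=True):
    """Linearly interpolate the SOA at which p crosses `level`."""
    for i in range(len(x) - 1):
        lo, hi = sorted((y[i], y[i + 1]))
        if lo <= level <= hi and (y[i] < y[i + 1]) == rising:
            frac = (level - y[i]) / (y[i + 1] - y[i])
            return float(x[i] + frac * (x[i + 1] - x[i]))
    return None

left = crossing(soas, p_simul, rising=True)    # auditory-lead edge
right = crossing(soas, p_simul, rising=False)  # visual-lead edge
width = right - left                           # simultaneity window width (ms)
```

    A wider window, as found in the amblyopia group, corresponds to 50% crossings pushed further out on one or both sides.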

  1. The Tölz Temporal Topography Study: mapping the visual field across the life span. Part I: the topography of light detection and temporal-information processing.

    PubMed

    Poggel, Dorothe A; Treutwein, Bernhard; Calmanti, Claudia; Strasburger, Hans

    2012-08-01

    Temporal performance parameters vary across the visual field. Their topographical distributions relative to each other and relative to basic visual performance measures and their relative change over the life span are unknown. Our goal was to characterize the topography and age-related change of temporal performance. We acquired visual field maps in 95 healthy participants (age: 10-90 years): perimetric thresholds, double-pulse resolution (DPR), reaction times (RTs), and letter contrast thresholds. DPR and perimetric thresholds increased with eccentricity and age; the periphery showed a more pronounced age-related increase than the center. RT increased only slightly and uniformly with eccentricity. It remained almost constant up to the age of 60, a marked change occurring only above 80. Overall, age was a poor predictor of functionality. Performance decline could be explained only in part by the aging of the retina and optic media. In Part II, we therefore examine higher visual and cognitive functions.

  2. Fornix and medial temporal lobe lesions lead to comparable deficits in complex visual perception.

    PubMed

    Lech, Robert K; Koch, Benno; Schwarz, Michael; Suchan, Boris

    2016-05-04

    Recent research dealing with the structures of the medial temporal lobe (MTL) has shifted away from exclusively investigating memory-related processes and has repeatedly incorporated the investigation of complex visual perception. Several studies have demonstrated that higher level visual tasks can recruit structures like the hippocampus and perirhinal cortex in order to successfully perform complex visual discriminations, leading to a perceptual-mnemonic or representational view of the medial temporal lobe. The current study employed a complex visual discrimination paradigm in two patients suffering from brain lesions with differing locations and origins. Both patients, one with extensive medial temporal lobe lesions (VG) and one with a small lesion of the anterior fornix (HJK), were impaired in complex discriminations while showing otherwise mostly intact cognitive functions. The current data confirmed previous results while also extending the perceptual-mnemonic theory of the MTL to the main output structure of the hippocampus, the fornix. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. Role of temporal processing stages by inferior temporal neurons in facial recognition.

    PubMed

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji

    2011-01-01

    In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition.

  4. Role of Temporal Processing Stages by Inferior Temporal Neurons in Facial Recognition

    PubMed Central

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji

    2011-01-01

    In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. 
These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition. PMID:21734904

  5. The Audiovisual Temporal Binding Window Narrows in Early Childhood

    ERIC Educational Resources Information Center

    Lewkowicz, David J.; Flom, Ross

    2014-01-01

    Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked…

  6. Improving exposure assessment in environmental epidemiology: Application of spatio-temporal visualization tools

    NASA Astrophysics Data System (ADS)

    Meliker, Jaymie R.; Slotnick, Melissa J.; Avruskin, Gillian A.; Kaufmann, Andrew; Jacquez, Geoffrey M.; Nriagu, Jerome O.

    2005-05-01

    A thorough assessment of human exposure to environmental agents should incorporate mobility patterns and temporal changes in human behaviors and concentrations of contaminants; yet the temporal dimension is often under-emphasized in exposure assessment endeavors, due in part to insufficient tools for visualizing and examining temporal datasets. Spatio-temporal visualization tools are valuable for integrating a temporal component, thus allowing for examination of continuous exposure histories in environmental epidemiologic investigations. An application of these tools to a bladder cancer case-control study in Michigan illustrates continuous exposure life-lines and maps that display smooth, continuous changes over time. Preliminary results suggest increased risk of bladder cancer from combined exposure to arsenic in drinking water (>25 μg/day) and heavy smoking (>30 cigarettes/day) in the 1970s and 1980s, and a possible cancer cluster around automotive, paint, and organic chemical industries in the early 1970s. These tools have broad application for examining spatially- and temporally-specific relationships between exposures to environmental risk factors and disease.
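
    An exposure life-line of the kind these tools visualize can be reconstructed from a residential timeline. The addresses, concentrations, and intake rates below are made up for illustration; the >25 μg/day threshold echoes the cutoff mentioned above:

```python
from datetime import date

# Hypothetical residential history:
# (move-in, move-out, arsenic concentration in ug/L, liters consumed/day)
residences = [
    (date(1970, 1, 1), date(1980, 1, 1), 30.0, 1.5),
    (date(1980, 1, 1), date(1990, 1, 1), 5.0, 1.5),
]

def daily_dose_series(residences):
    """Yield (start, end, ug/day) segments of the exposure life-line."""
    return [(a, b, conc * vol) for a, b, conc, vol in residences]

def years_above(residences, threshold=25.0):
    """Total years with estimated intake above `threshold` ug/day."""
    total = 0.0
    for start, end, dose in daily_dose_series(residences):
        if dose > threshold:
            total += (end - start).days / 365.25
    return total

exposed_years = years_above(residences)
```

    Plotting the segments from `daily_dose_series` against calendar time gives the continuous exposure life-line; overlaying life-lines for cases and controls is what makes temporally specific risk patterns visible.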

  7. Timing in audiovisual speech perception: A mini review and new psychophysical data.

    PubMed

    Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory

    2016-02-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35% identification of /apa/ compared to ~5% in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
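
    The classification logic can be sketched as follows: per-frame visibility masks are compared between response classes, and the frames whose visibility most changes the percept stand out in the difference image. The simulated observer, frame counts, and effect placement below are assumptions, not the study's data:

```python
import numpy as np

# Simulate the masked-frame paradigm: each trial reveals a random
# subset of video frames; the observer's response depends on whether
# the (hypothetical) critical frame was visible.
rng = np.random.default_rng(1)
n_trials, n_frames = 2000, 30
critical = 12                                    # frame carrying the visual cue
masks = rng.random((n_trials, n_frames)) < 0.5   # True = frame visible

# Simulated responses: /apa/ reported when the critical frame was
# occluded (visual cue missing), plus a small lapse rate.
reported_apa = ~masks[:, critical] | (rng.random(n_trials) < 0.1)

# Classification image: visibility difference between response classes.
ci = masks[~reported_apa].mean(axis=0) - masks[reported_apa].mean(axis=0)
most_informative = int(np.argmax(ci))
```

    Extending the difference image over pixels as well as frames yields the kind of spatiotemporal map of perceptually relevant features the study reports.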

  8. Timing in Audiovisual Speech Perception: A Mini Review and New Psychophysical Data

    PubMed Central

    Venezia, Jonathan H.; Thurman, Steven M.; Matchin, William; George, Sahara E.; Hickok, Gregory

    2015-01-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually-relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (∼35% identification of /apa/ compared to ∼5% in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually-relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (∼130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content. PMID:26669309

  9. Face adaptation aftereffects reveal anterior medial temporal cortex role in high level category representation.

    PubMed

    Furl, N; van Rijsbergen, N J; Treves, A; Dolan, R J

    2007-08-01

    Previous studies have shown reductions of the functional magnetic resonance imaging (fMRI) signal in response to repetition of specific visual stimuli. We examined how adaptation affects the neural responses associated with categorization behavior, using face adaptation aftereffects. Adaptation to a given facial category biases categorization towards non-adapted facial categories in response to presentation of ambiguous morphs. We explored a hypothesis, posed by recent psychophysical studies, that these adaptation-induced categorizations are mediated by activity in relatively advanced stages within the occipitotemporal visual processing stream. Replicating these studies, we find that adaptation to a facial expression heightens perception of non-adapted expressions. Using comparable behavioral methods, we also show that adaptation to a specific identity heightens perception of a second identity in morph faces. We show both expression and identity effects to be associated with heightened anterior medial temporal lobe activity, specifically when perceiving the non-adapted category. These regions, incorporating bilateral anterior ventral rhinal cortices, perirhinal cortex and left anterior hippocampus are regions previously implicated in high-level visual perception. These categorization effects were not evident in fusiform or occipital gyri, although activity in these regions was reduced to repeated faces. The findings suggest that adaptation-induced perception is mediated by activity in regions downstream to those showing reductions due to stimulus repetition.

  10. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    PubMed Central

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  11. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception.

    PubMed

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.

  12. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability

    PubMed Central

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-01-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403

  13. The signature of undetected change: an exploratory electrotomographic investigation of gradual change blindness.

    PubMed

    Kiat, John E; Dodd, Michael D; Belli, Robert F; Cheadle, Jacob E

    2018-05-01

Neuroimaging-based investigations of change blindness, a phenomenon in which seemingly obvious changes in visual scenes fail to be detected, have significantly advanced our understanding of visual awareness. The vast majority of prior investigations, however, utilize paradigms involving visual disruptions (e.g., intervening blank screens, saccadic movements, "mudsplashes"), making it difficult to isolate neural responses toward visual changes cleanly. To address this issue, in the present study high-density EEG data (256-channel) were collected from 25 participants using a paradigm in which visual changes were progressively introduced into detailed real-world scenes without the use of visual disruption. Oscillatory activity associated with undetected changes was contrasted with activity linked to their absence using standardized low-resolution brain electromagnetic tomography (sLORETA). Although too few detections occurred to allow for analysis of actual change detection, increased beta-2 activity in the right inferior parietal lobule (rIPL), a region repeatedly associated with change blindness in disruption paradigms, followed by increased theta activity in the right superior temporal gyrus (rSTG), was noted in undetected visual change responses relative to the absence of change. We propose that the rIPL beta-2 activity is associated with orienting attention toward visual changes, with the subsequent rise in rSTG theta activity potentially linked to updating preconscious perceptual memory representations. NEW & NOTEWORTHY This study represents the first neuroimaging-based investigation of gradual change blindness, a visual phenomenon that has significant potential to shed light on the processes underlying visual detection and conscious perception. The use of gradual change materials reflects real-world visual phenomena and allows for cleaner isolation of signals associated with the neural registration of change relative to the use of abrupt change transients.

  14. Evidence for perceptual deficits in associative visual (prosop)agnosia: a single-case study.

    PubMed

    Delvenne, Jean François; Seron, Xavier; Coyette, Françoise; Rossion, Bruno

    2004-01-01

Associative visual agnosia is classically defined as normal visual perception stripped of its meaning [Archiv für Psychiatrie und Nervenkrankheiten 21 (1890) 22/English translation: Cognitive Neuropsychol. 5 (1988) 155]: these patients cannot access their stored visual memories to categorize objects that are nonetheless perceived correctly. However, according to an influential theory of visual agnosia [Farah, Visual Agnosia: Disorders of Object Recognition and What They Tell Us about Normal Vision, MIT Press, Cambridge, MA, 1990], visual associative agnosics necessarily present perceptual deficits that are the cause of their impairment in object recognition. Here we report a detailed investigation of a patient, NS, with bilateral occipito-temporal lesions who is strongly impaired at object and face recognition. NS presents normal drawing copy and normal performance on object and face matching tasks as used in classical neuropsychological tests. However, when tested with several computer tasks using carefully controlled visual stimuli, and taking both his accuracy rate and response times into account, NS was found to perform abnormally in high-level visual processing of objects and faces. Albeit presenting a different pattern of deficits than previously described in integrative agnosic patients such as HJA and LH, his deficits were characterized by an inability to integrate individual parts into a whole percept, as suggested by his failure at processing structurally impossible three-dimensional (3D) objects, an absence of face inversion effects, and an advantage at detecting and matching single parts. Taken together, these observations question the idea of separate visual representations for object/face perception and object/face knowledge derived from investigations of visual associative (prosop)agnosia, and they raise methodological issues in the analysis of single-case studies of (prosop)agnosic patients.

  15. Novel Visualization of Large Health Related Data Sets

    DTIC Science & Technology

    2015-03-01

Health Record Data: A Systematic Review B: McPeek Hinz E, Borland D, Shah H, West V, Hammond WE. Temporal Visualization of Diabetes Mellitus via Hemoglobin ...H, Borland D, McPeek Hinz E, West V, Hammond WE. Demonstration of Temporal Visualization of Diabetes Mellitus via Hemoglobin A1C Levels E... Hemoglobin A1c Levels and Multivariate Visualization of System-Wide National Health Service Data Using Radial Coordinates. (Copies in Appendix) 4.3

  16. Creative innovation with temporal lobe epilepsy and lobectomy.

    PubMed

    Ghacibeh, Georges A; Heilman, Kenneth M

    2013-01-15

Some patients with left temporal degeneration develop visual artistic abilities. These new artistic abilities may be due to disinhibition of the visuo-spatially dominant right hemisphere. Many famous artists have had epilepsy, and it is possible that some had left temporal seizures (LTS) whose left temporal dysfunction disinhibited the right hemisphere. Alternatively, unilateral epilepsy may alter intrahemispheric connectivity, and right anterior temporal lobe seizures (RTS) may have increased these artists' right-hemisphere-mediated visual artistic creativity. To test the disinhibition versus enhanced connectivity hypotheses, we studied 9 participants with RTS and 9 with left anterior temporal seizures (LTS) who underwent unilateral lobectomy for the treatment of medically refractory epilepsy. Creativity was tested using the Torrance Test of Creative Thinking (TTCT). There were no between-group differences in either the verbal or figural scores of the TTCT, suggesting that unilateral anterior temporal ablation did not enhance visual artistic ability; however, the RTS participants' figural creativity scores were significantly higher than their verbal scores. Whereas these results fail to support the left temporal lobe disinhibition postulate of enhanced figural creativity, the finding that the patients with RTS had better figural than verbal creativity suggests that their recurrent right hemispheric seizures led to changes in right hemispheric networks that facilitated visual creativity. To obtain converging evidence, studies of RTS participants who have not undergone lobectomy will need to be performed. Published by Elsevier B.V.

  17. Cerebellar contributions to motor timing: a PET study of auditory and visual rhythm reproduction.

    PubMed

Penhune, V B; Zatorre, R J; Evans, A C

    1998-11-01

    The perception and production of temporal patterns, or rhythms, is important for both music and speech. However, the way in which the human brain achieves accurate timing of perceptual input and motor output is as yet little understood. Central control of both motor timing and perceptual timing across modalities has been linked to both the cerebellum and the basal ganglia (BG). The present study was designed to test the hypothesized central control of temporal processing and to examine the roles of the cerebellum, BG, and sensory association areas. In this positron emission tomography (PET) activation paradigm, subjects reproduced rhythms of increasing temporal complexity that were presented separately in the auditory and visual modalities. The results provide support for a supramodal contribution of the lateral cerebellar cortex and cerebellar vermis to the production of a timed motor response, particularly when it is complex and/or novel. The results also give partial support to the involvement of BG structures in motor timing, although this may be more directly related to implementation of the motor response than to timing per se. Finally, sensory association areas and the ventrolateral frontal cortex were found to be involved in modality-specific encoding and retrieval of the temporal stimuli. Taken together, these results point to the participation of a number of neural structures in the production of a timed motor response from an external stimulus. The role of the cerebellum in timing is conceptualized not as a clock or counter but simply as the structure that provides the necessary circuitry for the sensory system to extract temporal information and for the motor system to learn to produce a precisely timed response.

  18. [Neuropsychiatric background of severe drawing disturbances].

    PubMed

    Molnár, Gábor

    2008-01-01

Drawing ability is a primary human skill, which first appeared in Paleolithic art. In spite of this fact, the neuropsychology of drawing has been a neglected subject of brain research. In the Crisis Intervention Department at the Budapest Social Center (Hungary), five patients with local brain lesions were identified who had severe drawing disturbances, defined as a loss of form representation in the House-Tree-Person drawing test associated with altered figure perception. Patients underwent detailed neurological, mental and neuropsychological assessment. Computed tomography of the head was performed at different hospitals in Budapest. The House-Tree-Person drawing test was used, complemented with copy and visual memory tasks (with the Rey figure), as well as with spontaneous drawing where necessary. Severe drawing disturbances were found in patients with severe right frontal, right temporo-parietal, diffuse right fronto-parieto-temporal, and left occipito-temporal lesions, and with bilateral basal ganglia lesions with enlarged ventricles. Impairment of copying figures was sometimes seen without associated impairment of drawing on verbal instruction. Visual memory, visual images in long-term memory, visual analysis, the ability to place parts adequately into the whole representation, visuomotor transfer and perhaps the motor drawing programs can each be altered separately. Severe drawing disturbance may occur with perfectly maintained writing abilities. The data indicate that drawing ability requires the intact activity of nearly the whole brain, but it also comprises several subfunctions, which can be altered relatively independently.

  19. Spatio-temporal dependencies between hospital beds, physicians and health expenditure using visual variables and data classification in statistical table

    NASA Astrophysics Data System (ADS)

    Medyńska-Gulij, Beata; Cybulski, Paweł

    2016-06-01

This paper analyses the use of visual variables in tables of statistical data on hospital beds as an important tool for revealing spatio-temporal dependencies. It is argued that some of the conclusions drawn from the data about public health and public expenditure on health have a spatio-temporal reference. Unlike previous studies, this article combines cartographic pragmatics and spatial visualization with conclusions previously drawn in the public health literature. While significant conclusions about health care and economic factors have been highlighted in research papers, this article is the first to apply visual analysis to a statistical table together with maps, an approach called previsualisation.

  20. Memory reorganization following anterior temporal lobe resection: a longitudinal functional MRI study

    PubMed Central

    Bonelli, Silvia B.; Thompson, Pamela J.; Yogarajah, Mahinda; Powell, Robert H. W.; Samson, Rebecca S.; McEvoy, Andrew W.; Symms, Mark R.; Koepp, Matthias J.

    2013-01-01

Anterior temporal lobe resection controls seizures in 50–60% of patients with intractable temporal lobe epilepsy but may impair memory function, typically verbal memory following left, and visual memory following right anterior temporal lobe resection. Functional reorganization can occur within the ipsilateral and contralateral hemispheres. We investigated the reorganization of memory function in patients with temporal lobe epilepsy before and after left or right anterior temporal lobe resection and the efficiency of postoperative memory networks. We studied 46 patients with unilateral medial temporal lobe epilepsy (25/26 left hippocampal sclerosis, 16/20 right hippocampal sclerosis) before and after anterior temporal lobe resection on a 3 T General Electric magnetic resonance imaging scanner. All subjects had neuropsychological testing and performed a functional magnetic resonance imaging memory encoding paradigm for words, pictures and faces, testing verbal and visual memory in a single scanning session, preoperatively and again 4 months after surgery. Event-related analysis revealed that patients with left temporal lobe epilepsy had greater activation in the left posterior medial temporal lobe when successfully encoding words postoperatively than preoperatively. Greater pre- than postoperative activation in the ipsilateral posterior medial temporal lobe for encoding words correlated with better verbal memory outcome after left anterior temporal lobe resection. In contrast, greater postoperative than preoperative activation in the ipsilateral posterior medial temporal lobe correlated with worse postoperative verbal memory performance. These postoperative effects were not observed for visual memory function after right anterior temporal lobe resection. Our findings provide evidence for effective preoperative reorganization of verbal memory function to the ipsilateral posterior medial temporal lobe due to the underlying disease, suggesting that it is the capacity of the posterior remnant of the ipsilateral hippocampus rather than the functional reserve of the contralateral hippocampus that is important for maintaining verbal memory function after anterior temporal lobe resection. Early postoperative reorganization to ipsilateral posterior or contralateral medial temporal lobe structures does not underpin better performance. Additionally our results suggest that visual memory function in right temporal lobe epilepsy is affected differently by right anterior temporal lobe resection than verbal memory in left temporal lobe epilepsy. PMID:23715092

  1. Functional Architecture for Disparity in Macaque Inferior Temporal Cortex and Its Relationship to the Architecture for Faces, Color, Scenes, and Visual Field

    PubMed Central

    Verhoef, Bram-Ernst; Bohon, Kaitlin S.

    2015-01-01

    Binocular disparity is a powerful depth cue for object perception. The computations for object vision culminate in inferior temporal cortex (IT), but the functional organization for disparity in IT is unknown. Here we addressed this question by measuring fMRI responses in alert monkeys to stimuli that appeared in front of (near), behind (far), or at the fixation plane. We discovered three regions that showed preferential responses for near and far stimuli, relative to zero-disparity stimuli at the fixation plane. These “near/far” disparity-biased regions were located within dorsal IT, as predicted by microelectrode studies, and on the posterior inferotemporal gyrus. In a second analysis, we instead compared responses to near stimuli with responses to far stimuli and discovered a separate network of “near” disparity-biased regions that extended along the crest of the superior temporal sulcus. We also measured in the same animals fMRI responses to faces, scenes, color, and checkerboard annuli at different visual field eccentricities. Disparity-biased regions defined in either analysis did not show a color bias, suggesting that disparity and color contribute to different computations within IT. Scene-biased regions responded preferentially to near and far stimuli (compared with stimuli without disparity) and had a peripheral visual field bias, whereas face patches had a marked near bias and a central visual field bias. These results support the idea that IT is organized by a coarse eccentricity map, and show that disparity likely contributes to computations associated with both central (face processing) and peripheral (scene processing) visual field biases, but likely does not contribute much to computations within IT that are implicated in processing color. PMID:25926470

  2. Right hemispheric dominance of visual phenomena evoked by intracerebral stimulation of the human visual cortex.

    PubMed

    Jonas, Jacques; Frismand, Solène; Vignal, Jean-Pierre; Colnat-Coulbois, Sophie; Koessler, Laurent; Vespignani, Hervé; Rossion, Bruno; Maillard, Louis

    2014-07-01

    Electrical brain stimulation can provide important information about the functional organization of the human visual cortex. Here, we report the visual phenomena evoked by a large number (562) of intracerebral electrical stimulations performed at low-intensity with depth electrodes implanted in the occipito-parieto-temporal cortex of 22 epileptic patients. Focal electrical stimulation evoked primarily visual hallucinations with various complexities: simple (spot or blob), intermediary (geometric forms), or complex meaningful shapes (faces); visual illusions and impairments of visual recognition were more rarely observed. With the exception of the most posterior cortical sites, the probability of evoking a visual phenomenon was significantly higher in the right than the left hemisphere. Intermediary and complex hallucinations, illusions, and visual recognition impairments were almost exclusively evoked by stimulation in the right hemisphere. The probability of evoking a visual phenomenon decreased substantially from the occipital pole to the most anterior sites of the temporal lobe, and this decrease was more pronounced in the left hemisphere. The greater sensitivity of the right occipito-parieto-temporal regions to intracerebral electrical stimulation to evoke visual phenomena supports a predominant role of right hemispheric visual areas from perception to recognition of visual forms, regardless of visuospatial and attentional factors. Copyright © 2013 Wiley Periodicals, Inc.

  3. Attractive faces temporally modulate visual attention

    PubMed Central

    Nakamura, Koyo; Kawabata, Hideaki

    2014-01-01

Facial attractiveness is an important biological and social signal in social interaction. Recent research has demonstrated that an attractive face captures greater spatial attention than an unattractive face does. Little is known, however, about the temporal characteristics of visual attention for facial attractiveness. In this study, we investigated the temporal modulation of visual attention induced by facial attractiveness using rapid serial visual presentation. Fourteen male faces and two female faces were presented successively for 160 ms each, and participants were asked to identify the two female faces embedded in the series of male distractor faces. Identification of the second female target (T2) was impaired when the first target (T1) was attractive compared to neutral or unattractive faces at a 320 ms stimulus onset asynchrony (SOA); identification was improved when T1 was attractive compared to unattractive faces at a 640 ms SOA. These findings suggest that the spontaneous appraisal of facial attractiveness modulates temporal attention. PMID:24994994

  4. Blepharoplasty techniques in the management of orbito-temporal neurofibromatosis.

    PubMed

    Li, Jin; Lin, Ming; Shao, Chunyi; Ge, Shengfang; Fan, Xianqun

    2014-11-01

We aimed to present the blepharoplasty techniques we used for severe orbito-temporal neurofibromatosis (NF). A retrospective noncomparative single-center case study was undertaken on patients with orbito-temporal NF. Twenty-two patients with orbito-temporal NF treated at the Department of Ophthalmology of Shanghai Ninth People's Hospital between 2007 and 2011 participated in the study. They underwent a standard ophthalmologic assessment for orbito-temporal NF involving both the orbito-temporal soft tissue and the bony orbit. The orbits were examined with three-dimensional computed tomography (CT), and all 22 patients underwent tumor debulking, blepharoplasty, and orbital reconstruction. We modified the conventional procedures. Our reconstructive techniques included eyelid reduction; lateral canthal reattachment; for patients with collapse of the lateral orbital margin, reconstruction of the orbital margin before reattaching the lateral canthus to the implanted titanium mesh; anterior levator resection; and frontalis suspension according to preoperative levator muscle function. Visual acuity, tumor recurrence, and postoperative palpebral fissure and orbital appearance were evaluated to assess outcomes. Acceptable cosmetic results were obtained in all 22 patients after debulking of the orbito-temporal NF and surgical reconstruction. There was no loss of vision or visual impairment postoperatively. No patient displayed recurrence over a follow-up period of >1 year. Three patients with residual ptosis were successfully treated with a second ptosis repair. We believe that the blepharoplasty techniques described in the treatment of orbito-temporal NF may provide both functional and esthetic benefits. Copyright © 2014 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  5. Decoding visual object categories from temporal correlations of ECoG signals.

    PubMed

    Majima, Kei; Matsuo, Takeshi; Kawasaki, Keisuke; Kawai, Kensuke; Saito, Nobuhito; Hasegawa, Isao; Kamitani, Yukiyasu

    2014-04-15

How visual object categories are represented in the brain is one of the key questions in neuroscience. Studies on low-level visual features have shown that relative timings, or phases, of neural activity between multiple brain locations encode information. However, whether such temporal patterns of neural activity are used in the representation of visual objects is unknown. Here, we examined whether and how visual object categories could be predicted (or decoded) from temporal patterns of electrocorticographic (ECoG) signals from the temporal cortex in five patients with epilepsy. We used temporal correlations between electrodes as input features and compared the decoding performance with that of features defined by spectral power and phase from individual electrodes. While decoding accuracy using power or phase alone was significantly better than chance, correlations alone, or correlations combined with power, outperformed the other features. Decoding performance with correlations was degraded by shuffling the order of trials of the same category in each electrode, indicating that the relative time series between electrodes in each trial is critical. Analysis using a sliding time window revealed that decoding performance with correlations began to rise earlier than that with power. This earlier increase in performance was replicated by a model using phase differences to encode categories. These results suggest that activity patterns arising from interactions between multiple neuronal units carry additional information on visual object categories. Copyright © 2013 Elsevier Inc. All rights reserved.
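Why inter-electrode temporal correlations can serve as decoding features can be illustrated with a toy simulation (the electrode counts, signal model, and nearest-centroid classifier below are invented for illustration; the study itself used real ECoG recordings and statistical classifiers): two categories are given different co-fluctuation structure across electrodes, and that structure alone suffices to decode category.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_elec, n_time = 100, 8, 200

def corr_features(trial):
    # Upper-triangular pairwise Pearson correlations between electrodes
    c = np.corrcoef(trial)
    iu = np.triu_indices(n_elec, k=1)
    return c[iu]

def make_trial(category):
    # Two hypothetical categories differ only in which electrodes co-fluctuate
    base = rng.standard_normal((n_elec, n_time))
    shared = rng.standard_normal(n_time)
    if category == 0:
        base[:4] += shared          # electrodes 0-3 co-fluctuate
    else:
        base[4:] += shared          # electrodes 4-7 co-fluctuate
    return base

X = np.array([corr_features(make_trial(k % 2)) for k in range(n_trials)])
y = np.arange(n_trials) % 2

# Nearest-centroid decoding with leave-one-out cross-validation
correct = 0
for i in range(n_trials):
    keep = np.arange(n_trials) != i
    c0 = X[keep & (y == 0)].mean(axis=0)
    c1 = X[keep & (y == 1)].mean(axis=0)
    pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
    correct += pred == y[i]
accuracy = correct / n_trials
```

Because the per-trial correlation matrices differ systematically between the two categories, decoding succeeds even though single-electrode amplitude statistics are identical, mirroring the paper's point that relational timing information adds to power- and phase-based features.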

  6. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss

    PubMed Central

    Brooks, Cassandra J.; Chan, Yu Man; Anderson, Andrew J.; McKendrick, Allison M.

    2018-01-01

    Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information. PMID:29867415

  7. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss.

    PubMed

    Brooks, Cassandra J; Chan, Yu Man; Anderson, Andrew J; McKendrick, Allison M

    2018-01-01

    Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information.

  8. Visuoperceptive region atrophy independent of cognitive status in patients with Parkinson's disease with hallucinations.

    PubMed

    Goldman, Jennifer G; Stebbins, Glenn T; Dinh, Vy; Bernard, Bryan; Merkitch, Doug; deToledo-Morrell, Leyla; Goetz, Christopher G

    2014-03-01

    Visual hallucinations are frequent, disabling complications of advanced Parkinson's disease, but their neuroanatomical basis is incompletely understood. Previous structural brain magnetic resonance imaging studies suggest volume loss in the mesial temporal lobe and limbic regions in subjects with Parkinson's disease with visual hallucinations, relative to those without visual hallucinations. However, these studies have not always controlled for the presence of cognitive impairment or dementia, which are common co-morbidities of hallucinations in Parkinson's disease and whose neuroanatomical substrates may involve mesial temporal lobe and limbic regions. Therefore, we used structural magnetic resonance imaging to examine grey matter atrophy patterns associated with visual hallucinations, comparing Parkinson's disease hallucinators to Parkinson's disease non-hallucinators of comparable cognitive function. We studied 50 subjects with Parkinson's disease: 25 classified as current and chronic visual hallucinators and 25 as non-hallucinators, who were matched for cognitive status (demented or non-demented) and age (± 3 years). Subjects underwent (i) clinical evaluations; and (ii) brain MRI scans analysed using whole-brain voxel-based morphometry techniques. Clinically, the Parkinson's disease hallucinators did not differ in their cognitive classification or performance in any of the five assessed cognitive domains, compared with the non-hallucinators. The Parkinson's disease groups also did not differ significantly in age, motor severity, medication use or duration of disease. On imaging analyses, the hallucinators, all of whom experienced visual hallucinations, exhibited grey matter atrophy with significant voxel-wise differences in the cuneus, lingual and fusiform gyri, middle occipital lobe, inferior parietal lobule, and also cingulate, paracentral, and precentral gyri, compared with the non-hallucinators. Grey matter atrophy in the hallucinators occurred predominantly in brain regions responsible for processing visuoperceptual information including the ventral 'what' and dorsal 'where' pathways, which are important in object and facial recognition and identification of spatial locations of objects, respectively. Furthermore, the structural brain changes seen on magnetic resonance imaging occurred independently of cognitive function and age. Our findings suggest that when hallucinators and non-hallucinators are similar in their cognitive performance, the neural networks involving visuoperceptual pathways, rather than the mesial temporal lobe regions, distinctively contribute to the pathophysiology of visual hallucinations and may explain their predominantly visual nature in Parkinson's disease. Identification of distinct structural MRI differences associated with hallucinations in Parkinson's disease may permit earlier detection of at-risk patients and ultimately, development of therapies specifically targeting hallucinations and visuoperceptive functions.

  9. Visuoperceptive region atrophy independent of cognitive status in patients with Parkinson’s disease with hallucinations

    PubMed Central

    Stebbins, Glenn T.; Dinh, Vy; Bernard, Bryan; Merkitch, Doug; deToledo-Morrell, Leyla; Goetz, Christopher G.

    2014-01-01

    Visual hallucinations are frequent, disabling complications of advanced Parkinson’s disease, but their neuroanatomical basis is incompletely understood. Previous structural brain magnetic resonance imaging studies suggest volume loss in the mesial temporal lobe and limbic regions in subjects with Parkinson’s disease with visual hallucinations, relative to those without visual hallucinations. However, these studies have not always controlled for the presence of cognitive impairment or dementia, which are common co-morbidities of hallucinations in Parkinson’s disease and whose neuroanatomical substrates may involve mesial temporal lobe and limbic regions. Therefore, we used structural magnetic resonance imaging to examine grey matter atrophy patterns associated with visual hallucinations, comparing Parkinson’s disease hallucinators to Parkinson’s disease non-hallucinators of comparable cognitive function. We studied 50 subjects with Parkinson’s disease: 25 classified as current and chronic visual hallucinators and 25 as non-hallucinators, who were matched for cognitive status (demented or non-demented) and age (±3 years). Subjects underwent (i) clinical evaluations; and (ii) brain MRI scans analysed using whole-brain voxel-based morphometry techniques. Clinically, the Parkinson’s disease hallucinators did not differ in their cognitive classification or performance in any of the five assessed cognitive domains, compared with the non-hallucinators. The Parkinson’s disease groups also did not differ significantly in age, motor severity, medication use or duration of disease. On imaging analyses, the hallucinators, all of whom experienced visual hallucinations, exhibited grey matter atrophy with significant voxel-wise differences in the cuneus, lingual and fusiform gyri, middle occipital lobe, inferior parietal lobule, and also cingulate, paracentral, and precentral gyri, compared with the non-hallucinators. Grey matter atrophy in the hallucinators occurred predominantly in brain regions responsible for processing visuoperceptual information including the ventral ‘what’ and dorsal ‘where’ pathways, which are important in object and facial recognition and identification of spatial locations of objects, respectively. Furthermore, the structural brain changes seen on magnetic resonance imaging occurred independently of cognitive function and age. Our findings suggest that when hallucinators and non-hallucinators are similar in their cognitive performance, the neural networks involving visuoperceptual pathways, rather than the mesial temporal lobe regions, distinctively contribute to the pathophysiology of visual hallucinations and may explain their predominantly visual nature in Parkinson’s disease. Identification of distinct structural MRI differences associated with hallucinations in Parkinson’s disease may permit earlier detection of at-risk patients and ultimately, development of therapies specifically targeting hallucinations and visuoperceptive functions. PMID:24480486

  10. The anterior temporal lobes support residual comprehension in Wernicke’s aphasia

    PubMed Central

    Robson, Holly; Zahn, Roland; Keidel, James L.; Binney, Richard J.; Sage, Karen; Lambon Ralph, Matthew A.

    2014-01-01

    Wernicke’s aphasia occurs after a stroke to classical language comprehension regions in the left temporoparietal cortex. Consequently, auditory–verbal comprehension is significantly impaired in Wernicke’s aphasia but the capacity to comprehend visually presented materials (written words and pictures) is partially spared. This study used functional magnetic resonance imaging to investigate the neural basis of written word and picture semantic processing in Wernicke’s aphasia, with the wider aim of examining how the semantic system is altered after damage to the classical comprehension regions. Twelve participants with chronic Wernicke’s aphasia and 12 control participants performed semantic animate–inanimate judgements and a visual height judgement baseline task. Whole brain and region of interest analysis in Wernicke’s aphasia and control participants found that semantic judgements were underpinned by activation in the ventral and anterior temporal lobes bilaterally. The Wernicke’s aphasia group displayed an ‘over-activation’ in comparison with control participants, indicating that anterior temporal lobe regions become increasingly influential following reduction in posterior semantic resources. Semantic processing of written words in Wernicke’s aphasia was additionally supported by recruitment of the right anterior superior temporal lobe, a region previously associated with recovery from auditory-verbal comprehension impairments. Overall, the results provide support for models in which the anterior temporal lobes are crucial for multimodal semantic processing and that these regions may be accessed without support from classic posterior comprehension regions. PMID:24519979

  11. The anterior temporal lobes support residual comprehension in Wernicke's aphasia.

    PubMed

    Robson, Holly; Zahn, Roland; Keidel, James L; Binney, Richard J; Sage, Karen; Lambon Ralph, Matthew A

    2014-03-01

    Wernicke's aphasia occurs after a stroke to classical language comprehension regions in the left temporoparietal cortex. Consequently, auditory-verbal comprehension is significantly impaired in Wernicke's aphasia but the capacity to comprehend visually presented materials (written words and pictures) is partially spared. This study used functional magnetic resonance imaging to investigate the neural basis of written word and picture semantic processing in Wernicke's aphasia, with the wider aim of examining how the semantic system is altered after damage to the classical comprehension regions. Twelve participants with chronic Wernicke's aphasia and 12 control participants performed semantic animate-inanimate judgements and a visual height judgement baseline task. Whole brain and region of interest analysis in Wernicke's aphasia and control participants found that semantic judgements were underpinned by activation in the ventral and anterior temporal lobes bilaterally. The Wernicke's aphasia group displayed an 'over-activation' in comparison with control participants, indicating that anterior temporal lobe regions become increasingly influential following reduction in posterior semantic resources. Semantic processing of written words in Wernicke's aphasia was additionally supported by recruitment of the right anterior superior temporal lobe, a region previously associated with recovery from auditory-verbal comprehension impairments. Overall, the results provide support for models in which the anterior temporal lobes are crucial for multimodal semantic processing and that these regions may be accessed without support from classic posterior comprehension regions.

  12. The trait of sensory processing sensitivity and neural responses to changes in visual scenes

    PubMed Central

    Xu, Xiaomeng; Aron, Arthur; Aron, Elaine; Cao, Guikang; Feng, Tingyong; Weng, Xuchu

    2011-01-01

    This exploratory study examined the extent to which individual differences in sensory processing sensitivity (SPS), a temperament/personality trait characterized by social, emotional and physical sensitivity, are associated with neural response in visual areas in response to subtle changes in visual scenes. Sixteen participants completed the Highly Sensitive Person questionnaire, a standard measure of SPS. Subsequently, they were tested on a change detection task while undergoing functional magnetic resonance imaging (fMRI). SPS was associated with significantly greater activation in brain areas involved in high-order visual processing (i.e. right claustrum, left occipitotemporal, bilateral temporal and medial and posterior parietal regions) as well as in the right cerebellum, when detecting minor (vs major) changes in stimuli. These findings remained strong and significant after controlling for neuroticism and introversion, traits that are often correlated with SPS. These results provide the first evidence of neural differences associated with SPS, the first direct support for the sensory aspect of this trait that has been studied primarily for its social and affective implications, and preliminary evidence for heightened sensory processing in individuals high in SPS. PMID:20203139

  13. Temporal stability of visually selective responses in intracranial field potentials recorded from human occipital and temporal lobes

    PubMed Central

    Bansal, Arjun K.; Singer, Jedediah M.; Anderson, William S.; Golby, Alexandra; Madsen, Joseph R.

    2012-01-01

    The cerebral cortex needs to maintain information for long time periods while at the same time being capable of learning and adapting to changes. The degree of stability of physiological signals in the human brain in response to external stimuli over temporal scales spanning hours to days remains unclear. Here, we quantitatively assessed the stability across sessions of visually selective intracranial field potentials (IFPs) elicited by brief flashes of visual stimuli presented to 27 subjects. The interval between sessions ranged from hours to multiple days. We considered electrodes that showed robust visual selectivity to different shapes; these electrodes were typically located in the inferior occipital gyrus, the inferior temporal cortex, and the fusiform gyrus. We found that IFP responses showed a strong degree of stability across sessions. This stability was evident in averaged responses as well as single-trial decoding analyses, at the image exemplar level as well as at the category level, across different parts of visual cortex, and for three different visual recognition tasks. These results establish a quantitative evaluation of the degree of stationarity of visually selective IFP responses within and across sessions and provide a baseline for studies of cortical plasticity and for the development of brain-machine interfaces. PMID:22956795
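
    One coarse way to quantify the kind of across-session response stability described above is to correlate session-averaged responses from the same electrode. The sketch below uses hypothetical averaged IFP response vectors and a plain Pearson correlation; it is an illustrative stand-in, not the paper's actual single-trial decoding pipeline.

```python
def pearson(x, y):
    """Pearson correlation between two equal-length response vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical averaged IFP responses (arbitrary units) from one
# electrode, recorded in two sessions days apart.
session1 = [0.1, 0.9, 0.4, 0.2, 0.7]
session2 = [0.2, 1.0, 0.5, 0.1, 0.8]
r = pearson(session1, session2)
```

    A correlation near 1, repeated over electrodes, stimuli and session pairs, is one simple index of the stationarity the study reports.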

  14. Visual exploration of big spatio-temporal urban data: a study of New York City taxi trips.

    PubMed

    Ferreira, Nivan; Poco, Jorge; Vo, Huy T; Freire, Juliana; Silva, Cláudio T

    2013-12-01

    As increasing volumes of urban data are captured and become available, new opportunities arise for data-driven analysis that can lead to improvements in the lives of citizens through evidence-based decision making and policies. In this paper, we focus on a particularly important urban data set: taxi trips. Taxis are valuable sensors, and information associated with taxi trips can provide unprecedented insight into many different aspects of city life, from economic activity and human behavior to mobility patterns. But analyzing these data presents many challenges. The data are complex, containing geographical and temporal components in addition to multiple variables associated with each trip. Consequently, it is hard to specify exploratory queries and to perform comparative analyses (e.g., compare different regions over time). This problem is compounded by the size of the data: there are on average 500,000 taxi trips each day in NYC. We propose a new model that allows users to visually query taxi trips. Besides standard analytics queries, the model supports origin-destination queries that enable the study of mobility across the city. We show that this model is able to express a wide range of spatio-temporal queries, and it is also flexible in that not only can queries be composed but also different aggregations and visual representations can be applied, allowing users to explore and compare results. We have built a scalable system that implements this model which supports interactive response times; makes use of an adaptive level-of-detail rendering strategy to generate clutter-free visualization for large results; and shows hidden details to the users in a summary through the use of overlay heat maps. We present a series of case studies motivated by traffic engineers and economists that show how our model and system enable domain experts to perform tasks that were previously unattainable for them.
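
    The origin-destination queries at the heart of such a model reduce to grouping trips by (pickup, dropoff) pair within a spatio-temporal selection. The sketch below is a toy version under assumed data: the zone names, trip tuples, and the `od_counts` helper are hypothetical and do not reflect the actual NYC data schema or the paper's query language.

```python
from collections import Counter
from datetime import datetime

# Hypothetical trip records: (pickup_zone, dropoff_zone, pickup_time).
trips = [
    ("Midtown", "JFK", datetime(2013, 5, 1, 8, 15)),
    ("Midtown", "JFK", datetime(2013, 5, 1, 9, 40)),
    ("SoHo", "Midtown", datetime(2013, 5, 1, 22, 5)),
    ("Midtown", "SoHo", datetime(2013, 5, 1, 8, 50)),
]

def od_counts(trips, start_hour, end_hour):
    """Count origin-destination pairs for trips whose pickup falls in
    [start_hour, end_hour) -- the core aggregation behind an OD query."""
    counts = Counter()
    for origin, dest, t in trips:
        if start_hour <= t.hour < end_hour:
            counts[(origin, dest)] += 1
    return counts

morning = od_counts(trips, 7, 10)  # morning-rush selection
```

    A real system would additionally filter by spatial region and render the aggregate (e.g. as a heat map), but the group-and-count step is the same.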

  15. Enhancement of Temporal Resolution and BOLD Sensitivity in Real-Time fMRI using Multi-Slab Echo-Volumar Imaging

    PubMed Central

    Posse, Stefan; Ackley, Elena; Mutihac, Radu; Rick, Jochen; Shane, Matthew; Murray-Krezan, Cristina; Zaitsev, Maxim; Speck, Oliver

    2012-01-01

    In this study, a new approach to high-speed fMRI using multi-slab echo-volumar imaging (EVI) is developed that minimizes geometrical image distortion and spatial blurring, and enables nonaliased sampling of physiological signal fluctuation to increase BOLD sensitivity compared to conventional echo-planar imaging (EPI). Real-time fMRI using whole brain 4-slab EVI with 286 ms temporal resolution (4 mm isotropic voxel size) and partial brain 2-slab EVI with 136 ms temporal resolution (4×4×6 mm³ voxel size) was performed on a clinical 3 Tesla MRI scanner equipped with a 12-channel head coil. Four-slab EVI of visual and motor tasks significantly increased mean (visual: 96%, motor: 66%) and maximum t-score (visual: 263%, motor: 124%) and mean (visual: 59%, motor: 131%) and maximum (visual: 29%, motor: 67%) BOLD signal amplitude compared with EPI. Time domain moving average filtering (2 s width) to suppress physiological noise from cardiac and respiratory fluctuations further improved mean (visual: 196%, motor: 140%) and maximum (visual: 384%, motor: 200%) t-scores and increased extents of activation (visual: 73%, motor: 70%) compared to EPI. Similar sensitivity enhancement, which is attributed to the high sampling rate at only moderately reduced temporal signal-to-noise ratio (mean: −52%) and longer sampling of the BOLD effect in the echo-time domain compared to EPI, was measured in auditory cortex. Two-slab EVI further improved temporal resolution for measuring task-related activation and enabled mapping of five major resting state networks (RSNs) in individual subjects in 5 min scans. The bilateral sensorimotor, the default mode and the occipital RSNs were detectable in time frames as short as 75 s. In conclusion, the high sampling rate of real-time multi-slab EVI significantly improves sensitivity for studying the temporal dynamics of hemodynamic responses and for characterizing functional networks at high field strength in short measurement times. PMID:22398395
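
    The 2 s time-domain moving-average filter used above to suppress cardiac and respiratory fluctuations can be sketched as a centered sliding mean. The window arithmetic and the toy time series below are illustrative assumptions, not the study's actual preprocessing code.

```python
def moving_average(signal, window):
    """Centered moving average with edge truncation (window in samples)."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        seg = signal[lo:hi]
        out.append(sum(seg) / len(seg))
    return out

# At the 286 ms volume rate reported above, a 2 s window spans ~7 samples.
tr = 0.286
window = round(2.0 / tr)
smoothed = moving_average([0, 0, 7, 0, 0, 0, 0, 0], window)
```

    Fluctuations faster than the window (such as cardiac pulsation at a fast sampling rate) are averaged out, while the slower task-related BOLD response survives.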

  16. Lingual and fusiform gyri in visual processing: a clinico-pathologic study of superior altitudinal hemianopia.

    PubMed Central

    Bogousslavsky, J; Miklossy, J; Deruaz, J P; Assal, G; Regli, F

    1987-01-01

    A macular-sparing superior altitudinal hemianopia with no visuo-psychic disturbance, except impaired visual learning, was associated with bilateral ischaemic necrosis of the lingual gyrus and only partial involvement of the fusiform gyrus on the left side. It is suggested that bilateral destruction of the lingual gyrus alone is not sufficient to affect complex visual processing. The fusiform gyrus probably has a critical role in colour integration, visuo-spatial processing, facial recognition and corresponding visual imagery. Involvement of the occipitotemporal projection system deep to the lingual gyri probably explained visual memory dysfunction, by a visuo-limbic disconnection. Impaired verbal memory may have been due to posterior involvement of the parahippocampal gyrus and underlying white matter, which may have disconnected the intact speech areas from the left medial temporal structures. PMID:3585386

  17. Temporal dynamics of the knowledge-mediated visual disambiguation process in humans: a magnetoencephalography study.

    PubMed

    Urakawa, Tomokazu; Ogata, Katsuya; Kimura, Takahiro; Kume, Yuko; Tobimatsu, Shozo

    2015-01-01

    Disambiguation of a noisy visual scene with prior knowledge is an indispensable task of the visual system. To adequately adapt to a dynamically changing visual environment full of noisy visual scenes, the implementation of knowledge-mediated disambiguation in the brain is imperative and essential for proceeding as fast as possible under the limited capacity of visual image processing. However, the temporal profile of the disambiguation process has not yet been fully elucidated in the brain. The present study attempted to determine how quickly knowledge-mediated disambiguation began to proceed along visual areas after the onset of a two-tone ambiguous image using magnetoencephalography with high temporal resolution. Using the predictive coding framework, we focused on activity reduction for the two-tone ambiguous image as an index of the implementation of disambiguation. Source analysis revealed that a significant activity reduction was observed in the lateral occipital area at approximately 120 ms after the onset of the ambiguous image, but not in preceding activity (about 115 ms) in the cuneus when participants perceptually disambiguated the ambiguous image with prior knowledge. These results suggested that knowledge-mediated disambiguation may be implemented as early as approximately 120 ms following an ambiguous visual scene, at least in the lateral occipital area, and provided an insight into the temporal profile of the disambiguation process of a noisy visual scene with prior knowledge. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  18. Simulating spatial and temporal context of forest management using hypothetical landscapes

    Treesearch

    Eric J. Gustafson; Thomas R. Crow

    1998-01-01

    Spatially explicit models that combine remote sensing with geographic information systems (GIS) offer great promise to land managers because they consider the arrangement of landscape elements in time and space. Their visual and geographic nature facilitates the comparison of alternative landscape designs. Among various activities associated with forest management,...

  19. [Diffuse retinal epitheliopathy].

    PubMed

    Abaloun, Yassine; Omari, Abdelhadi

    2017-01-01

    We report the case of a 52-year-old man with no previous significant medical history presenting with progressive decrease in visual acuity (VA) of the right eye evolving over 10 years. Corrected visual acuity was 2/10 - P6 in the RE and 10/10 - P2 in the LE. The examination of the anterior segment was unremarkable. Fundus examination showed alteration of the pigment epithelium (PE) in the RE associated with osteoblast-like pigment migrations involving the macula and a wide area due to gravitational descent of the superior temporal arcade onto the lower temporal quadrant. The left eye had a similar appearance, especially in the inter-papillo-macular region (A,B). Fluorescein angiography showed early hyperfluorescence areas in the PE depigmented areas associated with pigment migrations giving a comet tail appearance by gravity casting in both eyes (C,D). Optical coherence tomography (OCT) showed retrofoveolar pigment epithelial detachment (PED) at the level of the RE (E). The patient received Diamox therapy with regular monitoring to manage possible leakage points. The patient's evolution was marked by PED regression and VA improvement.

  20. How silent is silent reading? Intracerebral evidence for top-down activation of temporal voice areas during reading.

    PubMed

    Perrone-Bertolotti, Marcela; Kujala, Jan; Vidal, Juan R; Hamame, Carlos M; Ossandon, Tomas; Bertrand, Olivier; Minotti, Lorella; Kahane, Philippe; Jerbi, Karim; Lachaux, Jean-Philippe

    2012-12-05

    As you might experience it while reading this sentence, silent reading often involves an imagery speech component: we can hear our own "inner voice" pronouncing words mentally. Recent functional magnetic resonance imaging studies have associated that component with increased metabolic activity in the auditory cortex, including voice-selective areas. It remains to be determined, however, whether this activation arises automatically from early bottom-up visual inputs or whether it depends on late top-down control processes modulated by task demands. To answer this question, we collaborated with four epileptic human patients recorded with intracranial electrodes in the auditory cortex for therapeutic purposes, and measured high-frequency (50-150 Hz) "gamma" activity as a proxy of population level spiking activity. Temporal voice-selective areas (TVAs) were identified with an auditory localizer task and monitored as participants viewed words flashed on screen. We compared neural responses depending on whether words were attended or ignored and found a significant increase of neural activity in response to words, strongly enhanced by attention. In one of the patients, we could record that response at 800 ms in TVAs, but also at 700 ms in the primary auditory cortex and at 300 ms in the ventral occipital temporal cortex. Furthermore, single-trial analysis revealed a considerable jitter between activation peaks in visual and auditory cortices. Altogether, our results demonstrate that the multimodal mental experience of reading is in fact a heterogeneous complex of asynchronous neural responses, and that auditory and visual modalities often process distinct temporal frames of our environment at the same time.

  1. Brief Report: Which Came First? Exploring Crossmodal Temporal Order Judgements and Their Relationship with Sensory Reactivity in Autism and Neurotypicals

    ERIC Educational Resources Information Center

    Poole, Daniel; Gowen, Emma; Warren, Paul A.; Poliakoff, Ellen

    2017-01-01

    Previous studies have indicated that visual-auditory temporal acuity is reduced in children with autism spectrum conditions (ASC) in comparison to neurotypicals. In the present study we investigated temporal acuity for all possible bimodal pairings of visual, tactile and auditory information in adults with ASC (n = 18) and a matched control group…

  2. Intrusive Images in Psychological Disorders

    PubMed Central

    Brewin, Chris R.; Gregory, James D.; Lipton, Michelle; Burgess, Neil

    2010-01-01

    Involuntary images and visual memories are prominent in many types of psychopathology. Patients with posttraumatic stress disorder, other anxiety disorders, depression, eating disorders, and psychosis frequently report repeated visual intrusions corresponding to a small number of real or imaginary events, usually extremely vivid, detailed, and with highly distressing content. Both memory and imagery appear to rely on common networks involving medial prefrontal regions, posterior regions in the medial and lateral parietal cortices, the lateral temporal cortex, and the medial temporal lobe. Evidence from cognitive psychology and neuroscience implies distinct neural bases to abstract, flexible, contextualized representations (C-reps) and to inflexible, sensory-bound representations (S-reps). We revise our previous dual representation theory of posttraumatic stress disorder to place it within a neural systems model of healthy memory and imagery. The revised model is used to explain how the different types of distressing visual intrusions associated with clinical disorders arise, in terms of the need for correct interaction between the neural systems supporting S-reps and C-reps via visuospatial working memory. Finally, we discuss the treatment implications of the new model and relate it to existing forms of psychological therapy. PMID:20063969

  3. Hierarchical Spatio-temporal Visual Analysis of Cluster Evolution in Electrocorticography Data

    DOE PAGES

    Murugesan, Sugeerth; Bouchard, Kristofer; Chang, Edward; ...

    2016-10-02

    Here, we present ECoG ClusterFlow, a novel interactive visual analysis tool for the exploration of high-resolution Electrocorticography (ECoG) data. Our system detects and visualizes dynamic high-level structures, such as communities, using the time-varying spatial connectivity network derived from the high-resolution ECoG data. ECoG ClusterFlow provides a multi-scale visualization of the spatio-temporal patterns underlying the time-varying communities using two views: 1) an overview summarizing the evolution of clusters over time and 2) a hierarchical glyph-based technique that uses data aggregation and small multiples techniques to visualize the propagation of clusters in their spatial domain. ECoG ClusterFlow makes it possible 1) to compare the spatio-temporal evolution patterns across various time intervals, 2) to compare the temporal information at varying levels of granularity, and 3) to investigate the evolution of spatial patterns without occluding the spatial context information. Lastly, we present case studies done in collaboration with neuroscientists on our team for both simulated and real epileptic seizure data aimed at evaluating the effectiveness of our approach.
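
    Per-window community detection on a thresholded connectivity network can, in its simplest form, be approximated by connected components. The sketch below runs union-find on hypothetical edge lists for a 4-electrode network in two time windows; it is a minimal stand-in for whatever clustering ECoG ClusterFlow actually applies, chosen only to make the cluster-evolution idea concrete.

```python
def components(n, edges):
    """Label connected components of an n-node graph via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    return [find(i) for i in range(n)]

# Hypothetical thresholded connectivity for two time windows:
# clusters {0,1} and {2,3} merge into one community in window 2.
window1_edges = [(0, 1), (2, 3)]
window2_edges = [(0, 1), (1, 2), (2, 3)]
labels1 = components(4, window1_edges)
labels2 = components(4, window2_edges)
```

    Tracking how such labels split and merge from one window to the next is exactly the cluster-evolution information the tool's overview visualizes.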

  4. Frequency modulation of neural oscillations according to visual task demands.

    PubMed

    Wutz, Andreas; Melcher, David; Samaha, Jason

    2018-02-06

    Temporal integration in visual perception is thought to occur within cycles of occipital alpha-band (8-12 Hz) oscillations. Successive stimuli may be integrated when they fall within the same alpha cycle and segregated for different alpha cycles. Consequently, the speed of alpha oscillations correlates with the temporal resolution of perception, such that lower alpha frequencies provide longer time windows for perceptual integration and higher alpha frequencies correspond to faster sampling and segregation. Can the brain's rhythmic activity be dynamically controlled to adjust its processing speed according to different visual task demands? We recorded magnetoencephalography (MEG) while participants switched between task instructions for temporal integration and segregation, holding stimuli and task difficulty constant. We found that the peak frequency of alpha oscillations decreased when visual task demands required temporal integration compared with segregation. Alpha frequency was strategically modulated immediately before and during stimulus processing, suggesting a preparatory top-down source of modulation. Its neural generators were located in occipital and inferotemporal cortex. The frequency modulation was specific to alpha oscillations and did not occur in the delta (1-3 Hz), theta (3-7 Hz), beta (15-30 Hz), or gamma (30-50 Hz) frequency range. These results show that alpha frequency is under top-down control to increase or decrease the temporal resolution of visual perception.
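
    The peak-alpha measure at the center of this study can be illustrated by locating the strongest frequency within the 8-12 Hz band of a signal's spectrum. The naive DFT and the synthetic 10 Hz sinusoid below are toy assumptions; real MEG analyses would use windowed spectral estimators on sensor or source data.

```python
import math

def band_peak_frequency(signal, fs, f_lo, f_hi):
    """Return the frequency in [f_lo, f_hi] Hz with maximal DFT power."""
    n = len(signal)
    best_f, best_power = None, -1.0
    for k in range(n):
        f = k * fs / n
        if not (f_lo <= f <= f_hi):
            continue
        # Naive DFT coefficient at bin k.
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_f, best_power = f, power
    return best_f

fs = 128  # Hz, illustrative sampling rate
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]  # 1 s of 10 Hz "alpha"
peak = band_peak_frequency(sig, fs, 8, 12)
```

    A downward shift of this peak under integration instructions, relative to segregation, is the frequency modulation the study reports.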

  5. Neural correlates of auditory recognition memory in the primate dorsal temporal pole

    PubMed Central

    Ng, Chi-Wing; Plakke, Bethany

    2013-01-01

    Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects. PMID:24198324

  6. Audio-Visual Perception of 3D Cinematography: An fMRI Study Using Condition-Based and Computation-Based Analyses

    PubMed Central

    Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano

    2013-01-01

    The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard “condition-based” designs, as well as “computational” methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-sources signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli. PMID:24194828
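At its core, the "computation-based" approach described above regresses each voxel's timecourse on a time-varying stimulus feature convolved with a haemodynamic response. A minimal sketch with entirely synthetic data and a crude gamma-shaped HRF (all numbers are hypothetical; this is not the authors' actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(2)
tr, n_vols = 2.0, 200                     # repetition time (s), volumes
t = np.arange(n_vols) * tr

feature = rng.standard_normal(n_vols)     # e.g. frame-wise disparity signal
hrf = t[:16] ** 5 * np.exp(-t[:16])       # crude gamma-shaped HRF
hrf /= hrf.sum()
regressor = np.convolve(feature, hrf)[:n_vols]   # predicted BOLD shape

bold = 3.0 * regressor + rng.standard_normal(n_vols)   # synthetic voxel
X = np.column_stack([regressor, np.ones(n_vols)])      # design + intercept
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
print(round(beta[0], 1))                  # close to the true weight of 3.0
```

Voxels where the fitted weight is reliably non-zero are those whose activity "co-varies" with the feature, e.g. disparity or sound complexity.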

  7. Audio-visual perception of 3D cinematography: an fMRI study using condition-based and computation-based analyses.

    PubMed

    Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano

    2013-01-01

    The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard "condition-based" designs, as well as "computational" methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-sources signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli.

  8. Extent of resection in temporal lobectomy for epilepsy. II. Memory changes and neurologic complications.

    PubMed

    Katz, A; Awad, I A; Kong, A K; Chelune, G J; Naugle, R I; Wyllie, E; Beauchamp, G; Lüders, H

    1989-01-01

    We present correlations of extent of temporal lobectomy for intractable epilepsy with postoperative memory changes (20 cases) and abnormalities of visual field and neurologic examination (45 cases). Postoperative magnetic resonance imaging (MRI) in the coronal plane was used to quantify anteroposterior extent of resection of various quadrants of the temporal lobe, using a 20-compartment model of that structure. The Wechsler Memory Scale-Revised (WMS-R) was administered preoperatively and postoperatively. Postoperative decrease in percentage of retention of verbal material correlated with extent of medial resection of left temporal lobe, whereas decrease in percentage of retention of visual material correlated with extent of medial resection of right temporal lobe. These correlations approached but did not reach statistical significance. Extent of resection correlated significantly with the presence of visual field defect on perimetry testing but not with severity, denseness, or congruity of the defect. There was no correlation between postoperative dysphasia and extent of resection in any quadrant. Assessment of extent of resection after temporal lobectomy allows a rational interpretation of postoperative neurologic deficits in light of functional anatomy of the temporal lobe.

  9. Indirect choroidal ruptures: aetiological factors, patterns of ocular damage, and final visual outcome.

    PubMed Central

    Wood, C M; Richardson, J

    1990-01-01

    Indirect choroidal ruptures result from blunt ocular trauma and have a pathognomonic fundal appearance. We analysed a group of 30 patients with indirect choroidal ruptures with specific reference to the circumstances of the injury, the pattern of ocular damage, the cause of any visual loss, and the final visual outcome. Using this analysis we deduce a pathogenetic explanation for the characteristic fundus signs in patients with indirect choroidal ruptures. The majority of cases were young males injured during sport or by an assault; a minority were injured at work. Diffuse nonfocal impact injuries due to punches were associated with ruptures concentric with and adjacent to the optic disc. Focal impact injuries, due to projectiles, showed more extensive ocular damage. Seventeen of 30 patients regained 6/12 vision after injury. Injuries due to projectiles and temporally situated ruptures were associated with a poorer visual outcome than others. Macular damage was the commonest cause of visual loss, principally due to pigmentary maculopathy, traumatic inner retinal damage, and choroidal neovascular membranes rather than direct focal damage by the rupture. PMID:2337545

  10. Dynamic functional brain networks involved in simple visual discrimination learning.

    PubMed

    Fidalgo, Camino; Conejo, Nélida María; González-Pardo, Héctor; Arias, Jorge Luis

    2014-10-01

    Visual discrimination tasks have been widely used to evaluate many types of learning and memory processes. However, little is known about the brain regions involved at different stages of visual discrimination learning. We used cytochrome c oxidase histochemistry to evaluate changes in regional brain oxidative metabolism during visual discrimination learning in a water-T maze at different time points during training. As compared with control groups, the results of the present study reveal the gradual activation of cortical (prefrontal and temporal cortices) and subcortical brain regions (including the striatum and the hippocampus) associated with the mastery of a simple visual discrimination task. On the other hand, the brain regions involved and their functional interactions changed progressively over days of training. Regions associated with novelty, emotion, visuo-spatial orientation and motor aspects of the behavioral task seem to be relevant during the earlier phase of training, whereas a brain network comprising the prefrontal cortex was found along the whole learning process. This study highlights the relevance of functional interactions among brain regions to investigate learning and memory processes. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Circadian timed episodic-like memory - a bee knows what to do when, and also where.

    PubMed

    Pahl, Mario; Zhu, Hong; Pix, Waltraud; Tautz, Juergen; Zhang, Shaowu

    2007-10-01

    This study investigates how the colour, shape and location of patterns could be memorized within a time frame. Bees were trained to visit two Y-mazes, one of which presented yellow vertical (rewarded) versus horizontal (non-rewarded) gratings at one site in the morning, while another presented blue horizontal (rewarded) versus vertical (non-rewarded) gratings at another site in the afternoon. The bees could perform well in the learning tests and various transfer tests, in which (i) all contextual cues from the learning test were present; (ii) the colour cues of the visual patterns were removed, but the location cue, the orientation of the visual patterns and the temporal cue still existed; (iii) the location cue was removed, but other contextual cues, i.e. the colour and orientation of the visual patterns and the temporal cue still existed; (iv) the location cue and the orientation cue of the visual patterns were removed, but the colour cue and temporal cue still existed; (v) the location cue, and the colour cue of the visual patterns were removed, but the orientation cue and the temporal cue still existed. The results reveal that the honeybee can recall the memory of the correct visual patterns by using spatial and/or temporal information. The relative importance of different contextual cues is compared and discussed. The bees' ability to integrate elements of circadian time, place and visual stimuli is akin to episodic-like memory; we have therefore named this kind of memory circadian timed episodic-like memory.

  12. Frequency-following and connectivity of different visual areas in response to contrast-reversal stimulation.

    PubMed

    Stephen, Julia M; Ranken, Doug F; Aine, Cheryl J

    2006-01-01

    The sensitivity of visual areas to different temporal frequencies, as well as the functional connections between these areas, was examined using magnetoencephalography (MEG). Alternating circular sinusoids (0, 3.1, 8.7 and 14 Hz) were presented to foveal and peripheral locations in the visual field to target ventral and dorsal stream structures, respectively. It was hypothesized that higher temporal frequencies would preferentially activate dorsal stream structures. To determine the effect of frequency on the cortical response, we analyzed the late time interval (220-770 ms) using a multi-dipole spatio-temporal analysis approach to provide source locations and timecourses for each condition. As an exploratory aspect, we performed cross-correlation analysis on the source timecourses to determine which sources responded similarly within conditions. Contrary to predictions, dorsal stream areas were not activated more frequently during high temporal frequency stimulation. However, across cortical sources the frequency-following response showed a difference, with significantly higher power at the second harmonic for the 3.1 and 8.7 Hz stimulation and at the first and second harmonics for the 14 Hz stimulation, a pattern seen robustly in area V1. Cross-correlations of the source timecourses showed that both low- and high-order visual areas, including dorsal and ventral stream areas, were significantly correlated in the late time interval. The results imply that frequency information is transferred to higher-order visual areas without translation. Despite the less complex waveforms seen in the late interval of time, the cross-correlation results show that visual, temporal and parietal cortical areas are intricately involved in late-interval visual processing.
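The exploratory cross-correlation analysis mentioned above can be sketched in a few lines. Assuming two synthetic source timecourses sampled over the 220-770 ms window (sampling rate and signals are hypothetical, not the study's data), a normalized cross-correlation asks whether, and at what lag, two sources respond similarly:

```python
import numpy as np

def norm_xcorr(a, b):
    """Normalized cross-correlation over all lags (values roughly in [-1, 1])."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    cc = np.correlate(a, b, mode="full")
    lags = np.arange(-len(a) + 1, len(a))
    return lags, cc

fs = 250.0                               # assumed sampling rate (Hz)
t = np.arange(0.220, 0.770, 1 / fs)      # the late analysis window
s1 = np.sin(2 * np.pi * 6 * t)           # synthetic source timecourse
s2 = np.sin(2 * np.pi * 6 * (t - 0.02))  # same rhythm, offset by 20 ms

lags, cc = norm_xcorr(s1, s2)
best_lag = lags[np.argmax(cc)] / fs      # lag of maximal similarity (s)
print(round(abs(best_lag), 3))           # recovers a lag near 0.02 s
```

A high peak coefficient between two sources, as here, is the sense in which source pairs were reported as "significantly correlated" in the late interval.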

  13. Time perception of visual motion is tuned by the motor representation of human actions

    PubMed Central

    Gavazzi, Gioele; Bisio, Ambra; Pozzo, Thierry

    2013-01-01

    Several studies have shown that the observation of a rapidly moving stimulus dilates our perception of time. However, this effect appears to be at odds with the fact that our interactions both with the environment and with each other are temporally accurate. This work exploits this paradox to investigate whether temporally accurate perception of visual motion relies on motor representations of actions. To this aim, the stimuli were a dot moving with kinematics belonging or not to the human motor repertoire and displayed at different velocities. Participants had to replicate its duration with two tasks differing in the underlying motor plan. Results show that, independently of the task's motor plan, temporal accuracy and precision depend on the correspondence between the stimulus' kinematics and the observer's motor competencies. Our data suggest that the timing of visual motion exploits a visuomotor representation tuned by motor knowledge of human actions. PMID:23378903

  14. Relationship of Temporal Lobe Volumes to Neuropsychological Test Performance in Healthy Children

    PubMed Central

    Wells, Carolyn T.; Matson, Melissa A.; Kates, Wendy R.; Hay, Trisha; Horska, Alena

    2008-01-01

    Ecological validity of neuropsychological assessment includes the ability of tests to predict real-world functioning and/or covary with brain structures. Studies have examined the relationship between adaptive skills and test performance, with less focus on the association between regional brain volumes and neurobehavioral function in healthy children. The present study examined the relationship between temporal lobe gray matter volumes and performance on two neuropsychological tests hypothesized to measure temporal lobe functioning (Visual Perception-VP; Peabody Picture Vocabulary Test, Third Edition-PPVT-III) in 48 healthy children ages 5-18 years. After controlling for age and gender, left and right temporal and left occipital volumes were significant predictors of VP. Left and right frontal and temporal volumes were significant predictors of PPVT-III. Temporal volume emerged as the strongest lobar correlate with both tests. These results provide convergent and discriminant validity supporting VP as a measure of the “what” system; but suggest the PPVT-III as a complex measure of receptive vocabulary, potentially involving executive function demands. PMID:18513844

  15. Effects of strabismic amblyopia and strabismus without amblyopia on visuomotor behavior: III. Temporal eye-hand coordination during reaching.

    PubMed

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C; Chandrakumar, Manokaraananthan; Wong, Agnes M F

    2014-11-11

    To examine the effects of strabismic amblyopia and strabismus only, without amblyopia, on the temporal patterns of eye-hand coordination during both the planning and execution stages of visually-guided reaching. Forty-six adults (16 with strabismic amblyopia, 14 with strabismus only, and 16 visually normal) executed reach-to-touch movements toward targets presented randomly 5° or 10° to the left or right of central fixation. Viewing conditions were binocular, monocular viewing with the amblyopic eye, and monocular viewing with the fellow eye (dominant and nondominant viewing for participants without amblyopia). Temporal coordination between eye and hand movements was examined during reach planning (interval between the initiation of saccade and reaching, i.e., saccade-to-reach planning interval) and reach execution (interval between the initiation of saccade and reach peak velocity [PV], i.e., saccade-to-reach PV interval). The frequency and dynamics of secondary reach-related saccades were also examined. The temporal patterns of eye-hand coordination prior to reach initiation were comparable among participants with strabismic amblyopia, strabismus only, and visually normal adults. However, the reach acceleration phase of participants with strabismic amblyopia and those with strabismus only was longer following target fixation (saccade-to-reach PV interval) than that of visually normal participants (P < 0.05). This effect was evident under all viewing conditions. The saccade-to-reach planning interval and the saccade-to-reach PV interval were not significantly different among participants with amblyopia with different levels of acuity and stereo acuity loss. Participants with strabismic amblyopia and strabismus only initiated secondary reach-related saccades significantly more frequently than visually normal participants. The amplitude and peak velocity of these saccades were significantly greater during amblyopic eye viewing in participants with amblyopia who also had negative stereopsis. Adults with strabismic amblyopia and strabismus only showed an altered pattern of temporal eye-hand coordination during the reach acceleration phase, which might affect their ability to modify reach trajectory using early online control. Secondary reach-related saccades may provide a compensatory mechanism with which to facilitate the late online control process in order to ensure relatively good reaching performance during binocular and fellow eye viewing. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  16. Speech comprehension aided by multiple modalities: behavioural and neural interactions

    PubMed Central

    McGettigan, Carolyn; Faulkner, Andrew; Altarelli, Irene; Obleser, Jonas; Baverstock, Harriet; Scott, Sophie K.

    2014-01-01

    Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources – e.g. voice, face, gesture, linguistic context – to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension. PMID:22266262

  17. Speech comprehension aided by multiple modalities: behavioural and neural interactions.

    PubMed

    McGettigan, Carolyn; Faulkner, Andrew; Altarelli, Irene; Obleser, Jonas; Baverstock, Harriet; Scott, Sophie K

    2012-04-01

    Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources - e.g. voice, face, gesture, linguistic context - to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Temporal Binding Window of the Sound-Induced Flash Illusion in Amblyopia.

    PubMed

    Narinesingh, Cindy; Goltz, Herbert C; Wong, Agnes M F

    2017-03-01

    Amblyopia is a neurodevelopmental visual disorder caused by abnormal visual experience in childhood. In addition to known visual deficits, there is evidence for changes in audiovisual integration in amblyopia using explicit tasks. We examined audiovisual integration in amblyopia using an implicit task that is more relevant in a real-world context. A total of 11 participants with amblyopia and 16 controls were tested binocularly and monocularly on the sound-induced flash illusion, in which flashes and beeps are presented concurrently and the perceived number of flashes is influenced by the number of beeps. The task used 1 to 2 rapid peripheral flashes presented with 0 to 2 beeps, at 5 stimulus onset asynchronies, that is, beep (-200 milliseconds, -100 milliseconds) or flash leading (100 milliseconds, 200 milliseconds) or simultaneous (0 milliseconds). Participants reported the number of perceived flashes. Susceptibility was indicated by a "2 flashes" response to "fission" (1 flash, 2 beeps) or "1 flash" to "fusion" (2 flashes, 1 beep). For fission with the beep leading during binocular viewing, controls showed an expected decrease in illusion strength as stimulus onset asynchronies increased, whereas the illusion strength remained constant in participants with amblyopia, indicating a wider temporal binding window in amblyopia (P = 0.007). For fusion, participants with amblyopia showed reduced illusion strength during amblyopic eye viewing (P = 0.044) with the flash leading. Amblyopia is associated with the widening of the temporal binding window, specifically for fission when viewing binocularly with the beep leading. This suggests a developmental adaptation to delayed amblyopic eye visual processing to optimize audiovisual integration.
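The susceptibility measure described above is straightforward to compute from raw trial records. A minimal sketch with hypothetical data: for "fission" trials, illusion strength at each stimulus onset asynchrony (SOA) is the proportion of trials on which a single flash paired with two beeps was reported as two flashes.

```python
from collections import defaultdict

# (soa_ms, reported_flashes) for fission trials; negative SOA = beep leads.
# These trial records are invented for illustration.
trials = [(-200, 2), (-200, 1), (-100, 2), (-100, 2),
          (0, 2), (0, 2), (100, 2), (100, 1), (200, 1), (200, 1)]

counts = defaultdict(lambda: [0, 0])        # soa -> [n_illusion, n_total]
for soa, report in trials:
    counts[soa][0] += report == 2           # "2 flashes" on a 1-flash trial
    counts[soa][1] += 1

strength = {soa: n_ill / n for soa, (n_ill, n) in sorted(counts.items())}
print(strength)   # {-200: 0.5, -100: 1.0, 0: 1.0, 100: 0.5, 200: 0.0}
```

A widened temporal binding window, as reported for amblyopia, would show up here as illusion strength staying high at the largest |SOA| values instead of falling off.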

  19. Endogenous Sequential Cortical Activity Evoked by Visual Stimuli

    PubMed Central

    Miller, Jae-eun Kang; Hamm, Jordan P.; Jackson, Jesse; Yuste, Rafael

    2015-01-01

    Although the functional properties of individual neurons in primary visual cortex have been studied intensely, little is known about how neuronal groups could encode changing visual stimuli using temporal activity patterns. To explore this, we used in vivo two-photon calcium imaging to record the activity of neuronal populations in primary visual cortex of awake mice in the presence and absence of visual stimulation. Multidimensional analysis of the network activity allowed us to identify neuronal ensembles defined as groups of cells firing in synchrony. These synchronous groups of neurons were themselves activated in sequential temporal patterns, which repeated at much higher proportions than chance and were triggered by specific visual stimuli such as natural visual scenes. Interestingly, sequential patterns were also present in recordings of spontaneous activity without any sensory stimulation and were accompanied by precise firing sequences at the single-cell level. Moreover, intrinsic dynamics could be used to predict the occurrence of future neuronal ensembles. Our data demonstrate that visual stimuli recruit similar sequential patterns to the ones observed spontaneously, consistent with the hypothesis that already existing Hebbian cell assemblies firing in predefined temporal sequences could be the microcircuit substrate that encodes visual percepts changing in time. PMID:26063915

  20. Age-Related Changes in Temporal Allocation of Visual Attention: Evidence from the Rapid Serial Visual Presentation (RSVP) Paradigm

    ERIC Educational Resources Information Center

    Berger, Carole; Valdois, Sylviane; Lallier, Marie; Donnadieu, Sophie

    2015-01-01

    The present study explored the temporal allocation of attention in groups of 8-year-old children, 10-year-old children, and adults performing a rapid serial visual presentation task. In a dual-condition task, participants had to detect a briefly presented target (T2) after identifying an initial target (T1) embedded in a random series of…

  1. Statistical learning of multisensory regularities is enhanced in musicians: An MEG study.

    PubMed

    Paraskevopoulos, Evangelos; Chalas, Nikolas; Kartsidis, Panagiotis; Wollbrink, Andreas; Bamidis, Panagiotis

    2018-07-15

    The present study used magnetoencephalography (MEG) to identify the neural correlates of audiovisual statistical learning, while disentangling the differential contributions of uni- and multi-modal statistical mismatch responses in humans. The applied paradigm was based on a combination of a statistical learning paradigm and a multisensory oddball paradigm, combining an audiovisual, an auditory and a visual stimulation stream, along with the corresponding deviances. Plasticity effects due to musical expertise were investigated by comparing the behavioral and MEG responses of musicians to those of non-musicians. The behavioral results indicated that the learning was successful for both musicians and non-musicians. The unimodal MEG responses are consistent with previous studies, revealing the contribution of Heschl's gyrus for the identification of auditory statistical mismatches and the contribution of medial temporal and visual association areas for the visual modality. The cortical network underlying audiovisual statistical learning was found to be partly common and partly distinct from the corresponding unimodal networks, comprising right temporal and left inferior frontal sources. Musicians showed enhanced activation in the superior temporal and superior frontal gyrus. Connectivity and information processing flow amongst the sources comprising the cortical network of audiovisual statistical learning, as estimated by transfer entropy, were reorganized in musicians, indicating enhanced top-down processing. This neuroplastic effect showed a cross-modal stability between the auditory and audiovisual modalities. Copyright © 2018 Elsevier Inc. All rights reserved.
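Transfer entropy, the directed connectivity measure named in this record, quantifies how much the past of one signal improves prediction of another signal's future beyond that signal's own past. A didactic plug-in estimator for binarized activity with history length 1 (MEG source analyses use far more sophisticated estimators; this sketch only shows the definition at work):

```python
import math
from collections import Counter

def transfer_entropy(x, y):
    """TE(X -> Y) in bits for two equal-length binary sequences, history 1."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_past, x_past)
    n = len(triples)
    c_xyz = Counter(triples)                     # joint counts
    c_yx = Counter((yp, xp) for _, yp, xp in triples)
    c_yy = Counter((yn, yp) for yn, yp, _ in triples)
    c_y = Counter(yp for _, yp, _ in triples)
    te = 0.0
    for (yn, yp, xp), c in c_xyz.items():
        # ratio of p(y_next | y_past, x_past) to p(y_next | y_past)
        te += (c / n) * math.log2((c / c_yx[(yp, xp)]) / (c_yy[(yn, yp)] / c_y[yp]))
    return te

x = [0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0] * 20    # driving sequence
y = [0] + x[:-1]                                  # y copies x one step later
print(round(transfer_entropy(x, y), 2))           # 0.92
```

Because x fully determines y's next value here, the estimate equals the conditional entropy H(y_next | y_past); an asymmetry between TE(X→Y) and TE(Y→X) is what indicates a dominant direction of information flow.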

  2. Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment

    PubMed Central

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873

  3. Local and Global Correlations between Neurons in the Middle Temporal Area of Primate Visual Cortex.

    PubMed

    Solomon, Selina S; Chen, Spencer C; Morley, John W; Solomon, Samuel G

    2015-09-01

    In humans and other primates, the analysis of visual motion includes populations of neurons in the middle-temporal (MT) area of visual cortex. Motion analysis will be constrained by the structure of neural correlations in these populations. Here, we use multi-electrode arrays to measure correlations in the anesthetized marmoset, a New World monkey where area MT lies exposed on the cortical surface. We measured correlations in the spike count between pairs of neurons and within populations of neurons, for moving dot fields and moving gratings. Correlations were weaker in area MT than in area V1. The magnitude of correlations in area MT diminished with distance between receptive fields, and with difference in preferred direction. Correlations during presentation of moving gratings were stronger than those during presentation of moving dot fields, extended further across cortex, and were less dependent on the functional properties of neurons. Analysis of the timescales of correlation suggests the presence of two mechanisms. A local mechanism, associated with near-synchronous spiking activity, is strongest in nearby neurons with similar direction preference and is independent of visual stimulus. A global mechanism, operating over larger spatial scales and longer timescales, is independent of direction preference and is modulated by the type of visual stimulus presented. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
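The spike-count correlations measured in studies like this one are Pearson correlations of trial-by-trial spike counts between pairs of neurons. A toy sketch with synthetic Poisson counts, where a shared input induces a correlation of about 0.2 (all rates and trial numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 500
shared = rng.poisson(5, n_trials)            # common input drives correlation
counts_a = rng.poisson(20, n_trials) + shared   # neuron A spike counts
counts_b = rng.poisson(20, n_trials) + shared   # neuron B spike counts

r = np.corrcoef(counts_a, counts_b)[0, 1]    # pairwise spike-count correlation
print(round(r, 2))                           # near the expected 0.2
```

Repeating this over all pairs, binned by electrode distance or by difference in preferred direction, yields the distance- and tuning-dependence curves the abstract describes.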

  4. Intrinsic brain subsystem associated with dietary restraint, disinhibition and hunger: an fMRI study.

    PubMed

    Zhao, Jizheng; Li, Mintong; Zhang, Yi; Song, Huaibo; von Deneen, Karen M; Shi, Yinggang; Liu, Yijun; He, Dongjian

    2017-02-01

    Eating behaviors are closely related to body weight, and eating traits are depicted in three dimensions: dietary restraint, disinhibition, and hunger. The current study aims to explore whether these aspects of eating behaviors are related to intrinsic brain activation, and to further investigate the relationship between the brain activation relating to these eating traits and body weight, as well as the link between functional connectivity (FC) of the correlative brain regions and body weight. Our results demonstrated positive associations between dietary restraint and baseline activation of the frontal and temporal regions (i.e., food reward encoding) and the limbic regions (i.e., homeostatic control, including the hypothalamus). Disinhibition was positively associated with activation of the frontal motivational system (i.e., OFC) and the premotor cortex. Hunger was positively related to extensive activations in the prefrontal, temporal, and limbic regions, as well as in the cerebellum. Within the brain regions relating to dietary restraint, weight status was negatively correlated with FC of the left middle temporal gyrus and left inferior temporal gyrus, and was positively associated with FC of regions in the anterior temporal gyrus and fusiform visual cortex. Weight status was positively associated with FC within regions of the prefrontal motor cortex and the right ACC serving inhibition, and was negatively related to FC of regions in the frontal cortical-basal ganglia-thalamic circuits serving hunger control. Our data depicted an association between intrinsic brain activation and dietary restraint, disinhibition, and hunger, and linked these activations and FCs to weight status.

  5. Functional correlates of musical and visual ability in frontotemporal dementia.

    PubMed

    Miller, B L; Boone, K; Cummings, J L; Read, S L; Mishkin, F

    2000-05-01

    The emergence of new skills in the setting of dementia suggests that loss of function in one brain area can release new functions elsewhere. Our aim was to characterise 12 patients with frontotemporal dementia (FTD) who acquired, or sustained, new musical or visual abilities despite progression of their dementia. Twelve patients with FTD who acquired or maintained musical or artistic ability were compared with 46 patients with FTD in whom new or sustained ability was absent. The group with musical or visual ability performed better on visual, but worse on verbal, tasks than did the other patients with FTD. Nine had asymmetrical left anterior dysfunction. Nine showed the temporal lobe variant of FTD. Loss of function in the left anterior temporal lobe may lead to facilitation of artistic or musical skills. Patients with the left-sided temporal lobe variant of FTD offer an unexpected window into the neurological mediation of visual and musical talents.

  6. Similarity-Based Fusion of MEG and fMRI Reveals Spatio-Temporal Dynamics in Human Cortex During Visual Object Recognition

    PubMed Central

    Cichy, Radoslaw Martin; Pantazis, Dimitrios; Oliva, Aude

    2016-01-01

    Every human cognitive function, such as visual object recognition, is realized in a complex spatio-temporal activity pattern in the brain. Current brain imaging techniques in isolation cannot resolve the brain's spatio-temporal dynamics, because they provide either high spatial or temporal resolution but not both. To overcome this limitation, we developed an integration approach that uses representational similarities to combine measurements of magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) to yield a spatially and temporally integrated characterization of neuronal activation. Applying this approach to 2 independent MEG–fMRI data sets, we observed that neural activity first emerged in the occipital pole at 50–80 ms, before spreading rapidly and progressively in the anterior direction along the ventral and dorsal visual streams. Further region-of-interest analyses established that dorsal and ventral regions showed MEG–fMRI correspondence in representations later than early visual cortex. Together, these results provide a novel and comprehensive, spatio-temporally resolved view of the rapid neural dynamics during the first few hundred milliseconds of object vision. They further demonstrate the feasibility of spatially unbiased representational similarity-based fusion of MEG and fMRI, promising new insights into how the brain computes complex cognitive functions. PMID:27235099
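
    The fusion idea in this record can be sketched in a few lines: build a condition-by-condition representational dissimilarity matrix (RDM) from each modality and correlate their upper triangles, MEG time point by time point. This is a minimal illustration on random data, not the authors' pipeline; the dimensions and variable names are hypothetical:

    ```python
    import numpy as np

    def rdm(patterns):
        """Condition x condition representational dissimilarity matrix (1 - Pearson r)."""
        return 1.0 - np.corrcoef(patterns)

    def upper(m):
        """Vectorize the upper triangle of a symmetric matrix (diagonal excluded)."""
        return m[np.triu_indices_from(m, k=1)]

    def spearman(x, y):
        """Spearman rank correlation, via Pearson correlation of ranks (no ties assumed)."""
        rx = np.argsort(np.argsort(x))
        ry = np.argsort(np.argsort(y))
        return np.corrcoef(rx, ry)[0, 1]

    rng = np.random.default_rng(1)
    n_cond = 12

    # Hypothetical data: MEG patterns (conditions x sensors) at each of 5 time
    # points, and fMRI patterns (conditions x voxels) for one region of interest.
    meg = [rng.normal(size=(n_cond, 60)) for _ in range(5)]
    fmri = rng.normal(size=(n_cond, 500))

    # Fusion time course: similarity of the MEG RDM at each time point to the
    # region's fMRI RDM. A peak would indicate when that region's
    # representational geometry emerges.
    fmri_rdm = upper(rdm(fmri))
    fusion = [spearman(upper(rdm(m)), fmri_rdm) for m in meg]
    print(["%.2f" % f for f in fusion])
    ```

    On random data the time course is flat near zero; with real recordings the peak latency per region traces the spatio-temporal progression described in the abstract.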

  7. Decoding the time-course of object recognition in the human brain: From visual features to categorical decisions.

    PubMed

    Contini, Erika W; Wardle, Susan G; Carlson, Thomas A

    2017-10-01

    Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
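
    Time-resolved decoding of the kind reviewed in this record amounts to training and testing a classifier independently at each time point. Below is a minimal cross-validated sketch using a nearest-centroid classifier on simulated two-category data; all parameters are hypothetical, and published studies more commonly use linear discriminants or SVMs:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_trials, n_sensors, n_times = 40, 32, 20

    # Hypothetical M/EEG data: two object categories; a class-dependent signal
    # appears in a few sensors only in the second half of the epoch.
    labels = np.repeat([0, 1], n_trials // 2)
    data = rng.normal(size=(n_trials, n_sensors, n_times))
    data[labels == 1, :5, n_times // 2:] += 1.0  # late category signal

    def decode_timecourse(data, labels, n_folds=5):
        """Cross-validated nearest-centroid decoding accuracy at each time point."""
        acc = np.zeros(data.shape[2])
        folds = np.arange(len(labels)) % n_folds
        for t in range(data.shape[2]):
            x = data[:, :, t]
            correct = 0
            for f in range(n_folds):
                train, test = folds != f, folds == f
                c0 = x[train & (labels == 0)].mean(axis=0)
                c1 = x[train & (labels == 1)].mean(axis=0)
                pred = (np.linalg.norm(x[test] - c1, axis=1)
                        < np.linalg.norm(x[test] - c0, axis=1)).astype(int)
                correct += (pred == labels[test]).sum()
            acc[t] = correct / len(labels)
        return acc

    acc = decode_timecourse(data, labels)
    print("early: %.2f  late: %.2f"
          % (acc[:n_times // 2].mean(), acc[n_times // 2:].mean()))
    ```

    Accuracy hovers at chance early and rises once the simulated signal appears, mirroring the empirical finding that decodable information emerges and evolves over the time course.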

  8. Saliency affects feedforward more than feedback processing in early visual cortex.

    PubMed

    Emmanouil, Tatiana Aloi; Avigan, Philip; Persuh, Marjan; Ro, Tony

    2013-07-01

    Early visual cortex activity is influenced by both bottom-up and top-down factors. To investigate the influences of bottom-up (saliency) and top-down (task) factors on different stages of visual processing, we used transcranial magnetic stimulation (TMS) of areas V1/V2 to induce visual suppression at varying temporal intervals. Subjects were asked to detect and discriminate the color or the orientation of briefly presented small lines that varied in color saliency, based on color contrast with the surround. Regardless of task, color saliency modulated the magnitude of TMS-induced visual suppression, especially at earlier temporal processing intervals that reflect the feedforward stage of visual processing in V1/V2. In a second experiment, we found that our color saliency effects were also influenced by an inherent advantage of the color red relative to other hues, and that color discrimination difficulty did not affect visual suppression. These results support the notion that early visual processing is stimulus driven and that feedforward and feedback processing encode different types of information about visual scenes. They further suggest that certain hues can be prioritized over others within our visual systems by being more robustly represented during early temporal processing intervals. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Attentional Episodes in Visual Perception

    ERIC Educational Resources Information Center

    Wyble, Brad; Potter, Mary C.; Bowman, Howard; Nieuwenstein, Mark

    2011-01-01

    Is one's temporal perception of the world truly as seamless as it appears? This article presents a computationally motivated theory suggesting that visual attention samples information from temporal episodes (episodic simultaneous type/serial token model; Wyble, Bowman, & Nieuwenstein, 2009). Breaks between these episodes are punctuated by periods…

  10. Temporal Expectations Guide Dynamic Prioritization in Visual Working Memory through Attenuated α Oscillations.

    PubMed

    van Ede, Freek; Niklaus, Marcel; Nobre, Anna C

    2017-01-11

    Although working memory is generally considered a highly dynamic mnemonic store, popular laboratory tasks used to understand its psychological and neural mechanisms (such as change detection and continuous reproduction) often remain relatively "static," involving the retention of a set number of items throughout a shared delay interval. In the current study, we investigated visual working memory in a more dynamic setting, and assessed the following: (1) whether internally guided temporal expectations can dynamically and reversibly prioritize individual mnemonic items at specific times at which they are deemed most relevant; and (2) the neural substrates that support such dynamic prioritization. Participants encoded two differently colored oriented bars into visual working memory to retrieve the orientation of one bar with a precision judgment when subsequently probed. To test for the flexible temporal control to access and retrieve remembered items, we manipulated the probability for each of the two bars to be probed over time, and recorded EEG in healthy human volunteers. Temporal expectations had a profound influence on working memory performance, leading to faster access times as well as more accurate orientation reproductions for items that were probed at expected times. Furthermore, this dynamic prioritization was associated with the temporally specific attenuation of contralateral α (8-14 Hz) oscillations that, moreover, predicted working memory access times on a trial-by-trial basis. We conclude that attentional prioritization in working memory can be dynamically steered by internally guided temporal expectations, and is supported by the attenuation of α oscillations in task-relevant sensory brain areas. In dynamic, everyday-like, environments, flexible goal-directed behavior requires that mental representations that are kept in an active (working memory) store are dynamic, too. 
We investigated working memory in a more dynamic setting than is conventional, and demonstrate that expectations about when mnemonic items are most relevant can dynamically and reversibly prioritize these items in time. Moreover, we uncover a neural substrate of such dynamic prioritization in contralateral visual brain areas and show that this substrate predicts working memory retrieval times on a trial-by-trial basis. This places the experimental study of working memory, and its neuronal underpinnings, in a more dynamic and ecologically valid context, and provides new insights into the neural implementation of attentional prioritization within working memory. Copyright © 2017 van Ede et al.

  11. Duration estimates within a modality are integrated sub-optimally

    PubMed Central

    Cai, Ming Bo; Eagleman, David M.

    2015-01-01

    Perceived duration can be influenced by various properties of sensory stimuli. For example, visual stimuli of higher temporal frequency are perceived to last longer than those of lower temporal frequency. How does the brain form a representation of duration when each of two simultaneously presented stimuli influences perceived duration in different ways? To answer this question, we investigated the perceived duration of a pair of dynamic visual stimuli of different temporal frequencies in comparison to that of a single visual stimulus of either low or high temporal frequency. We found that the duration representation of simultaneously occurring visual stimuli is best described by weighting the estimates of duration based on each individual stimulus. However, the weighting performance deviates from the prediction of statistically optimal integration. In addition, we provided a Bayesian account to explain a difference in the apparent sensitivity of the psychometric curves introduced by the order in which the two stimuli are displayed in a two-alternative forced-choice task. PMID:26321965
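
    The statistically optimal integration that the observed weighting deviates from is the standard precision-weighted average of independent Gaussian estimates, with weights proportional to 1/sigma^2. A minimal sketch with hypothetical numbers:

    ```python
    import numpy as np

    def optimal_integration(estimates, sigmas):
        """Precision-weighted fusion of independent Gaussian estimates.

        Each weight is proportional to 1/sigma^2; the fused estimate has
        lower variance than either input (1/sigma_f^2 = sum of 1/sigma_i^2).
        """
        prec = 1.0 / np.asarray(sigmas, dtype=float) ** 2
        w = prec / prec.sum()
        fused_mean = np.dot(w, estimates)
        fused_sigma = np.sqrt(1.0 / prec.sum())
        return fused_mean, fused_sigma, w

    # Hypothetical single-stimulus duration estimates (ms): say the
    # high-temporal-frequency stimulus is perceived as longer but judged
    # less reliably (larger sigma).
    mean_ms, sigma_ms, weights = optimal_integration([600.0, 700.0], [40.0, 80.0])
    print(mean_ms, sigma_ms, weights)  # fused mean 620 ms, weights 0.8 / 0.2
    ```

    Comparing observed weights against this prediction is how sub-optimality of the kind reported above is typically quantified.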

  12. Functionally segregated neural substrates for arbitrary audiovisual paired-association learning.

    PubMed

    Tanabe, Hiroki C; Honda, Manabu; Sadato, Norihiro

    2005-07-06

    To clarify the neural substrates and their dynamics during crossmodal association learning, we conducted functional magnetic resonance imaging (MRI) during audiovisual paired-association learning of delayed matching-to-sample tasks. Thirty subjects were involved in the study; 15 performed an audiovisual paired-association learning task, and the remainder completed a control visuo-visual task. Each trial consisted of the successive presentation of a pair of stimuli. Subjects were asked to identify predefined audiovisual or visuo-visual pairs by trial and error. Feedback for each trial was given regardless of whether the response was correct or incorrect. During the delay period, several areas showed an increase in the MRI signal as learning proceeded: crossmodal activity increased in unimodal areas corresponding to visual or auditory areas, and polymodal responses increased in the occipitotemporal junction and parahippocampal gyrus. This pattern was not observed in the visuo-visual intramodal paired-association learning task, suggesting that crossmodal associations might be formed by binding unimodal sensory areas via polymodal regions. In both the audiovisual and visuo-visual tasks, the MRI signal in the superior temporal sulcus (STS) in response to the second stimulus and feedback peaked during the early phase of learning and then decreased, indicating that the STS might be key to the creation of paired associations, regardless of stimulus type. In contrast to the activity changes in the regions discussed above, there was constant activity in the frontoparietal circuit during the delay period in both tasks, implying that the neural substrates for the formation and storage of paired associates are distinct from working memory circuits.

  13. Impaired Driving Performance as Evidence of a Magnocellular Deficit in Dyslexia and Visual Stress.

    PubMed

    Fisher, Carri; Chekaluk, Eugene; Irwin, Julia

    2015-11-01

    High comorbidity and an overlap in symptomology have been demonstrated between dyslexia and visual stress. Several researchers have hypothesized an underlying or causal influence that may account for this relationship. The magnocellular theory of dyslexia proposes that a deficit in visuo-temporal processing can explain symptomology for both disorders. If the magnocellular theory holds true, individuals who experience symptomology for these disorders should show impairment on a visuo-temporal task, such as driving. Eighteen male participants formed the sample for this study. Self-report measures assessed dyslexia and visual stress symptomology as well as participant IQ. Participants completed a drive simulation in which errors in response to road signs were measured. Bivariate correlations revealed significant associations between scores on measures of dyslexia and visual stress. Results also demonstrated that self-reported symptomology predicts magnocellular impairment as measured by performance on a driving task. Results from this study suggest that a magnocellular deficit offers a likely explanation for individuals who report high symptomology across both conditions. While conclusions about the impact of these disorders on driving performance should not be derived from this research alone, this study provides a platform for the development of future research, utilizing a clinical population and on-road driving assessment techniques. Copyright © 2015 John Wiley & Sons, Ltd.

  14. Mate choice in the eye and ear of the beholder? Female multimodal sensory configuration influences her preferences.

    PubMed

    Ronald, Kelly L; Fernández-Juricic, Esteban; Lucas, Jeffrey R

    2018-05-16

    A common assumption in sexual selection studies is that receivers decode signal information similarly. However, receivers may vary in how they rank signallers if signal perception varies with an individual's sensory configuration. Furthermore, receivers may vary in their weighting of different elements of multimodal signals based on their sensory configuration. This could lead to complex levels of selection on signalling traits. We tested whether multimodal sensory configuration could affect preferences for multimodal signals. We used brown-headed cowbird (Molothrus ater) females to examine how auditory sensitivity and auditory filters, which influence auditory spectral and temporal resolution, affect song preferences, and how visual spatial resolution and visual temporal resolution, which influence resolution of a moving visual signal, affect visual display preferences. Our results show that multimodal sensory configuration significantly affects preferences for male displays: females with better auditory temporal resolution preferred songs that were shorter, with lower Wiener entropy, and higher frequency; and females with better visual temporal resolution preferred males with less intense visual displays. Our findings provide new insights into mate-choice decisions and receiver signal processing. Furthermore, our results challenge a long-standing assumption in animal communication which can affect how we address honest signalling, assortative mating and sensory drive. © 2018 The Author(s).

  15. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    PubMed

    Alais, David; Cass, John

    2010-06-23

    An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. 
Functionally the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.

  16. Ocular phenotypes associated with two mutations (R121W, C126X) in the Norrie disease gene.

    PubMed

    Kellner, U; Fuchs, S; Bornfeld, N; Foerster, M H; Gal, A

    1996-06-01

    To describe the ocular phenotypes associated with 2 mutations in the Norrie disease gene including a manifesting carrier. Ophthalmological examinations were performed in two affected males and one manifesting carrier. Genomic DNA was analyzed by direct sequencing of the Norrie disease gene. Family 1: A 29-year-old male had the right eye enucleated at the age of 3 years. His left eye showed severe temporal dragging of the retina and central scars. Visual acuity was 20/300. DNA analysis revealed a C-to-T transition of the first nucleotide in codon 121 predicting the replacement of arginine-121 by tryptophan (R121W). Both the mother and maternal grandmother carry the same mutation in heterozygous form. Family 2: A 3-month-old boy presented with severe temporal dragging of the retina in both eyes and subsequently developed retinal detachment. Visual acuity was limited to light perception. His mother's left eye was amaurotic and phthisic. Her right eye showed severe retinal dragging, and visual acuity was reduced to 20/60. DNA analysis revealed a T-to-A transversion of the third nucleotide in codon 126 creating a stop codon (C126X). The mother and maternal grandmother were carriers. Mutations in the Norrie disease gene can lead to retinal malformations of variable severity both in hemizygous males and manifesting carriers.

  17. Representation of visual symbols in the visual word processing network.

    PubMed

    Muayqil, Taim; Davies-Thompson, Jodie; Barton, Jason J S

    2015-03-01

    Previous studies have shown that word processing involves a predominantly left-sided occipitotemporal network. Words are a form of symbolic representation, in that they are arbitrary perceptual stimuli that represent other objects, actions or concepts. Lesions of parts of the visual word processing network can cause alexia, which can be associated with difficulty processing other types of symbols such as musical notation or road signs. We investigated whether components of the visual word processing network were also activated by other types of symbols. In 16 music-literate subjects, we defined the visual word network using fMRI and examined responses to four symbolic categories: visual words, musical notation, instructive symbols (e.g. traffic signs), and flags and logos. For each category we compared responses not only to scrambled stimuli, but also to similar stimuli that lacked symbolic meaning. The left visual word form area and a homologous right fusiform region responded similarly to all four categories, but equally to both symbolic and non-symbolic equivalents. Greater response to symbolic than non-symbolic stimuli occurred only in the left inferior frontal and middle temporal gyri, but only for words, and in the case of the left inferior frontal gyri, also for musical notation. A whole-brain analysis comparing symbolic versus non-symbolic stimuli revealed a distributed network of inferior temporooccipital and parietal regions that differed for different symbols. The fusiform gyri are involved in processing the form of many symbolic stimuli, but not specifically for stimuli with symbolic content. Selectivity for stimuli with symbolic content only emerges in the visual word network at the level of the middle temporal and inferior frontal gyri, but is specific for words and musical notation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Detecting delay in visual feedback of an action as a monitor of self recognition.

    PubMed

    Hoover, Adria E N; Harris, Laurence R

    2012-10-01

    How do we distinguish "self" from "other"? The correlation between willing an action and seeing it occur is an important cue. We exploited the fact that this correlation needs to occur within a restricted temporal window in order to obtain a quantitative assessment of when a body part is identified as "self". We measured the threshold and sensitivity (d') for detecting a delay between movements of the finger (of both the dominant and non-dominant hands) and visual feedback as seen from four visual perspectives (the natural view, and mirror-reversed and/or inverted views). Each trial consisted of one presentation with minimum delay and another with a delay of between 33 and 150 ms. Participants indicated which presentation contained the delayed view. We varied the amount of efference copy available for this task by comparing performances for discrete movements and continuous movements. Discrete movements are associated with a stronger efference copy. Sensitivity to detect asynchrony between visual and proprioceptive information was significantly higher when movements were viewed from a "plausible" self perspective compared with when the view was reversed or inverted. Further, we found differences in performance between dominant and non-dominant hand finger movements across the continuous and single movements. Performance varied with the viewpoint from which the visual feedback was presented and on the efferent component such that optimal performance was obtained when the presentation was in the normal natural orientation and clear efferent information was available. Variations in sensitivity to visual/non-visual temporal incongruence with the viewpoint in which a movement is seen may help determine the arrangement of the underlying visual representation of the body.

  19. Characteristics of peripapillary retinal nerve fiber layer in preterm children.

    PubMed

    Wang, Jingyun; Spencer, Rand; Leffler, Joel N; Birch, Eileen E

    2012-05-01

    To examine quantitatively characteristics of the peripapillary retinal nerve fiber layer (RNFL) in preterm children using Fourier-domain optical coherence tomography (FD-OCT). Prospective cross-sectional study. A 3-mm high-resolution FD-OCT peripapillary RNFL circular scan centered on the optic disc was obtained from right eyes of 25 preterm children (10.6 ± 3.7 years old, 8 preterm and 17 with regressed retinopathy of prematurity with normal-appearing posterior poles) and 54 full-term controls (9.8 ± 3.2 years old). Images were analyzed using Spectralis FD-OCT software to obtain average thickness measurements for 6 sectors (temporal superior, temporal, temporal inferior, nasal inferior, nasal, nasal superior), and the global average. The RNFL global average for preterm children was 8% thinner than for full-term controls. In the preterm group, peripapillary RNFL thickness on the temporal side of the disc was 6% thicker than in full-term controls, while all other peripapillary RNFL sectors were 9% to 13% thinner. In the preterm group, temporal sector peripapillary RNFL thickness was correlated with gestational age (r = -0.47, P < .001), with foveal center total thickness (r = 0.48, P = .008, 1-tailed), and with visual acuity (r = 0.42; P = .026, 1-tailed). The significantly thinner RNFL global average for preterm children suggests that prematurity is associated with subclinical optic nerve hypoplasia. Significant correlations between temporal sector RNFL thickness and both the foveal thickness and visual acuity suggest that the peripapillary RNFL is related to abnormalities in macular development as a result of preterm birth. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Unsupervised Neural Network Quantifies the Cost of Visual Information Processing.

    PubMed

    Orbán, Levente L; Chartier, Sylvain

    2015-01-01

    Untrained, "flower-naïve" bumblebees display behavioural preferences when presented with visual properties such as colour, symmetry, spatial frequency and others. Two unsupervised neural networks were implemented to understand the extent to which these models capture elements of bumblebees' unlearned visual preferences towards flower-like visual properties. The computational models, which are variants of Independent Component Analysis and Feature-Extracting Bidirectional Associative Memory, use images of test-patterns that are identical to ones used in behavioural studies. Each model works by decomposing images of floral patterns into meaningful underlying factors. We reconstruct the original floral image using the components and compare the quality of the reconstructed image to the original image. Independent Component Analysis matches behavioural results substantially better across several visual properties. These results are interpreted to support a hypothesis that the temporal and energetic costs of information processing by pollinators served as a selective pressure on floral displays: flowers adapted to pollinators' cognitive constraints.
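
    The reconstruct-and-compare step described in this record is easy to sketch: decompose the images into components, reconstruct each image from those components, and score reconstruction quality. Here an SVD basis stands in for the learned ICA/FEBAM components, since the projection and error computation are the same; the data and dimensions are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical flower-pattern images, flattened to vectors (64x64 pixels).
    images = rng.normal(size=(50, 64 * 64))
    mean_img = images.mean(axis=0)

    # Learn a low-dimensional basis from the centered images. The study used
    # ICA / FEBAM components; an SVD basis stands in for illustration.
    _, _, vt = np.linalg.svd(images - mean_img, full_matrices=False)
    basis = vt[:10]  # top 10 components (rows are orthonormal)

    def reconstruction_error(img, basis, mean):
        """Project an image onto the basis and score reconstruction quality (MSE)."""
        coeffs = basis @ (img - mean)
        recon = mean + basis.T @ coeffs
        return np.mean((img - recon) ** 2)

    err = reconstruction_error(images[0], basis, mean_img)
    print(f"MSE: {err:.3f}")
    ```

    The model comparison in the study then asks which decomposition's reconstruction quality best tracks the bees' behavioural preferences across visual properties.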

  1. Attention distributed across sensory modalities enhances perceptual performance

    PubMed Central

    Mishra, Jyoti; Gazzaley, Adam

    2012-01-01

    This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811

  2. Temporal Visualization for Legal Case Histories.

    ERIC Educational Resources Information Center

    Harris, Chanda; Allen, Robert B.; Plaisant, Catherine; Shneiderman, Ben

    1999-01-01

    Discusses visualization of legal information using a tool for temporal information called "LifeLines." Explores ways "LifeLines" could aid in viewing the links between original case and direct and indirect case histories. Uses the case of Apple Computer, Inc. versus Microsoft Corporation and Hewlett Packard Company to…

  3. Does Temporal Integration Occur for Unrecognizable Words in Visual Crowding?

    PubMed Central

    Zhou, Jifan; Lee, Chia-Lin; Li, Kuei-An; Tien, Yung-Hsuan; Yeh, Su-Ling

    2016-01-01

    Visual crowding—the inability to see an object when it is surrounded by flankers in the periphery—does not block semantic activation: unrecognizable words due to visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on the previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context so as to affect the processing of the ending word. Results from both behavioral (Experiment 1) and Event-Related Potential (Experiment 2 and 3) measures showed congruency effect in only the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration—the simplest kind of temporal semantic integration—did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study. PMID:26890366

  4. Reference frames for spatial frequency in face representation differ in the temporal visual cortex and amygdala.

    PubMed

    Inagaki, Mikio; Fujita, Ichiro

    2011-07-13

    Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.

  5. The dissonance mutation at the no-on-transient-A locus of D. melanogaster: genetic control of courtship song and visual behaviors by a protein with putative RNA-binding motifs.

    PubMed

    Rendahl, K G; Jones, K R; Kulkarni, S J; Bagully, S H; Hall, J C

    1992-02-01

    Genetic and molecular results are here presented revealing that the dissonance (diss) courtship song mutation is an allele of the no-on-transient-A (nonA) locus of Drosophila melanogaster. diss (now called nonAdiss) was originally isolated as a mutant with aberrant pulse song, although it was then noted to exhibit defects in responses to visual stimuli as well. The lack of transient spikes in the electroretinogram (ERG) and optomotor blindness associated with nonAdiss are shown to be similar to the visual abnormalities caused by the original nonA mutations. nonAdiss failed to complement either the ERG or optomotor defects associated with four other nonA mutations. However, all four of these nonA mutants--which were isolated on visual criteria alone--sang a normal courtship song. nonAdiss complemented at least three of the nonA mutations with regard to the singing phenotype, as assessed by a new method for temporal analysis of the male's pulse song. Both visual and song abnormalities caused by nonAdiss were rescued by P-element-mediated transformation with overlapping 11 and 16 kilobase (kb) fragments of genomic DNA (originally cloned from the nonA locus by Jones and Rubin, 1990). Analysis of behavioral phenotypes in transformed flies carrying mutagenized versions of the 11 kb genomic fragment (in a nonAdiss genomic background) localized the rescuing DNA to a region containing an open reading frame that encodes a polypeptide (NONA) with similarity to a family of RNA-binding proteins. Immunohistochemical determination of NONA's spatial and temporal expression revealed that it is localized to the nuclei of cells in many neural and non-neural tissues, at all stages of the life cycle after very early in development. Genetic connections between the control of two quite different behaviors--reproductive and visual--are discussed, along with precedences for generally expressed gene products playing roles in specific behaviors.

  6. Frontotemporal neural systems supporting semantic processing in Alzheimer's disease.

    PubMed

    Peelle, Jonathan E; Powers, John; Cook, Philip A; Smith, Edward E; Grossman, Murray

    2014-03-01

    We hypothesized that semantic memory for object concepts involves both representations of visual feature knowledge in modality-specific association cortex and heteromodal regions that are important for integrating and organizing this semantic knowledge so that it can be used in a flexible, contextually appropriate manner. We examined this hypothesis in an fMRI study of mild Alzheimer's disease (AD). Participants were presented with pairs of printed words and asked whether the words matched on a given visual-perceptual feature (e.g., guitar, violin: SHAPE). The stimuli probed natural kinds and manufactured objects, and the judgments involved shape or color. We found activation of bilateral ventral temporal cortex and left dorsolateral prefrontal cortex during semantic judgments, with AD patients showing less activation of these regions than healthy seniors. Moreover, AD patients showed less ventral temporal activation than did healthy seniors for manufactured objects, but not for natural kinds. We also used diffusion-weighted MRI of white matter to examine fractional anisotropy (FA). Patients with AD showed significantly reduced FA in the superior longitudinal fasciculus and inferior frontal-occipital fasciculus, which carry projections linking temporal and frontal regions of this semantic network. Our results are consistent with the hypothesis that semantic memory is supported in part by a large-scale neural network involving modality-specific association cortex, heteromodal association cortex, and projections between these regions. The semantic deficit in AD thus arises from gray matter disease that affects the representation of feature knowledge and processing its content, as well as white matter disease that interrupts the integrated functioning of this large-scale network.

  7. Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration.

    PubMed

    Ikumi, Nara; Soto-Faraco, Salvador

    2016-01-01

Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue for resolving multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor for cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase in which they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping than when they promoted segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to the planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands.

  9. Short-term memory stores organized by information domain.

    PubMed

    Noyce, Abigail L; Cestero, Nishmar; Shinn-Cunningham, Barbara G; Somers, David C

    2016-04-01

Vision and audition have complementary affinities, with vision excelling in spatial resolution and audition excelling in temporal resolution. Here, we investigated the relationships among the visual and auditory modalities and spatial and temporal short-term memory (STM) using change detection tasks. We created short sequences of visual or auditory items, such that each item within a sequence arose at a unique spatial location at a unique time. On each trial, two successive sequences were presented; subjects attended to either space (the sequence of locations) or time (the sequence of inter-item intervals) and reported whether the patterns of locations or intervals were identical. Each subject completed blocks of unimodal trials (both sequences presented in the same modality) and crossmodal trials (Sequence 1 visual, Sequence 2 auditory, or vice versa) for both spatial and temporal tasks. We found a strong interaction between modality and task: Spatial performance was best on unimodal visual trials, whereas temporal performance was best on unimodal auditory trials. The order of modalities on crossmodal trials also mattered, suggesting that perceptual fidelity at encoding is critical to STM. Critically, no cost was attributable to crossmodal comparison: In both tasks, performance on crossmodal trials was as good as or better than on the weaker unimodal trials. STM representations of space and time can guide change detection in either the visual or the auditory modality, suggesting that the temporal or spatial organization of STM may supersede sensory-specific organization.

  10. Visual search of cyclic spatio-temporal events

    NASA Astrophysics Data System (ADS)

    Gautier, Jacques; Davoine, Paule-Annick; Cunty, Claire

    2018-05-01

The analysis of spatio-temporal events, and especially of the relationships between their dimensions (space, time, and thematic attributes), can be carried out with geovisualization interfaces. However, few geovisualization tools integrate the cyclic dimension of spatio-temporal event series (natural or social events). Time Coil and Time Wave diagrams represent both linear time and cyclic time. By introducing a cyclic temporal scale, these diagrams can highlight the cyclic characteristics of spatio-temporal events. However, the settable cyclic temporal scales are limited to common durations such as days or months. As a result, these diagrams cannot be used to visualize cyclic events that reappear with an unusual period, and they do not support a visual search for such events. Nor do they make it possible to identify relationships between the cyclic behavior of events and their spatial features, in particular to identify localized cyclic events. The lack of means to represent cyclic time outside the temporal diagram of multi-view geovisualization interfaces limits the analysis of relationships between the cyclic reappearance of events and their other dimensions. In this paper, we propose a method and a geovisualization tool, based on extensions of Time Coil and Time Wave, that support a visual search for cyclic events by allowing any duration to be set as the diagram's cyclic temporal scale. We also propose a symbology approach that pushes the representation of cyclic time into the map itself, in order to improve the analysis of relationships between space and the cyclic behavior of events.
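    The core idea here — treating the cyclic period as a free parameter rather than a fixed calendar unit — can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the function names are hypothetical. Each event timestamp is mapped to a phase within a user-chosen period, and the concentration of phases then indicates whether events reappear with that period.

    ```python
    import math

    def cyclic_phase(timestamps, period):
        """Map event times onto a cyclic temporal scale of arbitrary
        duration: each timestamp becomes a phase in [0, 1)."""
        return [(t % period) / period for t in timestamps]

    def phase_concentration(phases):
        """Mean resultant length R in [0, 1]: R close to 1 means the
        events cluster at one phase (they reappear with roughly the
        tested period); R close to 0 means no such regularity."""
        n = len(phases)
        x = sum(math.cos(2 * math.pi * p) for p in phases) / n
        y = sum(math.sin(2 * math.pi * p) for p in phases) / n
        return math.hypot(x, y)
    ```

    Scanning the period over a range of candidate durations and plotting R against each one would be one way to search for events that reappear with an unusual period, which is the gap the authors aim to fill.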

  11. Developmental changes in the neural influence of sublexical information on semantic processing.

    PubMed

    Lee, Shu-Hui; Booth, James R; Chou, Tai-Li

    2015-07-01

    Functional magnetic resonance imaging (fMRI) was used to examine the developmental changes in a group of normally developing children (aged 8-12) and adolescents (aged 13-16) during semantic processing. We manipulated association strength (i.e. a global reading unit) and semantic radical (i.e. a local reading unit) to explore the interaction of lexical and sublexical semantic information in making semantic judgments. In the semantic judgment task, two types of stimuli were used: visually-similar (i.e. shared a semantic radical) versus visually-dissimilar (i.e. did not share a semantic radical) character pairs. Participants were asked to indicate if two Chinese characters, arranged according to association strength, were related in meaning. The results showed greater developmental increases in activation in left angular gyrus (BA 39) in the visually-similar compared to the visually-dissimilar pairs for the strong association. There were also greater age-related increases in angular gyrus for the strong compared to weak association in the visually-similar pairs. Both of these results suggest that shared semantics at the sublexical level facilitates the integration of overlapping features at the lexical level in older children. In addition, there was a larger developmental increase in left posterior middle temporal gyrus (BA 21) for the weak compared to strong association in the visually-dissimilar pairs, suggesting conflicting sublexical information placed greater demands on access to lexical representations in the older children. All together, these results suggest that older children are more sensitive to sublexical information when processing lexical representations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Visualization of spatial-temporal data based on 3D virtual scene

    NASA Astrophysics Data System (ADS)

    Wang, Xianghong; Liu, Jiping; Wang, Yong; Bi, Junfang

    2009-10-01

The main purpose of this paper is to realize three-dimensional dynamic visualization of spatio-temporal data in a three-dimensional virtual scene, using 3D visualization technology combined with GIS, so that people's abilities to cognize time and space are enhanced through dynamic symbol design and interactive expression. Using particle systems, three-dimensional simulation, virtual reality, and other visual means, we can simulate the situations produced by the spatial locations and attribute information of geographical entities changing over time, explore and analyze their movement and transformation rules through interaction, and replay history or forecast the future. The main research objects in this paper are vehicle tracks and typhoon paths as spatio-temporal data: through three-dimensional dynamic simulation of these tracks, we realize timely monitoring of trends and replay of historical tracks. Visualization techniques for spatio-temporal data in a three-dimensional virtual scene provide an excellent cognitive instrument for spatio-temporal information, one that not only shows changes and developments in the situation clearly, but can also be used for the prediction and deduction of future developments and changes.

  13. Temporal information entropy of the Blood-Oxygenation Level-Dependent signals increases in the activated human primary visual cortex

    NASA Astrophysics Data System (ADS)

    DiNuzzo, Mauro; Mascali, Daniele; Moraschi, Marta; Bussu, Giorgia; Maraviglia, Bruno; Mangia, Silvia; Giove, Federico

    2017-02-01

    Time-domain analysis of blood-oxygenation level-dependent (BOLD) signals allows the identification of clusters of voxels responding to photic stimulation in primary visual cortex (V1). However, the characterization of information encoding into temporal properties of the BOLD signals of an activated cluster is poorly investigated. Here, we used Shannon entropy to determine spatial and temporal information encoding in the BOLD signal within the most strongly activated area of the human visual cortex during a hemifield photic stimulation. We determined the distribution profile of BOLD signals during epochs at rest and under stimulation within small (19-121 voxels) clusters designed to include only voxels driven by the stimulus as highly and uniformly as possible. We found consistent and significant increases (2-4% on average) in temporal information entropy during activation in contralateral but not ipsilateral V1, which was mirrored by an expected loss of spatial information entropy. These opposite changes coexisted with increases in both spatial and temporal mutual information (i.e. dependence) in contralateral V1. Thus, we showed that the first cortical stage of visual processing is characterized by a specific spatiotemporal rearrangement of intracluster BOLD responses. Our results indicate that while in the space domain BOLD maps may be incapable of capturing the functional specialization of small neuronal populations due to relatively low spatial resolution, some information encoding may still be revealed in the temporal domain by an increase of temporal information entropy.
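    The temporal measure used here, the Shannon entropy of a voxel's signal over time, can be estimated from the distribution of signal amplitudes. The sketch below is a generic histogram-based estimator, not the authors' exact pipeline; the bin count is an assumption for illustration.

    ```python
    import numpy as np

    def temporal_entropy(ts, n_bins=16):
        """Shannon entropy (in bits) of one voxel's time series,
        estimated from a histogram of its amplitude values.
        n_bins is an illustrative choice, not taken from the study."""
        counts, _ = np.histogram(ts, bins=n_bins)
        p = counts / counts.sum()
        p = p[p > 0]            # empty bins contribute 0 * log 0 := 0
        return float(-np.sum(p * np.log2(p)))
    ```

    A signal whose amplitudes spread over more histogram bins yields higher entropy; comparing this estimate between rest and stimulation epochs, voxel by voxel within a cluster, mirrors the kind of contrast reported here.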

  14. Temporal Influence on Awareness

    DTIC Science & Technology

    1995-12-01

    [Fragment of the report's list of figures: Test Setup Timing, measured vs. expected modal delays (in ms); Experiment I, visual and auditory stimuli presented simultaneously (visual-auditory delay = 0 ms, visual-visual delay = 0 ms); Experiment II, visual and auditory stimuli presented in order (visual-auditory delay = 0 ms, visual-visual delay = variable).]

  15. Effects of Spatio-Temporal Aliasing on Pilot Performance in Active Control Tasks

    NASA Technical Reports Server (NTRS)

    Zaal, Peter; Sweet, Barbara

    2010-01-01

Spatio-temporal aliasing affects pilot performance and control behavior. With increasing refresh rates, control behavior changed significantly: visual gain and neuromuscular frequency increased, while visual time delay decreased. Tracking performance also improved: RMS error decreased and crossover frequency increased.

  16. Visualizing Interaction Patterns in Online Discussions and Indices of Cognitive Presence

    ERIC Educational Resources Information Center

    Gibbs, William J.

    2006-01-01

    This paper discusses Mapping Temporal Relations of Discussions Software (MTRDS), a Web-based application that visually represents the temporal relations of online discussions. MTRDS was used to observe interaction characteristics of three online discussions. In addition, the research employed the Practical Inquiry Model to identify indices of…

  17. Judgments of auditory-visual affective congruence in adolescents with and without autism: a pilot study of a new task using fMRI.

    PubMed

    Loveland, Katherine A; Steinberg, Joel L; Pearson, Deborah A; Mansour, Rosleen; Reddoch, Stacy

    2008-10-01

One of the most widely reported developmental deficits associated with autism is difficulty perceiving and expressing emotion appropriately. We examined brain activation associated with performance on a new task, the Emotional Congruence Task, which requires judging the affective congruence of facial expression and voice, as compared with judging their sex congruence. Participants in this pilot study were adolescents with normal IQ, with (n = 5) or without (n = 4) autism. In the emotional congruence condition, as compared to the sex congruence condition, controls had significantly more activation than the autism group in the orbitofrontal cortex; the superior temporal, parahippocampal, and posterior cingulate gyri; and occipital regions. Unlike controls, the autism group did not show significantly greater prefrontal activation during the emotional congruence condition, but did during the sex congruence condition. Results indicate the Emotional Congruence Task can be used successfully to assess brain activation and behavior associated with the integration of auditory and visual information for emotion. Although the groups were small, the results suggest that brain activity while performing the Emotional Congruence Task differed between adolescents with and without autism in fronto-limbic areas and in the superior temporal region. These findings must be confirmed using larger samples of participants.

  18. Visual integration enhances associative memory equally for young and older adults without reducing hippocampal encoding activation.

    PubMed

    Memel, Molly; Ryan, Lee

    2017-06-01

The ability to remember associations between previously unrelated pieces of information is often impaired in older adults (Naveh-Benjamin, 2000). Unitization, the process of creating a perceptually or semantically integrated representation that includes both items in an associative pair, attenuates age-related associative deficits (Bastin et al., 2013; Ahmad et al., 2015; Zheng et al., 2015). Compared to non-unitized pairs, unitized pairs may rely less on hippocampally-mediated binding associated with recollection, and more on familiarity-based processes mediated by perirhinal cortex (PRC) and parahippocampal cortex (PHC). While unitization of verbal materials improves associative memory in older adults, less is known about the impact of visual integration. The present study determined whether visual integration improves associative memory in older adults by minimizing the need for hippocampal (HC) recruitment and shifting encoding to non-hippocampal medial temporal structures, such as the PRC and PHC. Young and older adults were presented with a series of objects paired with naturalistic scenes while undergoing fMRI scanning, and were later given an associative memory test. Visual integration was varied by presenting the object either next to the scene (Separated condition) or visually integrated within the scene (Combined condition). Visual integration improved associative memory among young and older adults to a similar degree by increasing the hit rate for intact pairs, but without increasing false alarms for recombined pairs, suggesting enhanced recollection rather than increased reliance on familiarity. Also contrary to expectations, visual integration resulted in increased hippocampal activation in both age groups, along with increases in PRC and PHC activation. Activation in all three MTL regions predicted discrimination performance during the Separated condition in young adults, while only a marginal relationship between PRC activation and performance was observed during the Combined condition. Older adults showed less overall activation in MTL regions compared to young adults, and associative memory performance was most strongly predicted by prefrontal, rather than MTL, activation. We suggest that visual integration benefits both young and older adults similarly, and provides a special case of unitization that may be mediated by recollective, rather than familiarity-based encoding processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Emergence of artistic talent in frontotemporal dementia.

    PubMed

    Miller, B L; Cummings, J; Mishkin, F; Boone, K; Prince, F; Ponton, M; Cotman, C

    1998-10-01

    To describe the clinical, neuropsychological, and imaging features of five patients with frontotemporal dementia (FTD) who acquired new artistic skills in the setting of dementia. Creativity in the setting of dementia has recently been reported. We describe five patients who became visual artists in the setting of FTD. Sixty-nine FTD patients were interviewed regarding visual abilities. Five became artists in the early stages of FTD. Their history, artistic process, neuropsychology, and anatomy are described. On SPECT or pathology, four of the five patients had the temporal variant of FTD in which anterior temporal lobes are involved but the dorsolateral frontal cortex is spared. Visual skills were spared but language and social skills were devastated. Loss of function in the anterior temporal lobes may lead to the "facilitation" of artistic skills. Patients with the temporal lobe variant of FTD offer a window into creativity.

  20. Spatial limitations of fast temporal segmentation are best modeled by V1 receptive fields.

    PubMed

    Goodbourn, Patrick T; Forte, Jason D

    2013-11-22

    The fine temporal structure of events influences the spatial grouping and segmentation of visual-scene elements. Although adjacent regions flickering asynchronously at high temporal frequencies appear identical, the visual system signals a boundary between them. These "phantom contours" disappear when the gap between regions exceeds a critical value (g(max)). We used g(max) as an index of neuronal receptive-field size to compare with known receptive-field data from along the visual pathway and thus infer the location of the mechanism responsible for fast temporal segmentation. Observers viewed a circular stimulus reversing in luminance contrast at 20 Hz for 500 ms. A gap of constant retinal eccentricity segmented each stimulus quadrant; on each trial, participants identified a target quadrant containing counterphasing inner and outer segments. Through varying the gap width, g(max) was determined at a range of retinal eccentricities. We found that g(max) increased from 0.3° to 0.8° for eccentricities from 2° to 12°. These values correspond to receptive-field diameters of neurons in primary visual cortex that have been reported in single-cell and fMRI studies and are consistent with the spatial limitations of motion detection. In a further experiment, we found that modulation sensitivity depended critically on the length of the contour and could be predicted by a simple model of spatial summation in early cortical neurons. The results suggest that temporal segmentation is achieved by neurons at the earliest cortical stages of visual processing, most likely in primary visual cortex.

  1. Clinical utility of the Wechsler Memory Scale - Fourth Edition (WMS-IV) in patients with intractable temporal lobe epilepsy.

    PubMed

    Bouman, Zita; Elhorst, Didi; Hendriks, Marc P H; Kessels, Roy P C; Aldenkamp, Albert P

    2016-02-01

    The Wechsler Memory Scale (WMS) is one of the most widely used test batteries to assess memory functions in patients with brain dysfunctions of different etiologies. This study examined the clinical validation of the Dutch Wechsler Memory Scale - Fourth Edition (WMS-IV-NL) in patients with temporal lobe epilepsy (TLE). The sample consisted of 75 patients with intractable TLE, who were eligible for epilepsy surgery, and 77 demographically matched healthy controls. All participants were examined with the WMS-IV-NL. Patients with TLE performed significantly worse than healthy controls on all WMS-IV-NL indices and subtests (p<.01), with the exception of the Visual Working Memory Index including its contributing subtests, as well as the subtests Logical Memory I, Verbal Paired Associates I, and Designs II. In addition, patients with mesiotemporal abnormalities performed significantly worse than patients with lateral temporal abnormalities on the subtests Logical Memory I and Designs II and all the indices (p<.05), with the exception of the Auditory Memory Index and Visual Working Memory Index. Patients with either a left or a right temporal focus performed equally on all WMS-IV-NL indices and subtests (F(15, 50)=.70, p=.78), as well as the Auditory-Visual discrepancy score (t(64)=-1.40, p=.17). The WMS-IV-NL is capable of detecting memory problems in patients with TLE, indicating that it is a sufficiently valid memory battery. Furthermore, the findings support previous research showing that the WMS-IV has limited value in identifying material-specific memory deficits in presurgical patients with TLE. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Exploring associations between gaze patterns and putative human mirror neuron system activity.

    PubMed

    Donaldson, Peter H; Gurvich, Caroline; Fielding, Joanne; Enticott, Peter G

    2015-01-01

    The human mirror neuron system (MNS) is hypothesized to be crucial to social cognition. Given that key MNS-input regions such as the superior temporal sulcus are involved in biological motion processing, and mirror neuron activity in monkeys has been shown to vary with visual attention, aberrant MNS function may be partly attributable to atypical visual input. To examine the relationship between gaze pattern and interpersonal motor resonance (IMR; an index of putative MNS activity), healthy right-handed participants aged 18-40 (n = 26) viewed videos of transitive grasping actions or static hands, whilst the left primary motor cortex received transcranial magnetic stimulation. Motor-evoked potentials recorded in contralateral hand muscles were used to determine IMR. Participants also underwent eyetracking analysis to assess gaze patterns whilst viewing the same videos. No relationship was observed between predictive gaze and IMR. However, IMR was positively associated with fixation counts in areas of biological motion in the videos, and negatively associated with object areas. These findings are discussed with reference to visual influences on the MNS, and the possibility that MNS atypicalities might be influenced by visual processes such as aberrant gaze pattern.

  3. Differential involvement of the posterior temporal cortex in mentalizing but not perspective taking

    PubMed Central

    Aumann, Carolin; Santos, Natacha S.; Bewernick, Bettina H.; Eickhoff, Simon B.; Newen, Albert; Shah, N. Jon; Fink, Gereon R.; Vogeley, Kai

    2008-01-01

    Understanding and predicting other people's mental states and behavior are important prerequisites for social interactions. The capacity to attribute mental states such as desires, thoughts or intentions to oneself or others is referred to as mentalizing. The right posterior temporal cortex at the temporal–parietal junction has been associated with mentalizing but also with taking someone else's spatial perspective onto the world—possibly an important prerequisite for mentalizing. Here, we directly compared the neural correlates of mentalizing and perspective taking using the same stimulus material. We found significantly increased neural activity in the right posterior segment of the superior temporal sulcus only during mentalizing but not perspective taking. Our data further clarify the role of the posterior temporal cortex in social cognition by showing that it is involved in processing information from socially salient visual cues in situations that require the inference about other people's mental states. PMID:19015120

  4. Lateralization of spatial rather than temporal attention underlies the left hemifield advantage in rapid serial visual presentation.

    PubMed

    Asanowicz, Dariusz; Kruse, Lena; Śmigasiewicz, Kamila; Verleger, Rolf

    2017-11-01

In bilateral rapid serial visual presentation (RSVP), the second of two targets, T1 and T2, is better identified in the left visual field (LVF) than in the right visual field (RVF). This LVF advantage may reflect hemispheric asymmetry in temporal attention and/or in spatial orienting of attention. Participants performed two tasks: the "standard" bilateral RSVP task (Exp. 1) and its unilateral variant (Exp. 1 & 2). In the bilateral task, spatial location was uncertain, thus target identification involved stimulus-driven spatial orienting. In the unilateral task, the targets were presented block-wise in the LVF or RVF only, such that no spatial orienting was needed for target identification. Temporal attention was manipulated in both tasks by varying the T1-T2 lag. The results showed that the LVF advantage disappeared when the involvement of stimulus-driven spatial orienting was eliminated, whereas the manipulation of temporal attention had no effect on the asymmetry. In conclusion, the results do not support the hypothesis of hemispheric asymmetry in temporal attention, and provide further evidence that the LVF advantage reflects right-hemisphere predominance in stimulus-driven orienting of spatial attention. These conclusions fit evidence that temporal attention is implemented by bilateral parietal areas and spatial attention by the right-lateralized ventral frontoparietal network. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Visual and auditory perception in preschool children at risk for dyslexia.

    PubMed

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

Recently, there has been renewed interest in the perceptual problems of dyslexics. One contentious issue in this area has been the nature of the perceptual deficit; another is the causal role of this deficit in dyslexia. Most studies have been carried out with literate adults and children; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia on auditory and visual temporal-order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that both the visual and the auditory perception of children at risk for dyslexia are impaired. The comparison between groups shows that the achievement of children at risk was lower than that of children without risk for dyslexia on the temporal tasks. There were no differences between groups on the auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. We conclude that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not a consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. What can neuromorphic event-driven precise timing add to spike-based pattern recognition?

    PubMed

Akolkar, Himanshu; Meyer, Cedric; Clady, Xavier; Marre, Olivier; Bartolozzi, Chiara; Panzeri, Stefano; Benosman, Ryad

    2015-03-01

This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings is currently at the basis of almost every spike-based model of biological visual systems. The use of images naturally leads to generating artificial, incorrect, and redundant spike timings and, more importantly, contradicts biological findings indicating that visual processing is massively parallel and asynchronous, with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, yielding output that is optimally sparse in space and time: each pixel is sampled individually and precisely timed, and only when new (previously unknown) information is available (event based). This letter uses the high-temporal-resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30-60 Hz). The use of information theory to characterize separability between classes at each temporal resolution shows that high-temporal-resolution acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision. Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times, as reported in the retina, offers considerable advantages for neuro-inspired visual computations.
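The pixel-individual level-crossing sampling described in this record can be sketched in a few lines. This is an illustrative toy (the signal, threshold value, and event format are assumptions for the example), not the sensor's actual pipeline: an event is emitted for one pixel only when its signal moves by more than a threshold from the level at the previous event, so a static input produces no output at all.

```python
def level_crossing_events(samples, threshold=0.2):
    """Convert a densely sampled intensity trace for one pixel into
    sparse (time, polarity) events, emitted only when the signal has
    moved by at least `threshold` from the last event's reference level."""
    events = []
    ref = samples[0]
    for t, value in enumerate(samples[1:], start=1):
        delta = value - ref
        if abs(delta) >= threshold:
            events.append((t, 1 if delta > 0 else -1))
            ref = value  # reset the reference at each emitted event
    return events

# A slow ramp below threshold produces no events; a fast change
# produces a tight burst of precisely timed events.
trace = [0.0, 0.05, 0.1, 0.15, 0.5, 0.9, 0.9, 0.9]
print(level_crossing_events(trace))  # -> [(4, 1), (5, 1)]
```

The sparsity is the point: output scales with how much the input changes, not with a fixed frame clock.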

  7. Intracranial Cortical Responses during Visual–Tactile Integration in Humans

    PubMed Central

    Quinn, Brian T.; Carlson, Chad; Doyle, Werner; Cash, Sydney S.; Devinsky, Orrin; Spence, Charles; Halgren, Eric

    2014-01-01

Sensory integration of touch and sight is crucial to perceiving and navigating the environment. While recent evidence from other sensory modality combinations suggests that low-level sensory areas integrate multisensory information at early processing stages, little is known about how the brain combines visual and tactile information. We investigated the dynamics of multisensory integration between vision and touch using the high spatial and temporal resolution of intracranial electrocorticography in humans. We present a novel, two-step metric for defining multisensory integration. The first step compares the bimodal response with the sum of the unisensory responses; the second step eliminates the possibility that double addition of sensory responses could be misinterpreted as interactions. Using these criteria, averaged local field potentials and high-gamma-band power demonstrate a functional processing cascade whereby sensory integration occurs late, both anatomically and temporally, in the temporo–parieto–occipital junction (TPOJ) and dorsolateral prefrontal cortex. Results further suggest two neurophysiologically distinct and temporally separated integration mechanisms in TPOJ, while providing direct evidence for local suppression as a dominant mechanism for synthesizing visual and tactile input. These results tend to support earlier concepts of multisensory integration as relatively late and centered in tertiary multimodal association cortices. PMID:24381279

  8. Imaging multi-scale dynamics in vivo with spiral volumetric optoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Deán-Ben, X. Luís.; Fehm, Thomas F.; Ford, Steven J.; Gottschalk, Sven; Razansky, Daniel

    2017-03-01

Imaging dynamics in living organisms is essential for understanding biological complexity. While multiple imaging modalities are often required to cover both microscopic and macroscopic spatial scales, dynamic phenomena may also extend over different temporal scales, necessitating the use of different imaging technologies based on the trade-off between temporal resolution and effective field of view. Optoacoustic (photoacoustic) imaging has been shown to offer the exclusive capability to link multiple spatial scales, ranging from organelles to entire organs of small animals. Yet efficient visualization of multi-scale dynamics has remained difficult with state-of-the-art systems due to inefficient trade-offs between image acquisition speed and effective field of view. Herein, we introduce a spiral volumetric optoacoustic tomography (SVOT) technique that provides spectrally enriched, high-resolution optical-absorption contrast across multiple spatio-temporal scales. We demonstrate that SVOT can be used to monitor various in vivo dynamics, from video-rate volumetric visualization of cardiac-associated motion in whole organs to high-resolution imaging of pharmacokinetics in larger regions. This multi-scale dynamic imaging capability thus emerges as a powerful and unique feature of optoacoustics that adds to the multiple advantages of this technology for structural, functional and molecular imaging.

  9. SEMANTIC DEMENTIA AND PERSISTING WERNICKE’S APHASIA: LINGUISTIC AND ANATOMICAL PROFILES

    PubMed Central

    Ogar, JM; Baldo, JV; Wilson, SM; Brambati, SM; Miller, BL; Dronkers, NF; Gorno-Tempini, ML

    2011-01-01

Few studies have directly compared the clinical and anatomical characteristics of patients with progressive aphasia to those of patients with aphasia caused by stroke. In the current study we examined fluent forms of aphasia in these two groups, specifically semantic dementia (SD) and persisting Wernicke's aphasia (WA) due to stroke. We compared 10 patients with SD to 10 age- and education-matched patients with WA in three language domains: language comprehension (single words and sentences), spontaneous speech, and visual semantics. Neuroanatomical involvement was analyzed using disease-specific image analysis techniques: voxel-based morphometry (VBM) for patients with SD and overlays of lesion masks for patients with WA. Patients with SD and WA were both impaired on tasks that involved visual semantics, but patients with SD were less impaired in spontaneous speech and sentence comprehension. The anatomical findings showed that different regions were most affected in the two disorders: the left anterior temporal lobe in SD and the left posterior middle temporal gyrus in chronic WA. This study highlights that the two syndromes classically associated with language comprehension deficits in aphasia due to stroke and neurodegenerative disease are clinically distinct, most likely because of distinct distributions of damage in the temporal lobe. PMID:21315437

  10. Ultrafast dynamic contrast-enhanced mri of the breast using compressed sensing: breast cancer diagnosis based on separate visualization of breast arteries and veins.

    PubMed

    Onishi, Natsuko; Kataoka, Masako; Kanao, Shotaro; Sagawa, Hajime; Iima, Mami; Nickel, Marcel Dominik; Toi, Masakazu; Togashi, Kaori

    2018-01-01

To evaluate the feasibility of ultrafast dynamic contrast-enhanced (UF-DCE) magnetic resonance imaging (MRI) with compressed sensing (CS) for the separate identification of breast arteries/veins, and to perform temporal evaluations of breast arteries and veins with a focus on the association with ipsilateral cancers. Our Institutional Review Board approved this retrospective study. Twenty-five female patients who underwent UF-DCE MRI at 3T were included. UF-DCE MRI consisting of 20 continuous frames was acquired using a prototype 3D gradient-echo volumetric interpolated breath-hold sequence including a CS reconstruction: temporal resolution, 3.65 sec/frame; spatial resolution, 0.9 × 1.3 × 2.5 mm. Two readers analyzed 19 maximum intensity projection images reconstructed from subtracted images, separately identified breast arteries/veins and the earliest frame in which each was visualized, and calculated the time interval between arterial and venous visualization (A-V interval) for each breast. In total, 49 breasts including 31 lesions (breast cancer, 16; benign lesion, 15) were identified. In 39 of the 49 breasts (breasts with cancers, 16; breasts with benign lesions, 10; breasts with no lesions, 13), both breast arteries and veins were separately identified. The A-V intervals for breasts with cancers were significantly shorter than those for breasts with benign lesions (P = 0.043) and no lesions (P = 0.007). UF-DCE MRI using CS enables the separate identification of breast arteries/veins. Temporal evaluations calculating the time interval between arterial and venous visualization might be helpful in differentiating ipsilateral breast cancers from benign lesions. Level of Evidence: 3. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2018;47:97-104. © 2017 International Society for Magnetic Resonance in Medicine.
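Given the reported temporal resolution of 3.65 sec/frame, the A-V interval reduces to simple frame arithmetic. The sketch below assumes 1-based frame indices for when the artery and vein first appear; it only illustrates the calculation, not the readers' image analysis.

```python
FRAME_INTERVAL_S = 3.65  # temporal resolution reported in the abstract

def av_interval(artery_frame, vein_frame, frame_interval=FRAME_INTERVAL_S):
    """Time in seconds between the earliest arterial and earliest venous
    visualization, given the frame index in which each first appears."""
    if vein_frame < artery_frame:
        raise ValueError("vein should not be visualized before the artery")
    return (vein_frame - artery_frame) * frame_interval

# e.g. artery first seen in frame 3, vein in frame 6 -> 3 * 3.65 s
print(av_interval(3, 6))  # -> 10.95
```

A shorter interval for a given breast corresponds to faster venous return, the feature the study associates with ipsilateral cancers.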

  11. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute to this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times, and participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found that the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity depend not only on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased relative to a fixed action-outcome delay. This suggests that participants learn action-based predictions of audiovisual outcomes and adapt their temporal perception of outcome events based on such predictions. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  12. Activations in temporal areas using visual and auditory naming stimuli: A language fMRI study in temporal lobe epilepsy.

    PubMed

    Gonzálvez, Gloria G; Trimmel, Karin; Haag, Anja; van Graan, Louis A; Koepp, Matthias J; Thompson, Pamela J; Duncan, John S

    2016-12-01

Verbal fluency functional MRI (fMRI) is used for predicting language deficits after anterior temporal lobe resection (ATLR) for temporal lobe epilepsy (TLE), but it primarily engages frontal lobe areas. In this observational study we investigated fMRI paradigms using visual and auditory stimuli, which predominantly involve language areas resected during ATLR. Twenty-three controls and 33 patients (20 left (LTLE), 13 right (RTLE)) were assessed using three fMRI paradigms: verbal fluency; auditory naming with a contrast of auditory reversed speech; and picture naming with a contrast of scrambled pictures and blurred faces. Group analysis showed bilateral temporal activations for auditory naming and picture naming. Correcting for auditory and visual input (by subtracting activations resulting from auditory reversed speech and scrambled pictures/blurred faces, respectively) resulted in left-lateralised activations for patients and controls, more pronounced for LTLE than for RTLE patients. Individual subject activations at a threshold of T > 2.5, extent > 10 voxels, showed that verbal fluency activated predominantly the left inferior frontal gyrus (IFG) in 90% of LTLE, 92% of RTLE, and 65% of controls, compared to right IFG activations in only 15% of LTLE and RTLE and 26% of controls. Middle temporal (MTG) or superior temporal gyrus (STG) activations were seen on the left in 30% of LTLE, 23% of RTLE, and 52% of controls, and on the right in 15% of LTLE, 15% of RTLE, and 35% of controls. Auditory naming activated temporal areas more frequently than did verbal fluency (LTLE: 93%/73%; RTLE: 92%/58%; controls: 82%/70% (left/right)). Controlling for auditory input resulted in predominantly left-sided temporal activations. Picture naming resulted in temporal lobe activations less frequently than did auditory naming (LTLE: 65%/55%; RTLE: 53%/46%; controls: 52%/35% (left/right)). Controlling for visual input had left-lateralising effects. Auditory and picture naming activated temporal lobe structures, which are resected during ATLR, more frequently than did verbal fluency. Controlling for auditory and visual input resulted in more left-lateralised activations. We hypothesise that these paradigms may be more predictive of postoperative language decline than verbal fluency fMRI. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. The role of the human pulvinar in visual attention and action: evidence from temporal-order judgment, saccade decision, and antisaccade tasks.

    PubMed

    Arend, Isabel; Machado, Liana; Ward, Robert; McGrath, Michelle; Ro, Tony; Rafal, Robert D

    2008-01-01

    The pulvinar nucleus of the thalamus has been considered as a key structure for visual attention functions (Grieve, K.L. et al. (2000). Trends Neurosci., 23: 35-39; Shipp, S. (2003). Philos. Trans. R. Soc. Lond. B Biol. Sci., 358(1438): 1605-1624). During the past several years, we have studied the role of the human pulvinar in visual attention and oculomotor behaviour by testing a small group of patients with unilateral pulvinar lesions. Here we summarize some of these findings, and present new evidence for the role of this structure in both eye movements and visual attention through two versions of a temporal-order judgment task and an antisaccade task. Pulvinar damage induces an ipsilesional bias in perceptual temporal-order judgments and in saccadic decision, and also increases the latency of antisaccades away from contralesional targets. The demonstration that pulvinar damage affects both attention and oculomotor behaviour highlights the role of this structure in the integration of visual and oculomotor signals and, more generally, its role in flexibly linking visual stimuli with context-specific motor responses.

  14. Magnifying visual target information and the role of eye movements in motor sequence learning.

    PubMed

    Massing, Matthias; Blandin, Yannick; Panzer, Stefan

    2016-01-01

An experiment investigated the influence of eye movements on learning a simple motor sequence task when the visual display was magnified. The task was to reproduce a 1300 ms spatial-temporal pattern of elbow flexions and extensions. The spatial-temporal pattern was displayed in front of the participants. Participants were randomly assigned to four groups differing in eye movements (free to use their eyes/instructed to fixate) and the visual display (small/magnified). All participants performed a pre-test, an acquisition phase, a delayed retention test, and a transfer test. The results indicated that participants in each practice condition increased their performance during acquisition. The participants who were permitted to use their eyes with the magnified visual display outperformed those who were instructed to fixate on the magnified visual display. When a small visual display was used, the instruction to fixate induced no performance decrements compared to participants who were permitted to use their eyes during acquisition. The findings demonstrate that a spatial-temporal pattern can be learned without eye movements, but being permitted to use eye movements facilitates response production when the visual angle is increased. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Aging and Visual Function of Military Pilots: A Review

    DTIC Science & Technology

    1982-08-01

… This work relates to Department of the Navy Contract N00014-80-C-0159 issued by the Office of Naval Research. … loss with age in the temporal resolving power of the visual system. Temporally contiguous visual events that would be seen as separate …

  16. Spatio-Temporal Story Mapping Animation Based On Structured Causal Relationships Of Historical Events

    NASA Astrophysics Data System (ADS)

    Inoue, Y.; Tsuruoka, K.; Arikawa, M.

    2014-04-01

In this paper, we propose a user interface that displays visual animations on geographic maps and timelines, depicting historical stories by representing causal relationships among events over time. We have been developing an experimental software system for the spatio-temporal visualization of historical stories on tablet computers. Our proposed system helps people learn historical stories effectively through visual animations based on hierarchical structures of timelines and maps at different scales.
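A minimal sketch of how events with causal links might be structured for such a timeline/map animation. The `Event` class, its field names, and the chronological ordering rule are hypothetical illustrations, not the authors' actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A historical event anchored in space and time; `causes` lists the
    names of earlier events this one causally follows."""
    name: str
    year: int
    lat: float
    lon: float
    causes: list = field(default_factory=list)

def animation_order(events):
    """Order events chronologically so a map/timeline animation draws
    each event after the events it depends on (assuming a cause never
    postdates its effect)."""
    return sorted(events, key=lambda e: e.year)

timeline = [
    Event("Treaty signed", 1648, 50.9, 7.6),
    Event("War begins", 1618, 50.1, 14.4),
    Event("Battle", 1620, 50.0, 14.3, causes=["War begins"]),
]
for ev in animation_order(timeline):
    print(ev.year, ev.name)
```

An animation layer would then pan the map to each event's coordinates in this order, drawing arrows along the `causes` links to convey the story structure.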

  17. Short temporal asynchrony disrupts visual object recognition

    PubMed Central

    Singer, Jedediah M.; Kreiman, Gabriel

    2014-01-01

    Humans can recognize objects and scenes in a small fraction of a second. The cascade of signals underlying rapid recognition might be disrupted by temporally jittering different parts of complex objects. Here we investigated the time course over which shape information can be integrated to allow for recognition of complex objects. We presented fragments of object images in an asynchronous fashion and behaviorally evaluated categorization performance. We observed that visual recognition was significantly disrupted by asynchronies of approximately 30 ms, suggesting that spatiotemporal integration begins to break down with even small deviations from simultaneity. However, moderate temporal asynchrony did not completely obliterate recognition; in fact, integration of visual shape information persisted even with an asynchrony of 100 ms. We describe the data with a concise model based on the dynamic reduction of uncertainty about what image was presented. These results emphasize the importance of timing in visual processing and provide strong constraints for the development of dynamical models of visual shape recognition. PMID:24819738

  18. Clinical implications of parallel visual pathways.

    PubMed

    Bassi, C J; Lehmkuhle, S

    1990-02-01

Visual information travels from the retina to visual cortical areas along at least two parallel pathways. In this paper, anatomical and physiological evidence is presented to demonstrate the existence of, and trace, these two pathways throughout the visual systems of the cat, primate, and human. Physiological and behavioral experiments are discussed which establish that these two pathways are differentially sensitive to stimuli that vary in spatial and temporal frequency. One pathway (M-pathway) is more sensitive to coarse visual form that is modulated or moving at fast rates, whereas the other pathway (P-pathway) is more sensitive to spatial detail that is stationary or moving at slow rates. This difference between the M- and P-pathways is related to some spatial and temporal effects observed in humans. Furthermore, evidence is presented that certain diseases selectively compromise the functioning of the M- or P-pathway (i.e., glaucoma, Alzheimer's disease, and anisometropic amblyopia), and some of the spatial and temporal deficits observed in these patients are discussed within the context of M- or P-pathway dysfunction.

  19. The Association Between P3 Amplitude at Age 11 and Criminal Offending at Age 23

    PubMed Central

    Gao, Yu; Raine, Adrian; Venables, Peter H.; Mednick, Sarnoff A.

    2014-01-01

Reduced P3 amplitude to targets is an information-processing deficit associated with adult antisocial behavior and may reflect dysfunction of the temporal-parietal junction. This study aims to examine whether this deficit precedes criminal offending. From a birth cohort of 1,795 children, 73 individuals who became criminal offenders by age 23 and 123 noncriminal individuals were assessed on P3 amplitude. The two groups did not differ in gender, ethnicity, or social adversity. P3 amplitude was measured over the temporal-parietal junction during a visual continuous performance task at age 11, together with antisocial behavior. Criminal convictions were assessed at age 23. Reduced P3 amplitude at age 11 was associated with increased antisocial behavior at age 11. Criminal offenders showed significantly reduced P3 amplitudes to target stimuli compared to controls. Findings remained significant after controlling for antisocial behavior and hyperactivity at age 11 and alcoholism at age 23. P3 deficits at age 11 are associated with adult crime at age 23, suggesting that reduced P3 may be an early neurobiological marker for cognitive and affective processes subserved by the temporal-parietal junction that place a child at risk for adult crime. PMID:22963083

  20. The association between p3 amplitude at age 11 and criminal offending at age 23.

    PubMed

    Gao, Yu; Raine, Adrian; Venables, Peter H; Mednick, Sarnoff A

    2013-01-01

Reduced P3 amplitude to targets is an information-processing deficit associated with adult antisocial behavior and may reflect dysfunction of the temporal-parietal junction. This study aims to examine whether this deficit precedes criminal offending. From a birth cohort of 1,795 children, 73 individuals who became criminal offenders by age 23 and 123 noncriminal individuals were assessed on P3 amplitude. The two groups did not differ in gender, ethnicity, or social adversity. P3 amplitude was measured over the temporal-parietal junction during a visual continuous performance task at age 11, together with antisocial behavior. Criminal convictions were assessed at age 23. Reduced P3 amplitude at age 11 was associated with increased antisocial behavior at age 11. Criminal offenders showed significantly reduced P3 amplitudes to target stimuli compared to controls. Findings remained significant after controlling for antisocial behavior and hyperactivity at age 11 and alcoholism at age 23. P3 deficits at age 11 are associated with adult crime at age 23, suggesting that reduced P3 may be an early neurobiological marker for cognitive and affective processes subserved by the temporal-parietal junction that place a child at risk for adult crime.

  1. How can audiovisual pathways enhance the temporal resolution of time-compressed speech in blind subjects?

    PubMed

    Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann

    2013-01-01

In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution. For example, blind as compared to sighted subjects are more resistant to backward masking, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for "reading" texts at ultra-fast speaking rates (>16 syllables/s), exceeding by far the normal range of 6 syllables/s. A functional magnetic resonance imaging study has shown that this ability, among other brain regions, significantly covaries with BOLD responses in bilateral pulvinar, right visual cortex, and left supplementary motor area. Furthermore, magnetoencephalographic measurements revealed a particular component in right occipital cortex phase-locked to the syllable onsets of accelerated speech. In sighted people, the "bottleneck" for understanding time-compressed speech seems related to higher demands for buffering phonological material and is, presumably, linked to frontal brain structures. On the other hand, the neurophysiological correlates of functions overcoming this bottleneck seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims at binding these data together, based on early cross-modal pathways that are already known from various audiovisual experiments on cross-modal adjustments during space, time, and object recognition.

  2. How can audiovisual pathways enhance the temporal resolution of time-compressed speech in blind subjects?

    PubMed Central

    Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann

    2013-01-01

In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution. For example, blind as compared to sighted subjects are more resistant to backward masking, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for “reading” texts at ultra-fast speaking rates (>16 syllables/s), exceeding by far the normal range of 6 syllables/s. A functional magnetic resonance imaging study has shown that this ability, among other brain regions, significantly covaries with BOLD responses in bilateral pulvinar, right visual cortex, and left supplementary motor area. Furthermore, magnetoencephalographic measurements revealed a particular component in right occipital cortex phase-locked to the syllable onsets of accelerated speech. In sighted people, the “bottleneck” for understanding time-compressed speech seems related to higher demands for buffering phonological material and is, presumably, linked to frontal brain structures. On the other hand, the neurophysiological correlates of functions overcoming this bottleneck seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims at binding these data together, based on early cross-modal pathways that are already known from various audiovisual experiments on cross-modal adjustments during space, time, and object recognition. PMID:23966968

  3. The iconography of mourning and its neural correlates: a functional neuroimaging study

    PubMed Central

    Labek, Karin; Berger, Samantha; Buchheim, Anna; Bosch, Julia; Spohrs, Jennifer; Dommes, Lisa; Beschoner, Petra; Stingl, Julia C.

    2017-01-01

    Abstract The present functional neuroimaging study focuses on the iconography of mourning. A culture-specific pattern of body postures of mourning individuals, mostly suggesting withdrawal, emerged from a survey of visual material. When used in different combinations in stylized drawings in our neuroimaging study, this material activated cortical areas commonly seen in studies of social cognition (temporo-parietal junction, superior temporal gyrus, and inferior temporal lobe), empathy for pain (somatosensory cortex), and loss (precuneus, middle/posterior cingulate gyrus). This pattern of activation developed over time. While in the early phases of exposure lower association areas, such as the extrastriate body area, were active, in the late phases activation in parietal and temporal association areas and the prefrontal cortex was more prominent. These findings are consistent with the conventional and contextual character of iconographic material, and further differentiate it from emotionally negatively valenced and high-arousing stimuli. In future studies, this neuroimaging assay may be useful in characterizing interpretive appraisal of material of negative emotional valence. PMID:28449116

  4. Ageing diminishes the modulation of human brain responses to visual food cues by meal ingestion.

    PubMed

    Cheah, Y S; Lee, S; Ashoor, G; Nathan, Y; Reed, L J; Zelaya, F O; Brammer, M J; Amiel, S A

    2014-09-01

    Rates of obesity are greatest in middle age. Obesity is associated with altered activity of brain networks sensing food-related stimuli and internal signals of energy balance, which modulate eating behaviour. The impact of healthy mid-life ageing on these processes has not been characterised. We therefore aimed to investigate changes in brain responses to food cues, and the modulatory effect of meal ingestion on such evoked neural activity, from young adulthood to middle age. Twenty-four healthy, right-handed subjects, aged 19.5-52.6 years, were studied on separate days after an overnight fast, randomly receiving 50 ml water or a 554 kcal mixed meal before functional brain magnetic resonance imaging while viewing visual food cues. Across the group, meal ingestion reduced food cue-evoked activity of amygdala, putamen, insula and thalamus, and increased activity in precuneus and bilateral parietal cortex. Corrected for body mass index, ageing was associated with decreasing food cue-evoked activation of right dorsolateral prefrontal cortex (DLPFC) and precuneus, and increasing activation of left ventrolateral prefrontal cortex (VLPFC), bilateral temporal lobe and posterior cingulate in the fasted state. Ageing was also positively associated with the difference in food cue-evoked activation between fed and fasted states in the right DLPFC, bilateral amygdala and striatum, and negatively associated with that of the left orbitofrontal cortex and VLPFC, superior frontal gyrus, left middle and temporal gyri, posterior cingulate and precuneus. There was an overall tendency towards decreasing modulatory effects of prior meal ingestion on food cue-evoked regional brain activity with increasing age. Healthy ageing to middle age is associated with diminishing sensitivity to meal ingestion of visual food cue-evoked activity in brain regions that represent the salience of food and direct food-associated behaviour. Reduced satiety sensing may have a role in the greater risk of obesity in middle age.

  5. Neck/shoulder discomfort due to visually demanding experimental near work is influenced by previous neck pain, task duration, astigmatism, internal eye discomfort and accommodation

    PubMed Central

    Zetterberg, Camilla; Forsman, Mikael; Richter, Hans O.

    2017-01-01

    Visually demanding near work can cause eye discomfort, and eye and neck/shoulder discomfort during, e.g., computer work are associated. To investigate direct effects of experimental near work on eye and neck/shoulder discomfort, 33 individuals with chronic neck pain and 33 healthy control subjects performed a visual task four times using four different trial lenses (referred to as four different viewing conditions), and they rated eye and neck/shoulder discomfort at baseline and after each task. Since symptoms of eye discomfort may differ depending on the underlying cause, two categories were used; internal eye discomfort, such as ache and strain, that may be caused by accommodative or vergence stress; and external eye discomfort, such as burning and smarting, that may be caused by dry-eye disorders. The cumulative performance time (reflected in the temporal order of the tasks), astigmatism, accommodation response and concurrent symptoms of internal eye discomfort all aggravated neck/shoulder discomfort, but there was no significant effect of external eye discomfort. There was also an interaction effect between the temporal order and internal eye discomfort: participants with a greater mean increase in internal eye discomfort also developed more neck/shoulder discomfort with time. Since moderate musculoskeletal symptoms are a risk factor for more severe symptoms, it is important to ensure a good visual environment in occupations involving visually demanding near work. PMID:28832612

  6. Dynamics of normalization underlying masking in human visual cortex.

    PubMed

    Tsai, Jeffrey J; Wade, Alex R; Norcia, Anthony M

    2012-02-22

    Stimulus visibility can be reduced by other stimuli that overlap the same region of visual space, a process known as masking. Here we studied the neural mechanisms of masking in humans using source-imaged steady-state visual evoked potentials and frequency-domain analysis over a wide range of relative stimulus strengths of test and mask stimuli. Test and mask stimuli were tagged with distinct temporal frequencies, and we quantified spectral response components associated with the individual stimuli (self terms) and responses due to interaction between stimuli (intermodulation terms). In early visual cortex, masking alters the self terms in a manner consistent with a reduction of input contrast. We also identify a novel signature of masking: a robust intermodulation term that peaks when the test and mask stimuli have equal contrast and disappears when they are widely different. We fit all of our data simultaneously with a family of divisive gain control models that differed only in their dynamics. Models with either very short or very long temporal integration constants for the gain pool performed worse than a model with an integration time of ∼30 ms. Finally, the absolute magnitudes of the response were controlled by the ratio of the stimulus contrasts, not their absolute values. This contrast-contrast invariance suggests that many neurons in early visual cortex code relative rather than absolute contrast. Together, these results provide a more complete description of masking within the normalization framework of contrast gain control and suggest that contrast normalization accomplishes multiple functional goals.
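The divisive gain-control framework described in this abstract can be sketched numerically. The toy model below is an illustration under stated assumptions, not the authors' fitted model: the exponent, the semi-saturation constant, and the product form of the intermodulation-like term are generic choices, and the ∼30 ms gain-pool dynamics are omitted entirely.

```python
# Toy divisive normalization model of masking (static; no dynamics).
# All parameter values (n, sigma) are illustrative assumptions.

def self_term(c_test, c_mask, n=2.0, sigma=0.01):
    """Response to the test stimulus, divisively normalized by the mask."""
    return c_test**n / (c_test**n + c_mask**n + sigma**n)

def intermod_term(c_test, c_mask, n=2.0, sigma=0.01):
    """Generic stand-in for an intermodulation term: driven by the product
    of the two inputs, so it peaks when the contrasts are equal and shrinks
    when they are widely different."""
    return (c_test * c_mask) ** (n / 2) / (c_test**n + c_mask**n + sigma**n)

# Contrast-contrast invariance: well above threshold, doubling both
# contrasts (same ratio) leaves the response nearly unchanged.
assert abs(self_term(0.2, 0.1) - self_term(0.4, 0.2)) < 0.01

# The intermodulation-like term is largest at equal contrast.
assert intermod_term(0.2, 0.2) > intermod_term(0.2, 0.05)
assert intermod_term(0.2, 0.2) > intermod_term(0.05, 0.2)
```

The invariance follows directly from the division: once both contrasts dominate sigma, scaling them together cancels in numerator and denominator.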

  7. Neck/shoulder discomfort due to visually demanding experimental near work is influenced by previous neck pain, task duration, astigmatism, internal eye discomfort and accommodation.

    PubMed

    Zetterberg, Camilla; Forsman, Mikael; Richter, Hans O

    2017-01-01

    Visually demanding near work can cause eye discomfort, and eye and neck/shoulder discomfort during, e.g., computer work are associated. To investigate direct effects of experimental near work on eye and neck/shoulder discomfort, 33 individuals with chronic neck pain and 33 healthy control subjects performed a visual task four times using four different trial lenses (referred to as four different viewing conditions), and they rated eye and neck/shoulder discomfort at baseline and after each task. Since symptoms of eye discomfort may differ depending on the underlying cause, two categories were used; internal eye discomfort, such as ache and strain, that may be caused by accommodative or vergence stress; and external eye discomfort, such as burning and smarting, that may be caused by dry-eye disorders. The cumulative performance time (reflected in the temporal order of the tasks), astigmatism, accommodation response and concurrent symptoms of internal eye discomfort all aggravated neck/shoulder discomfort, but there was no significant effect of external eye discomfort. There was also an interaction effect between the temporal order and internal eye discomfort: participants with a greater mean increase in internal eye discomfort also developed more neck/shoulder discomfort with time. Since moderate musculoskeletal symptoms are a risk factor for more severe symptoms, it is important to ensure a good visual environment in occupations involving visually demanding near work.

  8. A computational model of the development of separate representations of facial identity and expression in the primate visual system.

    PubMed

    Tromans, James Matthew; Harris, Mitchell; Stringer, Simon Maitland

    2011-01-01

    Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
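The SOM learning rule at the heart of each layer of a model like VisNet can be sketched in a few lines. This toy is not the paper's four-layer hierarchy; the grid size, learning rate, and neighbourhood width are illustrative assumptions, and only a single update step is shown.

```python
import numpy as np

# Minimal single-step SOM sketch: find the best-matching unit (BMU) for an
# input, then pull it and its grid neighbours toward the input with a
# Gaussian neighbourhood. All parameters are illustrative choices.

rng = np.random.default_rng(0)
n_units, dim = 16, 6            # 16 map units on a 1-D grid, 6-D inputs
W = rng.random((n_units, dim))  # weight vector per unit
grid = np.arange(n_units)

def som_step(W, x, lr=0.2, width=2.0):
    """One SOM update; returns the new weights and the BMU index."""
    bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))
    h = np.exp(-((grid - bmu) ** 2) / (2 * width**2))  # neighbourhood gain
    return W + lr * h[:, None] * (x - W), bmu

x = rng.random(dim)
W2, bmu = som_step(W, x)
# After the update, the BMU's weight vector lies strictly closer to the input.
assert np.linalg.norm(W2[bmu] - x) < np.linalg.norm(W[bmu] - x)
```

Repeated over many stimuli, updates of this form let nearby units come to respond to statistically related inputs, which is the property the authors exploit to separate independent factors such as identity and expression.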

  9. Higher order visual input to the mushroom bodies in the bee, Bombus impatiens.

    PubMed

    Paulk, Angelique C; Gronenberg, Wulfila

    2008-11-01

    To produce appropriate behaviors based on biologically relevant associations, sensory pathways conveying different modalities are integrated by higher-order central brain structures, such as insect mushroom bodies. To address this function of sensory integration, we characterized the structure and response of optic lobe (OL) neurons projecting to the calyces of the mushroom bodies in bees. Bees are well known for their visual learning and memory capabilities and their brains possess major direct visual input from the optic lobes to the mushroom bodies. To functionally characterize these visual inputs to the mushroom bodies, we recorded intracellularly from neurons in bumblebees (Apidae: Bombus impatiens) and a single neuron in a honeybee (Apidae: Apis mellifera) while presenting color and motion stimuli. All of the mushroom body input neurons were color sensitive while a subset was motion sensitive. Additionally, most of the mushroom body input neurons would respond to the first, but not to subsequent, presentations of repeated stimuli. In general, the medulla or lobula neurons projecting to the calyx signaled specific chromatic, temporal, and motion features of the visual world to the mushroom bodies, which included sensory information required for the biologically relevant associations bees form during foraging tasks.

  10. Sensory Temporal Processing in Adults with Early Hearing Loss

    ERIC Educational Resources Information Center

    Heming, Joanne E.; Brown, Lenora N.

    2005-01-01

    This study examined tactile and visual temporal processing in adults with early loss of hearing. The tactile task consisted of punctate stimulations that were delivered to one or both hands by a mechanical tactile stimulator. Pairs of light emitting diodes were presented on a display for visual stimulation. Responses consisted of YES or NO…

  11. Superior Temporal Activation in Response to Dynamic Audio-Visual Emotional Cues

    ERIC Educational Resources Information Center

    Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.

    2009-01-01

    Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…

  12. Concept cells through associative learning of high-level representations.

    PubMed

    Reddy, Leila; Thorpe, Simon J

    2014-10-22

    In this issue of Neuron, Quian Quiroga et al. (2014) show that neurons in the human medial temporal lobe (MTL) follow subjects' perceptual states rather than the features of the visual input. Patients with MTL damage, however, have intact perceptual abilities but suffer instead from extreme forgetfulness. Thus, the reported MTL neurons could create new memories of the current perceptual state.

  13. Brain and bone abnormalities of thanatophoric dwarfism.

    PubMed

    Miller, Elka; Blaser, Susan; Shannon, Patrick; Widjaja, Elysa

    2009-01-01

    The purpose of this article is to present the imaging findings of skeletal and brain abnormalities in thanatophoric dwarfism, a lethal form of dysplastic dwarfism. The bony abnormalities associated with thanatophoric dwarfism include marked shortening of the tubular bones and ribs. Abnormal temporal lobe development is a common associated feature and can be visualized as early as the second trimester. It is important to assess the brains of fetuses with suspected thanatophoric dwarfism because the presence of associated brain malformations can assist in the antenatal diagnosis of thanatophoric dwarfism.

  14. Lateralization of the human mirror neuron system.

    PubMed

    Aziz-Zadeh, Lisa; Koski, Lisa; Zaidel, Eran; Mazziotta, John; Iacoboni, Marco

    2006-03-15

    A cortical network consisting of the inferior frontal, rostral inferior parietal, and posterior superior temporal cortices has been implicated in representing actions in the primate brain and is critical to imitation in humans. This neural circuitry may be an evolutionary precursor of neural systems associated with language. However, language is predominantly lateralized to the left hemisphere, whereas the degree of lateralization of the imitation circuitry in humans is unclear. We conducted a functional magnetic resonance imaging study of imitation of finger movements with lateralized stimuli and responses. During imitation, activity in the inferior frontal and rostral inferior parietal cortex, although fairly bilateral, was stronger in the hemisphere ipsilateral to the visual stimulus and response hand. This ipsilateral pattern is at variance with the typical contralateral activity of primary visual and motor areas. Reliably increased signal in the right superior temporal sulcus (STS) was observed for both left-sided and right-sided imitation tasks, although subthreshold activity was also observed in the left STS. Overall, the data indicate that visual and motor components of the human mirror system are not left-lateralized. The left hemisphere superiority for language, then, must have been favored by other types of language precursors, perhaps auditory or multimodal action representations.

  15. Spatial gradient for unique-feature detection in patients with unilateral neglect: evidence from auditory and visual search.

    PubMed

    Eramudugolla, Ranmalee; Mattingley, Jason B

    2008-01-01

    Patients with unilateral spatial neglect following right hemisphere damage are impaired in detecting contralesional targets in both visual and haptic search tasks, and often show a graded improvement in detection performance for more ipsilesional spatial locations. In audition, multiple simultaneous sounds are most effectively perceived if they are distributed along the frequency dimension. Thus, attention to spectro-temporal features alone can allow detection of a target sound amongst multiple simultaneous distracter sounds, regardless of whether these sounds are spatially separated. Spatial bias in attention associated with neglect should not affect auditory search based on spectro-temporal features of a sound target. We report that a right brain damaged patient with neglect demonstrated a significant gradient favouring the ipsilesional side on a visual search task as well as an auditory search task in which the target was a frequency modulated tone amongst steady distractor tones. No such asymmetry was apparent in the auditory search performance of a control patient with a right hemisphere lesion but no neglect. The results suggest that the spatial bias in attention exhibited by neglect patients affects stimulus processing even when spatial information is irrelevant to the task.

  16. It's about time: revisiting temporal processing deficits in dyslexia.

    PubMed

    Casini, Laurence; Pech-Georgel, Catherine; Ziegler, Johannes C

    2018-03-01

    Temporal processing in French children with dyslexia was evaluated in three tasks: a word identification task requiring implicit temporal processing, and two explicit temporal bisection tasks, one in the auditory and one in the visual modality. Normally developing children matched on chronological age and reading level served as a control group. Children with dyslexia exhibited robust deficits in temporal tasks whether they were explicit or implicit and whether they involved the auditory or the visual modality. First, they presented larger perceptual variability when performing temporal tasks, whereas they showed no such difficulties when performing the same task on a non-temporal dimension (intensity). This dissociation suggests that their difficulties were specific to temporal processing and could not be attributed to lapses of attention, reduced alertness, faulty anchoring, or overall noisy processing. In the framework of cognitive models of time perception, these data point to a dysfunction of the 'internal clock' of dyslexic children. These results are broadly compatible with the recent temporal sampling theory of dyslexia. © 2017 John Wiley & Sons Ltd.

  17. Attention versus consciousness in the visual brain: differences in conception, phenomenology, behavior, neuroanatomy, and physiology.

    PubMed

    Baars, B J

    1999-07-01

    A common confound between consciousness and attention makes it difficult to think clearly about recent advances in the understanding of the visual brain. Visual consciousness involves phenomenal experience of the visual world, but visual attention is more plausibly treated as a function that selects and maintains the selection of potential conscious contents, often unconsciously. In the same sense, eye movements select conscious visual events, which are not the same as conscious visual experience. According to common sense, visual experience is consciousness, and selective processes are labeled as attention. The distinction is reflected in very different behavioral measures and in very different brain anatomy and physiology. Visual consciousness tends to be associated with the "what" stream of visual feature neurons in the ventral temporal lobe. In contrast, attentional selection and maintenance are mediated by other brain regions, ranging from superior colliculi to thalamus, prefrontal cortex, and anterior cingulate. The author applied the common-sense distinction between attention and consciousness to the theoretical positions of M. I. Posner (1992, 1994) and D. LaBerge (1997, 1998) to show how it helps to clarify the evidence. He concluded that clarity of thought is served by calling a thing by its proper name.

  18. Children's predictions of future perceptual experiences: Temporal reasoning and phenomenology.

    PubMed

    Burns, Patrick; Russell, James

    2016-11-01

    We investigated the development and cognitive correlates of envisioning future experiences in 3.5- to 6.5-year-old children across 2 experiments, both of which involved toy trains traveling along a track. In the first, children were asked to predict the direction of train travel and color of train side, as it would be seen through an arch. Children below 5 years typically failed the task, while performance on it was associated with performance on a "before/after" comprehension task in which order-of-mention in a sentence had to be mapped to a video of 2 actions (after McCormack & Hanley, 2011). In the second train task children were asked to predict the content of a doll's visual experience at the terminal point of a train's transit, based on the tint of the doll's spectacles and the direction of travel (toward or away). Again, success under 5 years of age was very rare and performance was associated with performance on the before/after task. This time there was a strong association with mental rotation skill. We conclude that the consistent association with before/after reasoning suggests that future-envisioning depends upon certain temporal-order concepts being in place. The inconsistent association with mental rotation suggests that envisioning can be achieved phenomenologically, but that its role is only explicit when the question explicitly concerns the content of a visual field. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. Disturbance of visual search by stimulating to posterior parietal cortex in the brain using transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo

    2009-04-01

    In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspect of the functional processing of visual attention. Although it has been known that the right posterior parietal cortex (PPC) in the brain has a role in certain visual search tasks, there is little knowledge about the temporal aspect of this area. Three visual search tasks of differing difficulty were carried out: the "easy feature task," the "hard feature task," and the "conjunction task." To investigate the temporal aspect of the PPC involved in visual search, we applied various stimulus onset asynchronies (SOAs) and measured the reaction time of the visual search. The magnetic stimulation was applied to the right PPC or the left PPC by a figure-eight coil. The results show that the reaction times of the hard feature task are longer than those of the easy feature task. When SOA=150 ms, compared with the no-TMS condition, there was a significant increase in target-present reaction time when TMS pulses were applied. We considered that the right PPC was involved in the visual search at about SOA=150 ms after visual stimulus presentation. The magnetic stimulation to the right PPC disturbed the processing of the visual search, whereas stimulation to the left PPC had no effect on it.

  20. The acute effects of cocoa flavanols on temporal and spatial attention.

    PubMed

    Karabay, Aytaç; Saija, Jefta D; Field, David T; Akyürek, Elkan G

    2018-05-01

    In this study, we investigated how the acute physiological effects of cocoa flavanols might result in specific cognitive changes, in particular in temporal and spatial attention. To this end, we pre-registered and implemented a randomized, double-blind, placebo- and baseline-controlled crossover design. A sample of 48 university students participated in the study and each of them completed the experimental tasks in four conditions (baseline, placebo, low-dose, and high-dose flavanol), administered in separate sessions with a 1-week washout interval. A rapid serial visual presentation task was used to test flavanol effects on temporal attention and integration, and a visual search task was similarly employed to investigate spatial attention. Results indicated that cocoa flavanols improved visual search efficiency, reflected by reduced reaction time. However, cocoa flavanols did not facilitate temporal attention or integration, suggesting that flavanols may affect some aspects of attention, but not others. Potential underlying mechanisms are discussed.
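For context, visual search efficiency is conventionally quantified as the slope of reaction time against display set size, with a shallower slope indicating a more efficient search. A minimal sketch of that computation follows; the numbers are made up for illustration and are not data from the study.

```python
# Search efficiency as the least-squares slope of RT (ms) vs. set size
# (items). The RT values below are invented for illustration only.

def search_slope(set_sizes, rts):
    """Least-squares slope of reaction time against set size (ms/item)."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

sizes = [4, 8, 16, 32]
condition_a = [520, 600, 760, 1080]   # ~20 ms/item (less efficient)
condition_b = [510, 570, 690, 930]    # ~15 ms/item (more efficient)

assert search_slope(sizes, condition_a) > search_slope(sizes, condition_b)
```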

  1. Morphable Word Clouds for Time-Varying Text Data Visualization.

    PubMed

    Chi, Ming-Te; Lin, Shih-Syun; Chen, Shiang-Yi; Lin, Chao-Hung; Lee, Tong-Yee

    2015-12-01

    A word cloud is a visual representation of a collection of text documents that uses various font sizes, colors, and spaces to arrange and depict significant words. The majority of previous studies on time-varying word clouds focus on layout optimization and temporal trend visualization. However, they do not fully consider the spatial shapes and temporal motions of word clouds, which are important factors for attracting people's attention and are also important cues for human visual systems in capturing information from time-varying text data. This paper presents a novel method that uses rigid body dynamics to arrange multi-temporal word-tags in a specific shape sequence under various constraints. Each word-tag is regarded as a rigid body in dynamics. With the aid of geometric, aesthetic, and temporal coherence constraints, the proposed method can generate a temporally morphable word cloud that not only arranges word-tags in their corresponding shapes but also smoothly transforms the shapes of word clouds over time, thus yielding a pleasing time-varying visualization. Using the proposed frame-by-frame and morphable word clouds, people can observe the overall story of time-varying text data from the shape transition, and people can also observe the details from the word clouds in frames. Experimental results on various data demonstrate the feasibility and flexibility of the proposed method in morphable word cloud generation. In addition, an application that uses the proposed word clouds in a simulated exhibition demonstrates the usefulness of the proposed method.
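One small ingredient of any dynamics-based word-cloud layout is resolving overlaps between word bounding boxes. The sketch below shows only that ingredient, under simplifying assumptions (axis-aligned boxes, pairwise pushes along the axis of least penetration); the paper's actual method uses full rigid-body dynamics with geometric, aesthetic, and temporal coherence constraints.

```python
# Toy overlap resolution for word bounding boxes (x, y, w, h).
# Illustrative only; not the paper's rigid-body layout engine.

def overlap(a, b):
    """Overlap (ox, oy) of two axis-aligned boxes; a component is 0 if the
    boxes do not intersect along that axis."""
    ox = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    oy = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
    return max(ox, 0.0), max(oy, 0.0)

def separate(boxes, max_iter=50, eps=1e-3):
    """Push overlapping box pairs apart along the axis of least penetration
    until no pair overlaps (or max_iter is reached)."""
    boxes = [list(b) for b in boxes]
    for _ in range(max_iter):
        moved = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                ox, oy = overlap(boxes[i], boxes[j])
                if ox > 0 and oy > 0:
                    moved = True
                    axis = 0 if ox < oy else 1       # least penetration
                    s = ((ox if axis == 0 else oy) + eps) / 2
                    if boxes[i][axis] <= boxes[j][axis]:
                        boxes[i][axis] -= s; boxes[j][axis] += s
                    else:
                        boxes[i][axis] += s; boxes[j][axis] -= s
        if not moved:
            break
    return boxes

words = [(0, 0, 4, 2), (3, 0, 4, 2), (10, 10, 3, 1)]  # first two collide
laid_out = separate(words)
for i in range(len(laid_out)):
    for j in range(i + 1, len(laid_out)):
        ox, oy = overlap(laid_out[i], laid_out[j])
        assert ox == 0 or oy == 0   # no pair overlaps any more
```

A real layout would add forces pulling tags back toward a target shape, which is what keeps the cloud's silhouette while overlaps are resolved.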

  2. Peripheral resolution and contrast sensitivity: Effects of stimulus drift.

    PubMed

    Venkataraman, Abinaya Priya; Lewis, Peter; Unsbo, Peter; Lundström, Linda

    2017-04-01

    Optimal temporal modulation of the stimulus can improve foveal contrast sensitivity. This study evaluates the characteristics of the peripheral spatiotemporal contrast sensitivity function in normal-sighted subjects. The purpose is to identify a temporal modulation that can potentially improve the remaining peripheral visual function in subjects with central visual field loss. High-contrast resolution cut-off for grating stimuli with four temporal frequencies (0, 5, 10 and 15 Hz drift) was first evaluated in the 10° nasal visual field. Resolution contrast sensitivity for all temporal frequencies was then measured at four spatial frequencies between 0.5 cycles per degree (cpd) and the measured stationary cut-off. All measurements were performed with eccentric optical correction. Similar to foveal vision, peripheral contrast sensitivity is highest for a combination of low spatial frequency and 5-10 Hz drift. At higher spatial frequencies, there was a decrease in contrast sensitivity with 15 Hz drift. Despite this decrease, the resolution cut-off did not vary largely between the different temporal frequencies tested. Additional measurements of contrast sensitivity at 0.5 cpd and resolution cut-off for stationary (0 Hz) and 7.5 Hz stimuli performed at 10, 15, 20 and 25° in the nasal visual field also showed the same characteristics across eccentricities. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Visual and auditory socio-cognitive perception in unilateral temporal lobe epilepsy in children and adolescents: a prospective controlled study.

    PubMed

    Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania

    2014-12-01

    A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children and adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as the comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level compared to the control group with regards to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re-education programs in children presenting with deficits in social cue processing.

  4. Role of the mouse retinal photoreceptor ribbon synapse in visual motion processing for optokinetic responses.

    PubMed

    Sugita, Yuko; Araki, Fumiyuki; Chaya, Taro; Kawano, Kenji; Furukawa, Takahisa; Miura, Kenichiro

    2015-01-01

    The ribbon synapse is a specialized synaptic structure in the retinal outer plexiform layer where visual signals are transmitted from photoreceptors to the bipolar and horizontal cells. This structure is considered important in high-efficiency signal transmission; however, its role in visual signal processing is unclear. In order to understand its role in visual processing, the present study utilized Pikachurin-null mutant mice that show improper formation of the photoreceptor ribbon synapse. We examined the initial and late phases of the optokinetic responses (OKRs). The initial phase was examined by measuring the open-loop eye velocity of the OKRs to sinusoidal grating patterns of various spatial frequencies moving at various temporal frequencies for 0.5 s. The mutant mice showed significant initial OKRs with a spatiotemporal frequency tuning (spatial frequency, 0.09 ± 0.01 cycles/°; temporal frequency, 1.87 ± 0.12 Hz) that was slightly different from the wild-type mice (spatial frequency, 0.11 ± 0.01 cycles/°; temporal frequency, 1.66 ± 0.12 Hz). The late phase of the OKRs was examined by measuring the slow phase eye velocity of the optokinetic nystagmus induced by the sinusoidal gratings of various spatiotemporal frequencies moving for 30 s. We found that the optimal spatial and temporal frequencies of the mutant mice (spatial frequency, 0.11 ± 0.02 cycles/°; temporal frequency, 0.81 ± 0.24 Hz) were both lower than those in the wild-type mice (spatial frequency, 0.15 ± 0.02 cycles/°; temporal frequency, 1.93 ± 0.62 Hz). These results suggest that the ribbon synapse modulates the spatiotemporal frequency tuning of visual processing along the ON pathway by which the late phase of OKRs is mediated.
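Optimal spatial and temporal frequencies like those reported above are typically read off a fitted tuning surface. The separable log-Gaussian surface below is a generic assumption for illustration, not the authors' fitting procedure; the peak parameters are simply plugged in from the abstract's mutant late-phase optimum (0.11 cycles/°, ~0.8 Hz).

```python
import numpy as np

# Generic separable log-Gaussian spatiotemporal tuning surface, evaluated
# on a frequency grid; the optimum is then read off as the grid argmax.
# The surface form and bandwidth are assumptions for illustration.

def log_gauss_tuning(sf, tf, sf_opt=0.11, tf_opt=0.8, bw=1.0):
    """Gain at spatial frequency `sf` (cycles/deg) and temporal `tf` (Hz),
    Gaussian in log-frequency with bandwidth `bw` octaves."""
    return np.exp(-(np.log2(sf / sf_opt) ** 2 + np.log2(tf / tf_opt) ** 2)
                  / (2 * bw ** 2))

sfs = np.geomspace(0.02, 0.6, 40)    # spatial frequencies, cycles/deg
tfs = np.geomspace(0.2, 8.0, 40)     # temporal frequencies, Hz
gain = log_gauss_tuning(sfs[:, None], tfs[None, :])
i, j = np.unravel_index(np.argmax(gain), gain.shape)

# The grid peak lands within a fifth of an octave of the built-in optimum.
assert abs(np.log2(sfs[i] / 0.11)) < 0.2
assert abs(np.log2(tfs[j] / 0.8)) < 0.2
```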

  5. Role of the Mouse Retinal Photoreceptor Ribbon Synapse in Visual Motion Processing for Optokinetic Responses

    PubMed Central

    Sugita, Yuko; Araki, Fumiyuki; Chaya, Taro; Kawano, Kenji; Furukawa, Takahisa; Miura, Kenichiro

    2015-01-01

    The ribbon synapse is a specialized synaptic structure in the retinal outer plexiform layer where visual signals are transmitted from photoreceptors to the bipolar and horizontal cells. This structure is considered important in high-efficiency signal transmission; however, its role in visual signal processing is unclear. In order to understand its role in visual processing, the present study utilized Pikachurin-null mutant mice that show improper formation of the photoreceptor ribbon synapse. We examined the initial and late phases of the optokinetic responses (OKRs). The initial phase was examined by measuring the open-loop eye velocity of the OKRs to sinusoidal grating patterns of various spatial frequencies moving at various temporal frequencies for 0.5 s. The mutant mice showed significant initial OKRs with a spatiotemporal frequency tuning (spatial frequency, 0.09 ± 0.01 cycles/°; temporal frequency, 1.87 ± 0.12 Hz) that differed slightly from that of the wild-type mice (spatial frequency, 0.11 ± 0.01 cycles/°; temporal frequency, 1.66 ± 0.12 Hz). The late phase of the OKRs was examined by measuring the slow phase eye velocity of the optokinetic nystagmus induced by the sinusoidal gratings of various spatiotemporal frequencies moving for 30 s. We found that the optimal spatial and temporal frequencies of the mutant mice (spatial frequency, 0.11 ± 0.02 cycles/°; temporal frequency, 0.81 ± 0.24 Hz) were both lower than those of the wild-type mice (spatial frequency, 0.15 ± 0.02 cycles/°; temporal frequency, 1.93 ± 0.62 Hz). These results suggest that the ribbon synapse modulates the spatiotemporal frequency tuning of visual processing along the ON pathway by which the late phase of OKRs is mediated. PMID:25955222

  6. The association between socioeconomic status and visual impairments among primary glaucoma: the results from Nationwide Korean National Health Insurance Cohort from 2004 to 2013.

    PubMed

    Sung, Haejune; Shin, Hyun Ho; Baek, Yunseng; Kim, Gyu Ah; Koh, Jae Sang; Park, Eun-Cheol; Shin, Jaeyong

    2017-08-23

    Glaucoma is one of the leading causes of permanent visual impairment in Korea, and the social costs of glaucoma are increasing. This study aimed to identify the association between socioeconomic status and visual impairment caused by primary glaucoma in Korea. It is based on a cohort study using stratified representative samples from the National Health Insurance claim data from 2002 to 2013, comprising 1,025,340 representative subjects. Target subjects were patients newly diagnosed with primary glaucoma from 2004 to 2013. We conducted a multiple logistic regression analysis depending on the occurrence of visual impairment and its temporal order relative to the glaucoma diagnosis. Among 1728 patients with primary glaucoma, those with low and middle incomes showed higher odds ratios (OR) of visual impairment than the high-income group (low income: OR = 3.42, 95% Confidence Interval (CI): 2.06-5.66; middle income: OR = 2.13, 95% CI: 1.28-3.55) in cases where the occurrence of visual impairment preceded the diagnosis of glaucoma. Glaucoma patients without a pre-existing glaucoma history before visual impairment showed a stronger association between socioeconomic status and the occurrence of visual impairment from primary glaucoma. Since glaucoma had not yet been diagnosed and recognized, the differences may have derived from disparities in awareness of glaucoma. These findings call attention to the correlation between socioeconomic factors and visual impairment from glaucoma, and raise public health needs regarding the importance of glaucoma awareness and eye screening for glaucoma, especially for those of low socioeconomic status.

  7. Visual pattern recognition based on spatio-temporal patterns of retinal ganglion cells’ activities

    PubMed Central

    Jing, Wei; Liu, Wen-Zhong; Gong, Xin-Wei; Gong, Hai-Qing

    2010-01-01

    Neural information is processed based on integrated activities of relevant neurons. Concerted population activity is one of the important ways for retinal ganglion cells to efficiently organize and process visual information. In the present study, the spike activities of bullfrog retinal ganglion cells in response to three different visual patterns (checker-board, vertical gratings and horizontal gratings) were recorded using multi-electrode arrays. A measurement of subsequence distribution discrepancy (MSDD) was applied to identify the spatio-temporal patterns of retinal ganglion cells’ activities in response to different stimulation patterns. The results show that the population activity patterns differed in response to different stimulation patterns; this difference in activity pattern remained consistently detectable even when visual adaptation occurred during repeated experimental trials. Therefore, the stimulus pattern can be reliably discriminated according to the spatio-temporal pattern of the neuronal activities calculated using the MSDD algorithm. PMID:21886670

  8. Enlarged temporal integration window in schizophrenia indicated by the double-flash illusion.

    PubMed

    Haß, Katharina; Sinke, Christopher; Reese, Tanya; Roy, Mandy; Wiswede, Daniel; Dillo, Wolfgang; Oranje, Bob; Szycik, Gregor R

    2017-03-01

    In the present study we were interested in the processing of audio-visual integration in schizophrenia compared to healthy controls. The number of sound-induced double-flash illusions served as an indicator of audio-visual integration. We expected altered integration as well as a different window of temporal integration for patients. Fifteen schizophrenia patients and 15 healthy volunteers matched for age and gender were included in this study. We used stimuli with eight different temporal delays (stimulus onset asynchronies (SOAs) of 25, 50, 75, 100, 125, 150, 200 and 300 ms) to induce a double-flash illusion. Group differences and the widths of temporal integration windows were calculated from the percentages of reported double-flash illusions. Patients showed significantly more illusions (ca. 36-44% vs. 9-16% in control subjects) for SOAs of 150-300 ms. The temporal integration window for control participants extended from SOAs of 25 to 200 ms, whereas for patients integration was found across all included temporal delays. We found no significant relationship between the number of illusions and illness severity, chlorpromazine equivalent doses, or duration of illness in patients. Our results are interpreted in favour of an enlarged temporal integration window for audio-visual stimuli in schizophrenia patients, which is consistent with previous research.

  9. Iconic memory and parietofrontal network: fMRI study using temporal integration.

    PubMed

    Saneyoshi, Ayako; Niimi, Ryosuke; Suetsugu, Tomoko; Kaminaga, Tatsuro; Yokosawa, Kazuhiko

    2011-08-03

    We investigated the neural basis of iconic memory using functional magnetic resonance imaging. The parietofrontal network of selective attention is reportedly relevant to readout from iconic memory. We adopted a temporal integration task that requires iconic memory but not selective attention. The results showed that the task activated the parietofrontal network, confirming that the network is involved in readout from iconic memory. We further tested a condition in which temporal integration was performed by visual short-term memory but not by iconic memory. However, no brain region revealed higher activation for temporal integration by iconic memory than for temporal integration by visual short-term memory. This result suggested that there is no localized brain region specialized for iconic memory per se.

  10. Neural correlates of change detection and change blindness in a working memory task.

    PubMed

    Pessoa, Luiz; Ungerleider, Leslie G

    2004-05-01

    Detecting changes in an ever-changing environment is highly advantageous, and this ability may be critical for survival. In the present study, we investigated the neural substrates of change detection in the context of a visual working memory task. Subjects maintained a sample visual stimulus in short-term memory for 6 s, and were asked to indicate whether a subsequent, test stimulus matched or did not match the original sample. To study change detection largely uncontaminated by attentional state, we compared correct change and correct no-change trials at test. Our results revealed that correctly detecting a change was associated with activation of a network comprising parietal and frontal brain regions, as well as activation of the pulvinar, cerebellum, and inferior temporal gyrus. Moreover, incorrectly reporting a change when none occurred led to a very similar pattern of activations. Finally, few regions were differentially activated by trials in which a change occurred but subjects failed to detect it (change blindness). Thus, brain activation was correlated with a subject's report of a change, instead of correlated with the physical change per se. We propose that frontal and parietal regions, possibly assisted by the cerebellum and the pulvinar, might be involved in controlling the deployment of attention to the location of a change, thereby allowing further processing of the visual stimulus. Visual processing areas, such as the inferior temporal gyrus, may be the recipients of top-down feedback from fronto-parietal regions that control the reactive deployment of attention, and thus exhibit increased activation when a change is reported (irrespective of whether it occurred or not). Whereas reporting that a change occurred, be it correctly or incorrectly, was associated with strong activation in fronto-parietal sites, change blindness appears to involve very limited territories.

  11. Impaired temporal, not just spatial, resolution in amblyopia.

    PubMed

    Spang, Karoline; Fahle, Manfred

    2009-11-01

    In amblyopia, neuronal deficits deteriorate spatial vision including visual acuity, possibly because of a lack of use-dependent fine-tuning of afferents to the visual cortex during infancy; but temporal processing may deteriorate as well. Temporal, rather than spatial, resolution was investigated in patients with amblyopia by means of a task based on time-defined figure-ground segregation. Patients had to indicate the quadrant of the visual field where a purely time-defined square appeared. The results showed a clear decrease in temporal resolution of patients' amblyopic eyes compared with the dominant eyes in this task. The extent of this decrease in figure-ground segregation based on time of motion onset only loosely correlated with the decrease in spatial resolution and spanned a smaller range than did the spatial loss. Control experiments with artificially induced blur in normal observers confirmed that the decrease in temporal resolution was not simply due to the acuity loss. Amblyopia not only decreases spatial resolution, but also temporal factors such as time-based figure-ground segregation, even at high stimulus contrasts. This finding suggests that the realm of neuronal processes that may be disturbed in amblyopia is larger than originally thought.

  12. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    PubMed

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  13. Learning temporal context shapes prestimulus alpha oscillations and improves visual discrimination performance.

    PubMed

    Toosi, Tahereh; K Tousi, Ehsan; Esteky, Hossein

    2017-08-01

    Time is an inseparable component of every physical event that we perceive, yet it is not clear how the brain processes time or how the neuronal representation of time affects our perception of events. Here we asked subjects to perform a visual discrimination task while we changed the temporal context in which the stimuli were presented. We collected electroencephalography (EEG) signals in two temporal contexts. In predictable blocks stimuli were presented after a constant delay relative to a visual cue, and in unpredictable blocks stimuli were presented after variable delays relative to the visual cue. Four subsecond delays of 83, 150, 400, and 800 ms were used in the predictable and unpredictable blocks. We observed that predictability modulated the power of prestimulus alpha oscillations in the parieto-occipital sites: alpha power increased in the 300-ms window before stimulus onset in the predictable blocks compared with the unpredictable blocks. This modulation only occurred in the longest delay period, 800 ms, in which predictability also improved the behavioral performance of the subjects. Moreover, learning the temporal context shaped the prestimulus alpha power: modulation of prestimulus alpha power grew during the predictable block and correlated with performance enhancement. These results suggest that the brain is able to learn the subsecond temporal context of stimuli and use this to enhance sensory processing. Furthermore, the neural correlate of this temporal prediction is reflected in the alpha oscillations. NEW & NOTEWORTHY It is not well understood how the uncertainty in the timing of an external event affects its processing, particularly at subsecond scales. Here we demonstrate how a predictable timing scheme improves visual processing. We found that learning the predictable scheme gradually shaped the prestimulus alpha power. 
These findings indicate that the human brain is able to extract implicit subsecond patterns in the temporal context of events. Copyright © 2017 the American Physiological Society.

  14. Interference within the focus of attention: working memory tasks reflect more than temporary maintenance.

    PubMed

    Shipstead, Zach; Engle, Randall W

    2013-01-01

    One approach to understanding working memory (WM) holds that individual differences in WM capacity arise from the amount of information a person can store in WM over short periods of time. This view is especially prevalent in WM research conducted with the visual arrays task. Within this tradition, many researchers have concluded that the average person can maintain approximately 4 items in WM. The present study challenges this interpretation by demonstrating that performance on the visual arrays task is subject to time-related factors that are associated with retrieval from long-term memory. Experiment 1 demonstrates that memory for an array does not decay as a product of absolute time, which is consistent with both maintenance- and retrieval-based explanations of visual arrays performance. Experiment 2 introduced a manipulation of temporal discriminability by varying the relative spacing of trials in time. We found that memory for a target array was significantly influenced by its temporal compression with, or isolation from, a preceding trial. Subsequent experiments extend these effects to sub-capacity set sizes and demonstrate that changes in the estimated capacity k meaningfully predict performance on other measures of WM capacity as well as on general fluid intelligence. We conclude that performance on the visual arrays task does not reflect a multi-item storage system but instead measures a person's ability to accurately retrieve information in the face of proactive interference.

  15. Pitting temporal against spatial integration in schizophrenic patients.

    PubMed

    Herzog, Michael H; Brand, Andreas

    2009-06-30

    Schizophrenic patients show strong impairments in visual backward masking, possibly caused by deficits at the early stages of visual processing. The underlying aberrant mechanisms are not clearly understood. Both spatial and temporal processing deficits have been proposed. Here, by combining a spatial with a temporal integration paradigm, we provide further evidence that temporal but not spatial processing is impaired in schizophrenic patients. Eleven schizophrenic patients and ten healthy controls were presented with sequences composed of Vernier stimuli. Patients needed significantly longer presentation times for sequentially presented Vernier stimuli to reach a performance level comparable to that of healthy controls (temporal integration deficit). When we added spatial contextual elements to some of the Vernier stimuli, performance changed in a complex but comparable manner in patients and controls (intact spatial integration). Hence, temporal but not spatial processing seems to be deficient in schizophrenia.

  16. Distinct Contributions of the Magnocellular and Parvocellular Visual Streams to Perceptual Selection

    PubMed Central

    Denison, Rachel N.; Silver, Michael A.

    2014-01-01

    During binocular rivalry, conflicting images presented to the two eyes compete for perceptual dominance, but the neural basis of this competition is disputed. In interocular switch (IOS) rivalry, rival images periodically exchanged between the two eyes generate one of two types of perceptual alternation: 1) a fast, regular alternation between the images that is time-locked to the stimulus switches and has been proposed to arise from competition at lower levels of the visual processing hierarchy, or 2) a slow, irregular alternation spanning multiple stimulus switches that has been associated with higher levels of the visual system. The existence of these two types of perceptual alternation has been influential in establishing the view that rivalry may be resolved at multiple hierarchical levels of the visual system. We varied the spatial, temporal, and luminance properties of IOS rivalry gratings and found, instead, an association between fast, regular perceptual alternations and processing by the magnocellular stream and between slow, irregular alternations and processing by the parvocellular stream. The magnocellular and parvocellular streams are two early visual pathways that are specialized for the processing of motion and form, respectively. These results provide a new framework for understanding the neural substrates of binocular rivalry that emphasizes the importance of parallel visual processing streams, and not only hierarchical organization, in the perceptual resolution of ambiguities in the visual environment. PMID:21861685

  17. Simultaneous perception of a spoken and a signed language: The brain basis of ASL-English code-blends

    PubMed Central

    Weisberg, Jill; McCullough, Stephen; Emmorey, Karen

    2018-01-01

    Code-blends (simultaneous words and signs) are a unique characteristic of bimodal bilingual communication. Using fMRI, we investigated code-blend comprehension in hearing native ASL-English bilinguals who made a semantic decision (edible?) about signs, audiovisual words, and semantically equivalent code-blends. English and ASL recruited a similar fronto-temporal network with expected modality differences: stronger activation for English in auditory regions of bilateral superior temporal cortex, and stronger activation for ASL in bilateral occipitotemporal visual regions and left parietal cortex. Code-blend comprehension elicited activity in a combination of these regions, and no cognitive control regions were additionally recruited. Furthermore, code-blends elicited reduced activation relative to ASL presented alone in bilateral prefrontal and visual extrastriate cortices, and relative to English alone in auditory association cortex. Consistent with behavioral facilitation observed during semantic decisions, the findings suggest that redundant semantic content induces more efficient neural processing in language and sensory regions during bimodal language integration. PMID:26177161

  18. Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.

    PubMed

    Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James

    2016-03-21

    Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time-which enhances the coarser and slower features of the scene at the expense of noisier finer and faster features-has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21]. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Active visual search in non-stationary scenes: coping with temporal variability and uncertainty

    NASA Astrophysics Data System (ADS)

    Ušćumlić, Marija; Blankertz, Benjamin

    2016-02-01

    Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamic of visual content can increase temporal uncertainty of the cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. 
Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamics of ocular behavior (i.e., dwell time and fixation duration), in an active search task. In addition, our method for improving single-trial detection performance in this adverse scenario is an important step toward making brain-computer interfacing technology available for human-computer interaction applications.

  20. About Hemispheric Differences in the Processing of Temporal Intervals

    ERIC Educational Resources Information Center

    Grondin, S.; Girard, C.

    2005-01-01

    The purpose of the present study was to identify differences between cerebral hemispheres for processing temporal intervals ranging from .9 to 1.4s. The intervals to be judged were marked by series of brief visual signals located in the left or the right visual field. Series of three (two standards and one comparison) or five intervals (four…

  1. Auditory/visual Duration Bisection in Patients with Left or Right Medial-Temporal Lobe Resection

    ERIC Educational Resources Information Center

    Melgire, Manuela; Ragot, Richard; Samson, Severine; Penney, Trevor B.; Meck, Warren H.; Pouthas, Viviane

    2005-01-01

    Patients with unilateral (left or right) medial temporal lobe lesions and normal control (NC) volunteers participated in two experiments, both using a duration bisection procedure. Experiment 1 assessed discrimination of auditory and visual signal durations ranging from 2 to 8 s, in the same test session. Patients and NC participants judged…

  2. Out of sight but not out of mind: the neurophysiology of iconic memory in the superior temporal sulcus.

    PubMed

    Keysers, C; Xiao, D-K; Foldiak, P; Perrett, D I

    2005-05-01

    Iconic memory, the short-lasting visual memory of a briefly flashed stimulus, is an important component of most models of visual perception. Here we investigate what physiological mechanisms underlie this capacity by showing rapid serial visual presentation (RSVP) sequences with and without interstimulus gaps to human observers and macaque monkeys. For gaps of up to 93 ms between consecutive images, human observers and neurones in the temporal cortex of macaque monkeys were found to continue processing a stimulus as if it was still present on the screen. The continued firing of neurones in temporal cortex may therefore underlie iconic memory. Based on these findings, a neurophysiological vision of iconic memory is presented.

  3. The neuropsychological and neuroradiological correlates of slowly progressive visual agnosia.

    PubMed

    Giovagnoli, Anna Rita; Aresi, Anna; Reati, Fabiola; Riva, Alice; Gobbo, Clara; Bizzi, Alberto

    2009-04-01

    The case of a 64-year-old woman affected by slowly progressive visual agnosia is reported with the aim of describing specific cognitive-brain relationships. Longitudinal clinical and neuropsychological assessment, combined with magnetic resonance imaging (MRI), spectroscopy, and positron emission tomography (PET), was used. Sequential neuropsychological evaluations performed over a period of 9 years since disease onset showed the appearance of apperceptive and associative visual agnosia, alexia without agraphia, agraphia, finger agnosia, and prosopagnosia, but excluded dementia. MRI showed moderate diffuse cortical atrophy, with predominant atrophy in the left posterior cortical areas (temporal, parietal, and lateral occipital cortical gyri). 18FDG-PET showed marked bilateral posterior cortical hypometabolism; proton magnetic resonance spectroscopic imaging disclosed severe focal N-acetyl-aspartate depletion in the left temporoparietal and lateral occipital cortical areas. In conclusion, selective metabolic alterations and neuronal loss in the left temporoparietooccipital cortex may determine progressive visual agnosia in the absence of dementia.

  4. Aversive learning shapes neuronal orientation tuning in human visual cortex.

    PubMed

    McTeague, Lisa M; Gruss, L Forest; Keil, Andreas

    2015-07-28

    The responses of sensory cortical neurons are shaped by experience. As a result perceptual biases evolve, selectively facilitating the detection and identification of sensory events that are relevant for adaptive behaviour. Here we examine the involvement of human visual cortex in the formation of learned perceptual biases. We use classical aversive conditioning to associate one out of a series of oriented gratings with a noxious sound stimulus. After as few as two grating-sound pairings, visual cortical responses to the sound-paired grating show selective amplification. Furthermore, as learning progresses, responses to the orientations with greatest similarity to the sound-paired grating are increasingly suppressed, suggesting inhibitory interactions between orientation-selective neuronal populations. Changes in cortical connectivity between occipital and fronto-temporal regions mirror the changes in visuo-cortical response amplitudes. These findings suggest that short-term behaviourally driven retuning of human visual cortical neurons involves distal top-down projections as well as local inhibitory interactions.

  5. Prolonged fasting impairs neural reactivity to visual stimulation.

    PubMed

    Kohn, N; Wassenberg, A; Toygar, T; Kellermann, T; Weidenfeld, C; Berthold-Losleben, M; Chechko, N; Orfanos, S; Vocke, S; Laoutidis, Z G; Schneider, F; Karges, W; Habel, U

    2016-01-01

    Previous literature has shown that hypoglycemia influences the intensity of the BOLD signal. A similar but smaller effect may also be elicited by low normal blood glucose levels in healthy individuals. This may not only confound the BOLD signal measured in fMRI, but also more generally interact with cognitive processing, and thus indirectly influence fMRI results. Here we show, in a placebo-controlled, crossover, double-blind study of 40 healthy subjects, that overnight fasting and low normal glucose levels, contrasted with an activated, elevated glucose condition, have an impact on brain activation during basal visual stimulation. Additionally, functional connectivity of the visual cortex shows a strengthened association with higher-order attention-related brain areas in the elevated blood glucose condition compared to the fasting condition. In the fasting state, visual brain areas show stronger coupling to the inferior temporal gyrus. Results demonstrate that prolonged overnight fasting leads to a diminished BOLD signal in higher-order occipital processing areas when compared to an elevated blood glucose condition. Additionally, functional connectivity patterns underscore the modulatory influence of fasting on visual brain networks. Patterns of brain activation and functional connectivity associated with a broad range of attentional processes are affected by maturation and aging and associated with psychiatric disease and intoxication. Thus, we conclude that prolonged fasting may decrease fMRI design sensitivity in any task involving attentional processes when fasting status or blood glucose is not controlled.

  6. Left temporal and temporoparietal brain activity depends on depth of word encoding: a magnetoencephalographic study in healthy young subjects.

    PubMed

    Walla, P; Hufnagl, B; Lindinger, G; Imhof, H; Deecke, L; Lang, W

    2001-03-01

    Using a 143-channel whole-head magnetoencephalograph (MEG) we recorded the temporal changes of brain activity from 26 healthy young subjects (14 females) related to shallow perceptual and deep semantic word encoding. During subsequent recognition tests, the subjects had to recognize the previously encoded words, which were interspersed with new words. The resulting mean memory performances across all subjects clearly mirrored the different levels of encoding. The grand averaged event-related fields (ERFs) associated with perceptual and semantic word encoding differed significantly between 200 and 550 ms after stimulus onset mainly over left superior temporal and left superior parietal sensors. Semantic encoding elicited higher brain activity than perceptual encoding. Source localization procedures revealed that neural populations of the left temporal and temporoparietal brain areas showed different activity strengths across the whole group of subjects depending on depth of word encoding. We suggest that the higher brain activity associated with deep encoding as compared to shallow encoding was due to the involvement of more neural systems during the processing of visually presented words. Deep encoding required more energy than shallow encoding but nevertheless led to better memory performance. Copyright 2001 Academic Press.

  7. A Novel Temporal Bone Simulation Model Using 3D Printing Techniques.

    PubMed

    Mowry, Sarah E; Jammal, Hachem; Myer, Charles; Solares, Clementino Arturo; Weinberger, Paul

    2015-09-01

    An inexpensive temporal bone model for use in a temporal bone dissection laboratory setting can be made using a commercially available, consumer-grade 3D printer. Several models for a simulated temporal bone have been described but use commercial-grade printers and materials to produce these models. The goal of this project was to produce a plastic simulated temporal bone on an inexpensive 3D printer that recreates the visual and haptic experience associated with drilling a human temporal bone. Images from a high-resolution CT of a normal temporal bone were converted into stereolithography files via commercially available software, with image conversion and print settings adjusted to achieve optimal print quality. The temporal bone model was printed using acrylonitrile butadiene styrene (ABS) plastic filament on a MakerBot 2x 3D printer. Simulated temporal bones were drilled by seven expert temporal bone surgeons, who assessed the fidelity of the model as compared with a human cadaveric temporal bone. Using a four-point scale, the simulated bones were assessed for haptic experience and recreation of the temporal bone anatomy. The created model was felt to be an accurate representation of a human temporal bone. All raters felt strongly that this would be a good training model for junior residents or to simulate difficult surgical anatomy. Material cost for each model was $1.92. A realistic, inexpensive, and easily reproducible temporal bone model can be created on a consumer-grade desktop 3D printer.
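
    The CT-to-model workflow above (CT volume to stereolithography file to ABS print) can be illustrated with a minimal sketch. The study used commercial conversion software; the function below is only a toy stand-in that turns an already-thresholded binary voxel volume into an ASCII STL string by emitting the exposed voxel faces:

```python
import numpy as np

# Corners of a unit cube and, for each face direction, the four corner
# indices (in cyclic order) of the face exposed in that direction.
CUBE = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                 [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]])
FACES = {(1, 0, 0): [1, 2, 6, 5], (-1, 0, 0): [0, 3, 7, 4],
         (0, 1, 0): [3, 2, 6, 7], (0, -1, 0): [0, 1, 5, 4],
         (0, 0, 1): [4, 5, 6, 7], (0, 0, -1): [0, 1, 2, 3]}

def voxels_to_ascii_stl(volume, name="bone"):
    """Emit an ASCII STL mesh with two triangles per exposed voxel face.
    Winding order is not guaranteed outward-consistent; this is an
    illustration, not printer-ready output."""
    vol = np.asarray(volume).astype(bool)
    out = [f"solid {name}"]
    for x, y, z in np.argwhere(vol):
        for off, idx in FACES.items():
            nx, ny, nz = x + off[0], y + off[1], z + off[2]
            inside = (0 <= nx < vol.shape[0] and 0 <= ny < vol.shape[1]
                      and 0 <= nz < vol.shape[2])
            if inside and vol[nx, ny, nz]:
                continue  # face shared between two voxels: interior, skip
            quad = CUBE[idx] + np.array([x, y, z])
            for tri in (quad[[0, 1, 2]], quad[[0, 2, 3]]):
                out.append(f"  facet normal {off[0]} {off[1]} {off[2]}")
                out.append("    outer loop")
                for vx, vy, vz in tri:
                    out.append(f"      vertex {vx} {vy} {vz}")
                out.append("    endloop")
                out.append("  endfacet")
    out.append(f"endsolid {name}")
    return "\n".join(out)

# A single voxel has 6 exposed faces -> 12 triangles.
cube_stl = voxels_to_ascii_stl(np.ones((1, 1, 1)))
print(cube_stl.count("facet normal"))  # 12
```

    A real pipeline would instead threshold the CT at a bone Hounsfield value and extract a smooth surface (e.g., via marching cubes) before slicing the mesh for the printer.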

  8. Audiovisual speech integration in the superior temporal region is dysfunctional in dyslexia.

    PubMed

    Ye, Zheng; Rüsseler, Jascha; Gerth, Ivonne; Münte, Thomas F

    2017-07-25

    Dyslexia is an impairment of reading and spelling that affects both children and adults even after many years of schooling. Dyslexic readers have deficits in the integration of auditory and visual inputs but the neural mechanisms of the deficits are still unclear. This fMRI study examined the neural processing of auditorily presented German numbers 0-9 and videos of lip movements of a German native speaker voicing numbers 0-9 in unimodal (auditory or visual) and bimodal (always congruent) conditions in dyslexic readers and their matched fluent readers. We confirmed results of previous studies that the superior temporal gyrus/sulcus plays a critical role in audiovisual speech integration: fluent readers showed greater superior temporal activations for combined audiovisual stimuli than auditory-/visual-only stimuli. Importantly, such an enhancement effect was absent in dyslexic readers. Moreover, the auditory network (bilateral superior temporal regions plus medial PFC) was dynamically modulated during audiovisual integration in fluent, but not in dyslexic readers. These results suggest that superior temporal dysfunction may underlie poor audiovisual speech integration in readers with dyslexia. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  9. Visual short-term memory for high resolution associations is impaired in patients with medial temporal lobe damage.

    PubMed

    Koen, Joshua D; Borders, Alyssa A; Petzold, Michael T; Yonelinas, Andrew P

    2017-02-01

    The medial temporal lobe (MTL) plays a critical role in episodic long-term memory, but whether the MTL is necessary for visual short-term memory is controversial. Some studies have indicated that MTL damage disrupts visual short-term memory performance whereas other studies have failed to find such evidence. To account for these mixed results, it has been proposed that the hippocampus is critical in supporting short-term memory for high resolution complex bindings, while the cortex is sufficient to support simple, low resolution bindings. This hypothesis was tested in the current study by assessing visual short-term memory in patients with damage to the MTL and controls for high resolution and low resolution object-location and object-color associations. In the location tests, participants encoded sets of two or four objects in different locations on the screen. After each set, participants performed a two-alternative forced-choice task in which they were required to discriminate the object in the target location from the object in a high or low resolution lure location (i.e., the object locations were very close or far away from the target location, respectively). Similarly, in the color tests, participants were presented with sets of two or four objects in a different color and, after each set, were required to discriminate the object in the target color from the object in a high or low resolution lure color (i.e., the lure color was very similar or very different, respectively, to the studied color). The patients were significantly impaired in visual short-term memory, but importantly, they were more impaired for high resolution object-location and object-color bindings. The results are consistent with the proposal that the hippocampus plays a critical role in forming and maintaining complex, high resolution bindings. © 2016 Wiley Periodicals, Inc.

  10. Differential sensory cortical involvement in auditory and visual sensorimotor temporal recalibration: Evidence from transcranial direct current stimulation (tDCS).

    PubMed

    Aytemür, Ali; Almeida, Nathalia; Lee, Kwang-Hyuk

    2017-02-01

    Adaptation to delayed sensory feedback following an action produces a subjective time compression between the action and the feedback (temporal recalibration effect, TRE). TRE is important for sensory delay compensation to maintain a relationship between causally related events. It is unclear whether TRE is a sensory modality-specific phenomenon. In 3 experiments employing a sensorimotor synchronization task, we investigated this question using cathodal transcranial direct-current stimulation (tDCS). We found that cathodal tDCS over the visual cortex, and to a lesser extent over the auditory cortex, produced decreased visual TRE. However, both auditory and visual cortex tDCS did not produce any measurable effects on auditory TRE. Our study revealed the differing nature of TRE in the auditory and visual domains. Visual-motor TRE, which is more variable than auditory TRE, is a sensory modality-specific phenomenon, modulated by the auditory cortex. The robustness of auditory-motor TRE, unaffected by tDCS, suggests the dominance of the auditory system in temporal processing, by providing a frame of reference in the realignment of sensorimotor timing signals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.

    PubMed

    Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming

    2018-05-01

    The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers, allowing spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.
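
    The architectural idea, recurrent connections placed on top of convolutional features so that spatial representations persist across frames, can be sketched in a few lines of NumPy. All names, shapes, and weights below are hypothetical toy stand-ins; the actual model was a deep CNN trained for action recognition:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(frame, kernel):
    """Toy stand-in for one CNN layer: 2-D valid cross-correlation + ReLU."""
    kh, kw = kernel.shape
    h = frame.shape[0] - kh + 1
    w = frame.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(frame[i:i+kh, j:j+kw] * kernel)
    return np.maximum(out, 0.0)

def recurrent_layer(feature_seq, w_in, w_rec):
    """Elman-style recurrence: h_t = tanh(W_in x_t + W_rec h_{t-1}).
    The hidden state lets spatial features accumulate over time."""
    h = np.zeros(w_rec.shape[0])
    states = []
    for x in feature_seq:
        h = np.tanh(w_in @ x + w_rec @ h)
        states.append(h)
    return np.array(states)

# Hypothetical tiny video: 5 frames of 6x6 pixels.
video = rng.standard_normal((5, 6, 6))
kernel = rng.standard_normal((3, 3))
feats = [conv_features(f, kernel).ravel() for f in video]  # 16-dim per frame
w_in = rng.standard_normal((8, 16)) * 0.1
w_rec = rng.standard_normal((8, 8)) * 0.1
states = recurrent_layer(feats, w_in, w_rec)
print(states.shape)  # (5, 8): one hidden state per frame
```

    Because the hidden state carries history, the response to a frame depends on the frames before it, which is the property the study exploited to model process memory.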

  12. Elevated audiovisual temporal interaction in patients with migraine without aura

    PubMed Central

    2014-01-01

    Background Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05); however, audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
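
    The response-time analysis mentioned above can be illustrated with a short sketch. The abstract does not specify the exact test, so the example below assumes the common race-model approach: compare the empirical CDF of audiovisual response times against Miller's bound, the sum of the two unimodal CDFs. All RT samples here are synthetic:

```python
import numpy as np

def ecdf(rts, grid):
    """Empirical CDF of response times evaluated on a common time grid."""
    rts = np.sort(np.asarray(rts, float))
    return np.searchsorted(rts, grid, side="right") / len(rts)

def race_model_violation(rt_a, rt_v, rt_av, grid):
    """Positive values mean the audiovisual CDF exceeds the race-model
    bound F_A(t) + F_V(t) (Miller's inequality), i.e. the speed-up is
    larger than statistical facilitation alone can explain."""
    bound = np.minimum(ecdf(rt_a, grid) + ecdf(rt_v, grid), 1.0)
    return ecdf(rt_av, grid) - bound

# Hypothetical RT samples in ms (illustration only).
rng = np.random.default_rng(1)
rt_a = rng.normal(320, 40, 200)
rt_v = rng.normal(340, 40, 200)
rt_av = rng.normal(260, 35, 200)  # faster than either unimodal condition
grid = np.linspace(150, 500, 71)
violation = race_model_violation(rt_a, rt_v, rt_av, grid)
print(round(float(violation.max()), 3))
```

    A group comparison like the one in the study would then contrast the size of this violation between migraineurs and controls.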

  13. Deja Vu in Unilateral Temporal-Lobe Epilepsy Is Associated with Selective Familiarity Impairments on Experimental Tasks of Recognition Memory

    ERIC Educational Resources Information Center

    Martin, Chris B.; Mirsattari, Seyed M.; Pruessner, Jens C.; Pietrantonio, Sandra; Burneo, Jorge G.; Hayman-Abello, Brent; Kohler, Stefan

    2012-01-01

    In deja vu, a phenomenological impression of familiarity for the current visual environment is experienced with a sense that it should in fact not feel familiar. The fleeting nature of this phenomenon in daily life, and the difficulty in developing experimental paradigms to elicit it, has hindered progress in understanding deja vu. Some…

  14. Sensitivity to timing and order in human visual cortex

    PubMed Central

    Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.

    2014-01-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116

  15. Lateralized Temporal Order Judgement in Dyslexia

    ERIC Educational Resources Information Center

    Liddle, Elizabeth B.; Jackson, Georgina M.; Rorden, Chris; Jackson, Stephen R.

    2009-01-01

    Temporal and spatial attentional deficits in dyslexia were investigated using a lateralized visual temporal order judgment (TOJ) paradigm that allowed both sensitivity to temporal order and spatial attentional bias to be measured. Findings indicate that adult participants with a positive screen for dyslexia were significantly less sensitive to the…

  16. VisGets: coordinated visualizations for web-based information exploration and discovery.

    PubMed

    Dörk, Marian; Carpendale, Sheelagh; Collins, Christopher; Williamson, Carey

    2008-01-01

    In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets--interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.

  17. Differential temporal dynamics during visual imagery and perception.

    PubMed

    Dijkstra, Nadine; Mostert, Pim; Lange, Floris P de; Bosch, Sander; van Gerven, Marcel Aj

    2018-05-29

    Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. Firstly, we show that, compared to perception, imagery decoding becomes significant later and representations at the start of imagery already overlap with later time points. This suggests that during imagery, the entire visual representation is activated at once or that there are large differences in the timing of imagery between trials. Secondly, we found consistent overlap between imagery and perceptual processing around 160 ms and from 300 ms after stimulus onset. This indicates that the N170 gets reactivated during imagery and that imagery does not rely on early perceptual representations. Together, these results provide important insights for our understanding of the neural mechanisms of visual imagery. © 2018, Dijkstra et al.

  18. Synchronization to auditory and visual rhythms in hearing and deaf individuals

    PubMed Central

    Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen

    2014-01-01

    A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395

  19. Linguistic processing in visual and modality-nonspecific brain areas: PET recordings during selective attention.

    PubMed

    Vorobyev, Victor A; Alho, Kimmo; Medvedev, Svyatoslav V; Pakhomov, Sergey V; Roudas, Marina S; Rutkovskaya, Julia M; Tervaniemi, Mari; Van Zuijen, Titia L; Näätänen, Risto

    2004-07-01

    Positron emission tomography (PET) was used to investigate the neural basis of selective processing of linguistic material during concurrent presentation of multiple stimulus streams ("cocktail-party effect"). Fifteen healthy right-handed adult males were to attend to one of three simultaneously presented messages: one presented visually, one to the left ear, and one to the right ear. During the control condition, subjects attended to visually presented consonant letter strings and ignored auditory messages. This paper reports the modality-nonspecific language processing and visual word-form processing, whereas the auditory attention effects have been reported elsewhere [Cogn. Brain Res. 17 (2003) 201]. The left-hemisphere areas activated by both the selective processing of text and speech were as follows: the inferior prefrontal (Brodmann's area, BA 45, 47), anterior temporal (BA 38), posterior insular (BA 13), inferior (BA 20) and middle temporal (BA 21), occipital (BA 18/30) cortices, the caudate nucleus, and the amygdala. In addition, bilateral activations were observed in the medial occipito-temporal cortex and the cerebellum. Decreases of activation during both text and speech processing were found in the parietal (BA 7, 40), frontal (BA 6, 8, 44) and occipito-temporal (BA 37) regions of the right hemisphere. Furthermore, the present data suggest that the left occipito-temporal cortex (BA 18, 20, 37, 21) can be subdivided into three functionally distinct regions in the posterior-anterior direction on the basis of their activation during attentive processing of sublexical orthography, visual word form, and supramodal higher-level aspects of language.

  20. The edge of awareness: Mask spatial density, but not color, determines optimal temporal frequency for continuous flash suppression.

    PubMed

    Drewes, Jan; Zhu, Weina; Melcher, David

    2018-01-01

    The study of how visual processing functions in the absence of visual awareness has become a major research interest in the vision-science community. One of the main sources of evidence that stimuli that do not reach conscious awareness-and are thus "invisible"-are still processed to some degree by the visual system comes from studies using continuous flash suppression (CFS). Why and how CFS works may provide more general insight into how stimuli access awareness. As spatial and temporal properties of stimuli are major determinants of visual perception, we hypothesized that these properties of the CFS masks would be of significant importance to the achieved suppression depth. In previous studies, however, the spatial and temporal properties of the masks themselves have received little attention, and masking parameters vary widely across studies, making meta-comparison difficult. To investigate the factors that determine the effectiveness of CFS, we varied both the temporal frequency and the spatial density of Mondrian-style masks. We consistently found the longest suppression duration for a mask temporal frequency of around 6 Hz. In trials using masks with reduced spatial density, suppression was weaker and frequency tuning was less precise. In contrast, removing color reduced mask effectiveness but did not change the pattern of suppression strength as a function of frequency. Overall, this pattern of results stresses the importance of CFS mask parameters and is consistent with the idea that CFS works by disrupting the spatiotemporal mechanisms that underlie conscious access to visual input.
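
    A Mondrian-mask sequence with the two manipulated parameters, spatial density (patches per mask) and temporal frequency (new masks per second), can be sketched as follows. Sizes, counts, and the 60 Hz refresh rate are illustrative assumptions, not the study's actual display parameters:

```python
import numpy as np

def mondrian_mask(size, n_rects, rng, grayscale=False):
    """One Mondrian mask: n_rects random rectangles of random color.
    `n_rects` controls spatial density; `grayscale` mimics the
    color-removal manipulation."""
    channels = 1 if grayscale else 3
    img = np.ones((size, size, channels))
    for _ in range(n_rects):
        w = rng.integers(size // 10, size // 2)
        h = rng.integers(size // 10, size // 2)
        x = rng.integers(0, size - w)
        y = rng.integers(0, size - h)
        img[y:y+h, x:x+w] = rng.random(channels)
    return img

def mask_sequence(duration_s, mask_hz, refresh_hz, size, n_rects, seed=0):
    """Flash a new mask at `mask_hz` on a display running at `refresh_hz`:
    each mask is simply held for refresh_hz / mask_hz frames."""
    rng = np.random.default_rng(seed)
    frames_per_mask = int(refresh_hz // mask_hz)
    n_masks = int(duration_s * mask_hz)
    seq = []
    for _ in range(n_masks):
        m = mondrian_mask(size, n_rects, rng)
        seq.extend([m] * frames_per_mask)
    return np.array(seq)

# 1 s of masking at the ~6 Hz optimum reported above, on a 60 Hz display.
seq = mask_sequence(1.0, 6, 60, size=64, n_rects=30)
print(seq.shape)  # (60, 64, 64, 3)
```

    Lowering `n_rects` or switching `grayscale` on reproduces, in spirit, the density and color manipulations whose effects on suppression the study measured.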

  1. Temporal and spatial tuning of dorsal lateral geniculate nucleus neurons in unanesthetized rats

    PubMed Central

    Sriram, Balaji; Meier, Philip M.

    2016-01-01

    Visual response properties of neurons in the dorsolateral geniculate nucleus (dLGN) have been well described in several species, but not in rats. Analysis of responses from the unanesthetized rat dLGN will be needed to develop quantitative models that account for visual behavior of rats. We recorded visual responses from 130 single units in the dLGN of 7 unanesthetized rats. We report the response amplitudes, temporal frequency, and spatial frequency sensitivities in this population of cells. In response to 2-Hz visual stimulation, dLGN cells fired 15.9 ± 11.4 spikes/s (mean ± SD) modulated by 10.7 ± 8.4 spikes/s about the mean. The optimal temporal frequency for full-field stimulation ranged from 5.8 to 19.6 Hz across cells. The temporal high-frequency cutoff ranged from 11.7 to 33.6 Hz. Some cells responded best to low temporal frequency stimulation (low pass), and others were strictly bandpass; most cells fell between these extremes. At 2- to 4-Hz temporal modulation, the spatial frequency of drifting grating that drove cells best ranged from 0.008 to 0.18 cycles per degree (cpd) across cells. The high-frequency cutoff ranged from 0.01 to 1.07 cpd across cells. The majority of cells were driven best by the lowest spatial frequency tested, but many were partially or strictly bandpass. We conclude that single units in the rat dLGN can respond vigorously to temporal modulation up to at least 30 Hz and spatial detail up to 1 cpd. Tuning properties were heterogeneous, but each fell along a continuum; we found no obvious clustering into discrete cell types along these dimensions. PMID:26936980

  2. Analysis of the effect of repeated-pulse transcranial magnetic stimulation at the Guangming point on electroencephalograms.

    PubMed

    Zhang, Xin; Fu, Lingdi; Geng, Yuehua; Zhai, Xiang; Liu, Yanhua

    2014-03-01

    Here, we administered repeated-pulse transcranial magnetic stimulation to healthy people at the left Guangming point (GB37) and a mock point, and calculated the sample entropy of electroencephalogram signals using nonlinear dynamics. Additionally, we compared the electroencephalogram sample entropy of signals in response to visual stimulation before, during, and after repeated-pulse transcranial magnetic stimulation at the Guangming point. Results showed that electroencephalogram sample entropy at the left (F3) and right (FP2) frontal electrodes differed significantly depending on where the magnetic stimulation was administered. Additionally, compared with the mock point, electroencephalogram sample entropy was higher after stimulating the Guangming point. When visual stimulation at Guangming was given before repeated-pulse transcranial magnetic stimulation, significant differences in sample entropy were found at five electrodes (C3, Cz, C4, P3, T8) over parietal cortex, the central gyrus, and the right temporal region compared with when it was given after repeated-pulse transcranial magnetic stimulation, indicating that repeated-pulse transcranial magnetic stimulation at Guangming can affect visual function. Analysis of the electroencephalogram revealed that when visual stimulation preceded repeated-pulse transcranial magnetic stimulation, sample entropy values were higher at the C3, C4, and P3 electrodes and lower at the Cz and T8 electrodes than when visual stimulation followed it. The findings indicate that repeated-pulse transcranial magnetic stimulation at the Guangming point evokes different patterns of electroencephalogram signals than repeated-pulse transcranial magnetic stimulation at other nearby points on the body surface, and that repeated-pulse transcranial magnetic stimulation at the Guangming point is associated with changes in the complexity of visually evoked electroencephalogram signals in parietal regions, the central gyrus, and temporal regions.
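
    The sample-entropy measure used in this study can be sketched as follows. This is the standard SampEn definition (template length m, tolerance r as a fraction of the signal's standard deviation), not the authors' exact implementation, and per-channel EEG handling is omitted:

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy SampEn(m, r) = -ln(A/B), where B counts pairs of
    length-m templates within tolerance r (Chebyshev distance) and A
    counts the same for length m+1. Higher values = less regularity."""
    x = np.asarray(x, float)
    n = len(x)
    r = r_frac * np.std(x)
    num = n - m  # use the same number of templates for both lengths

    def matches(length):
        t = np.array([x[i:i + length] for i in range(num)])
        total = 0
        for i in range(num):
            d = np.max(np.abs(t - t[i]), axis=1)
            total += int(np.sum(d <= r)) - 1  # exclude the self-match
        return total

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

# A smooth, predictable signal should score lower than white noise.
sine = np.sin(2 * np.pi * 3 * np.linspace(0, 1, 300))
noise = np.random.default_rng(0).standard_normal(300)
print(sample_entropy(sine), sample_entropy(noise))
```

    Applied per electrode to the EEG epochs, this yields the complexity values that the study compared across stimulation sites and orderings.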

  3. Neural mechanisms of understanding rational actions: middle temporal gyrus activation by contextual violation.

    PubMed

    Jastorff, Jan; Clavagnier, Simon; Gergely, György; Orban, Guy A

    2011-02-01

    Performing goal-directed actions toward an object in accordance with contextual constraints, such as the presence or absence of an obstacle, has been widely used as a paradigm for assessing the capacity of infants or nonhuman primates to evaluate the rationality of others' actions. Here, we have used this paradigm in a functional magnetic resonance imaging experiment to visualize the cortical regions involved in the assessment of action rationality while controlling for visual differences in the displays and directly correlating magnetic resonance activity with rationality ratings. Bilateral middle temporal gyrus (MTG) regions, anterior to extrastriate body area and the human middle temporal complex, were involved in the visual evaluation of action rationality. These MTG regions are embedded in the superior temporal sulcus regions processing the kinematics of observed actions. Our results suggest that rationality is assessed initially by purely visual computations, combining the kinematics of the action with the physical constraints of the environmental context. The MTG region seems to be sensitive to the contingent relationship between a goal-directed biological action and its relevant environmental constraints, showing increased activity when the expected pattern of rational goal attainment is violated.

  4. Supporting Children in Mastering Temporal Relations of Stories: The TERENCE Learning Approach

    ERIC Educational Resources Information Center

    Di Mascio, Tania; Gennari, Rosella; Melonio, Alessandra; Tarantino, Laura

    2016-01-01

    Though temporal reasoning is a key factor for text comprehension, existing proposals for visualizing temporal information and temporal connectives proves to be inadequate for children, not only for their levels of abstraction and detail, but also because they rely on pre-existing mental models of time and temporal connectives, while in the case of…

  5. Dissociation of quantifiers and object nouns in speech in focal neurodegenerative disease.

    PubMed

    Ash, Sharon; Ternes, Kylie; Bisbing, Teagan; Min, Nam Eun; Moran, Eileen; York, Collin; McMillan, Corey T; Irwin, David J; Grossman, Murray

    2016-08-01

    Quantifiers such as many and some are thought to depend in part on the conceptual representation of number knowledge, while object nouns such as cookie and boy appear to depend in part on visual feature knowledge associated with object concepts. Further, number knowledge is associated with a frontal-parietal network while object knowledge is related in part to anterior and ventral portions of the temporal lobe. We examined the cognitive and anatomic basis for the spontaneous speech production of quantifiers and object nouns in non-aphasic patients with focal neurodegenerative disease associated with corticobasal syndrome (CBS, n=33), behavioral variant frontotemporal degeneration (bvFTD, n=54), and semantic variant primary progressive aphasia (svPPA, n=19). We recorded a semi-structured speech sample elicited from patients and healthy seniors (n=27) during description of the Cookie Theft scene. We observed a dissociation: CBS and bvFTD were significantly impaired in the production of quantifiers but not object nouns, while svPPA were significantly impaired in the production of object nouns but not quantifiers. MRI analysis revealed that quantifier production deficits in CBS and bvFTD were associated with disease in a frontal-parietal network important for number knowledge, while impaired production of object nouns in all patient groups was related to disease in inferior temporal regions important for representations of visual feature knowledge of objects. These findings imply that partially dissociable representations in semantic memory may underlie different segments of the lexicon. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Video quality assessment using motion-compensated temporal filtering and manifold feature similarity

    PubMed Central

    Yu, Mei; Jiang, Gangyi; Shao, Feng; Peng, Zongju

    2017-01-01

    A well-performing video quality assessment (VQA) method should be consistent with the human visual system for good prediction accuracy. In this paper, we propose a VQA method using motion-compensated temporal filtering (MCTF) and manifold feature similarity. To be more specific, a group of frames (GoF) is first decomposed into a temporal high-pass component (HPC) and a temporal low-pass component (LPC) by MCTF. Following this, manifold feature learning (MFL) and phase congruency (PC) are used to predict the quality of the temporal LPC and the temporal HPC, respectively. The quality measures of the LPC and the HPC are then combined as the GoF quality. A temporal pooling strategy is subsequently used to integrate GoF qualities into an overall video quality. The proposed VQA method appropriately processes temporal information in video through MCTF and the temporal pooling strategy, and simulates human visual perception through MFL. Experiments on a publicly available video quality database showed that, in comparison with several state-of-the-art VQA methods, the proposed method achieves better consistency with subjective video quality and can predict video quality more accurately. PMID:28445489
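
    The combination and pooling steps can be sketched as follows. The abstract does not give the weighting or the pooling rule, so the 0.7/0.3 weighting and the worst-fraction percentile pooling below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def gof_quality(q_lpc, q_hpc, w=0.7):
    """Combine low-pass (structural) and high-pass (motion) quality
    scores for each group of frames; the weight w is a free parameter."""
    return w * q_lpc + (1.0 - w) * q_hpc

def temporal_pool(gof_scores, worst_fraction=0.3):
    """Percentile pooling: viewers weight the worst moments heavily,
    so average only the lowest-scoring fraction of GoFs."""
    scores = np.sort(np.asarray(gof_scores, float))
    k = max(1, int(len(scores) * worst_fraction))
    return float(scores[:k].mean())

# Hypothetical per-GoF component scores for a 10-GoF clip, with a
# quality drop around GoFs 4-5.
q_lpc = np.array([0.9, 0.85, 0.9, 0.4, 0.35, 0.88, 0.9, 0.87, 0.9, 0.89])
q_hpc = np.array([0.8, 0.8, 0.8, 0.5, 0.45, 0.8, 0.8, 0.8, 0.8, 0.8])
per_gof = gof_quality(q_lpc, q_hpc)
print(round(temporal_pool(per_gof), 3))
```

    The pooled score sits well below the simple mean of the per-GoF scores, reflecting the transient quality drop, which is the behavior a perceptually motivated pooling strategy is meant to capture.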

  7. Holistic Face Categorization in Higher Order Visual Areas of the Normal and Prosopagnosic Brain: Toward a Non-Hierarchical View of Face Perception

    PubMed Central

    Rossion, Bruno; Dricot, Laurence; Goebel, Rainer; Busigny, Thomas

    2011-01-01

    How a visual stimulus is initially categorized as a face in a network of human brain areas remains largely unclear. Hierarchical neuro-computational models of face perception assume that the visual stimulus is first decomposed into local parts in lower order visual areas. These parts would then be combined into a global representation in higher order face-sensitive areas of the occipito-temporal cortex. Here we tested this view in fMRI with visual stimuli that are categorized as faces based on their global configuration rather than their local parts (two-tone Mooney figures and Arcimboldo's facelike paintings). Compared to the same inverted visual stimuli that are not categorized as faces, these stimuli activated the right middle fusiform gyrus (“Fusiform face area”) and superior temporal sulcus (pSTS), with no significant activation in the posteriorly located inferior occipital gyrus (i.e., no “occipital face area”). This observation is strengthened by behavioral and neural evidence for normal face categorization of these stimuli in a brain-damaged prosopagnosic patient whose intact right middle fusiform gyrus and superior temporal sulcus are devoid of any potential face-sensitive inputs from the lesioned right inferior occipital cortex. Together, these observations indicate that face-preferential activation may emerge in higher order visual areas of the right hemisphere without any face-preferential inputs from lower order visual areas, supporting a non-hierarchical view of face perception in the visual cortex. PMID:21267432

  8. Activity and social factors affect cohesion among individuals in female Japanese macaques: A simultaneous focal-follow study.

    PubMed

    Nishikawa, Mari; Suzuki, Mariko; Sprague, David S

    2014-07-01

    Understanding cohesion among individuals within a group is necessary to reveal the social system of group-living primates. Japanese macaques (Macaca fuscata) are female-philopatric primates that reside in social groups. We investigated whether individual activity and social factors can affect spatio-temporal cohesion in wild female Japanese macaques. We conducted behavioral observation on a group that contained 38 individuals and ranged over ca. 60 ha during the study period. Two observers carried out simultaneous focal-animal sampling of adult female pairs during full-day follows using global positioning system (GPS) devices, which enabled us to quantify interindividual distances (IIDs), group members within visual range (i.e., the visual unit), and separation duration beyond visual range as indicators of cohesion among individuals. We found considerable variation in spatio-temporal group cohesion. The overall mean IID was 99.9 m (range = 0-618.2 m). The percentage of IIDs within visual range was 23.1%, within auditory range 59.8%, and beyond auditory range 17.1%. IIDs varied with activity; they were shorter during grooming and resting, and longer during foraging and traveling. Low-ranking females showed less cohesion than high-ranking ones. Kin females nearly always stayed within audible range. The macaques were weakly cohesive, with small mean visual unit sizes (3.15 counting only adults, 5.99 counting all individuals). Both-sex units were the most frequently observed visual unit type during grooming/resting, whereas female-only units were the most frequently observed type during foraging. The overall mean visual separation duration was 25.7 min (range = 3-513 min). Separation duration was associated with dominance rank. These results suggest that Japanese macaques regulate cohesion among individuals depending on their activity and on social relationships; they separated to adapt to food distribution and aggregated to maintain social interactions. © 2014 Wiley Periodicals, Inc.

  9. Visual cortex extrastriate body-selective area activation in congenitally blind people "seeing" by using sounds.

    PubMed

    Striem-Amit, Ella; Amedi, Amir

    2014-03-17

    Vision is by far the most prevalent sense for experiencing others' body shapes, postures, actions, and intentions, and its congenital absence may dramatically hamper body-shape representation in the brain. We investigated whether the absence of visual experience and limited exposure to others' body shapes could still lead to body-shape selectivity. We taught congenitally fully-blind adults to perceive full-body shapes conveyed through a sensory-substitution algorithm that topographically translates images into soundscapes [1]. Despite the limited experience of the congenitally blind with external body shapes (via touch of close-by bodies and for ~10 hr via soundscapes), once the blind could retrieve body shapes via soundscapes, they robustly activated the visual cortex, specifically the extrastriate body area (EBA; [2]). Furthermore, body selectivity versus textures, objects, and faces in both the blind and sighted control groups was not found in the temporal (auditory) or parietal (somatosensory) cortex but only in the visual EBA. Finally, resting-state data showed that the blind EBA is functionally connected to Theory-of-Mind areas in the temporal cortex (temporal-parietal junction/superior temporal sulcus) [3]. Thus, the EBA preference is present without visual experience and with little exposure to external body-shape information, supporting the view that the brain has a sensory-independent, task-selective supramodal organization rather than a sensory-specific organization. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Recalibration of the Multisensory Temporal Window of Integration Results from Changing Task Demands

    PubMed Central

    Mégevand, Pierre; Molholm, Sophie; Nayak, Ashabari; Foxe, John J.

    2013-01-01

    The notion of the temporal window of integration, when applied in a multisensory context, refers to the breadth of the interval across which the brain perceives two stimuli from different sensory modalities as synchronous. It maintains a unitary perception of multisensory events despite physical and biophysical timing differences between the senses. The boundaries of the window can be influenced by attention and past sensory experience. Here we examined whether task demands could also influence the multisensory temporal window of integration. We varied the stimulus onset asynchrony between simple, short-lasting auditory and visual stimuli while participants performed two tasks in separate blocks: a temporal order judgment task that required the discrimination of subtle auditory-visual asynchronies, and a reaction time task to the first incoming stimulus irrespective of its sensory modality. We defined the temporal window of integration as the range of stimulus onset asynchronies where performance was below 75% in the temporal order judgment task, as well as the range of stimulus onset asynchronies where responses showed multisensory facilitation (race model violation) in the reaction time task. In 5 of 11 participants, we observed audio-visual stimulus onset asynchronies where reaction time was significantly accelerated (indicating successful integration in this task) while performance was accurate in the temporal order judgment task (indicating successful segregation in that task). This dissociation suggests that in some participants, the boundaries of the temporal window of integration can adaptively recalibrate in order to optimize performance according to specific task demands. PMID:23951203
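    The race-model criterion used above to define integration in the reaction time task (Miller's inequality: under independent unisensory racing, F_AV(t) ≤ F_A(t) + F_V(t)) can be sketched numerically. The RT samples below are illustrative, not the study's data, and `race_model_violation` is a hypothetical helper name.

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times evaluated at t."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t, side="right") / len(rts)

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Difference between the audiovisual RT CDF and the race-model bound
    min(F_A + F_V, 1); positive values indicate multisensory facilitation
    beyond what independent channels predict."""
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound

t = np.arange(150, 400, 10)
rt_a = [300, 320, 340, 360, 380]    # auditory-only RTs (ms), illustrative
rt_v = [310, 330, 350, 370, 390]    # visual-only RTs (ms), illustrative
rt_av = [200, 220, 240, 330, 350]   # fast audiovisual RTs violate the bound
diff = race_model_violation(rt_av, rt_a, rt_v, t)
violated = bool((diff > 0).any())
```

    In practice the violation is assessed over the fast tail of the RT distribution and tested statistically across participants; this sketch only shows the pointwise comparison.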

  11. Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses

    PubMed Central

    Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli

    2015-01-01

    Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. 
Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic “push–pull” pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing. PMID:26658858

  12. An fMRI Study of the Neural Systems Involved in Visually Cued Auditory Top-Down Spatial and Temporal Attention

    PubMed Central

    Li, Chunlin; Chen, Kewei; Han, Hongbin; Chui, Dehua; Wu, Jinglong

    2012-01-01

    Top-down attention to spatial and temporal cues has been thoroughly studied in the visual domain. However, because the neural systems that are important for auditory top-down temporal attention (i.e., attention based on time interval cues) remain undefined, the differences in brain activity between directed attention to auditory spatial locations and to time intervals are unclear. Using fMRI (functional magnetic resonance imaging), we measured the activations evoked by a cue-target paradigm in which a visual cue directed attention to an auditory target within a spatial or temporal domain. Imaging results showed that the dorsal frontoparietal network (dFPN), which consists of the bilateral intraparietal sulcus and the frontal eye field (FEF), responded to spatial orienting of attention, but activity was absent in the bilateral FEF during temporal orienting of attention. Furthermore, the fMRI results indicated that activity in the right ventrolateral prefrontal cortex (VLPFC) was significantly stronger during spatial orienting of attention than during temporal orienting of attention, while the dorsolateral prefrontal cortex (DLPFC) showed no significant differences between the two processes. We conclude that the bilateral dFPN and the right VLPFC contribute to auditory spatial orienting of attention. Furthermore, specific activations related to temporal cognition were confirmed within the superior occipital gyrus, tegmentum, motor area, thalamus and putamen. PMID:23166800

  13. Musicians have enhanced audiovisual multisensory binding: experience-dependent effects in the double-flash illusion.

    PubMed

    Bidelman, Gavin M

    2016-10-01

    Musical training is associated with behavioral and neurophysiological enhancements in auditory processing for both musical and nonmusical sounds (e.g., speech). Yet, whether the benefits of musicianship extend beyond enhancements to auditory-specific skills and impact multisensory (e.g., audiovisual) processing has yet to be fully validated. Here, we investigated multisensory integration of auditory and visual information in musicians and nonmusicians using a double-flash illusion, whereby the presentation of multiple auditory stimuli (beeps) concurrent with a single visual object (flash) induces an illusory perception of multiple flashes. We parametrically varied the onset asynchrony between auditory and visual events (leads and lags of ±300 ms) to quantify participants' "temporal window" of integration, i.e., stimuli in which auditory and visual cues were fused into a single percept. Results show that musically trained individuals were both faster and more accurate at processing concurrent audiovisual cues than their nonmusician peers; nonmusicians had a higher susceptibility for responding to audiovisual illusions and perceived double flashes over an extended range of onset asynchronies compared to trained musicians. Moreover, temporal window estimates indicated that musicians' windows (<100 ms) were ~2-3× shorter than nonmusicians' (~200 ms), suggesting more refined multisensory integration and audiovisual binding. Collectively, findings indicate a more refined binding of auditory and visual cues in musically trained individuals. We conclude that experience-dependent plasticity of intensive musical experience extends beyond simple listening skills, improving multimodal processing and the integration of multiple sensory systems in a domain-general manner.
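    Temporal-window estimates like those reported above can be obtained by measuring the SOA range over which the illusion is reported at or above some criterion rate. The sketch below assumes a 50% criterion and linear interpolation at the crossings; the response rates are made-up, not the study's data.

```python
import numpy as np

# Illusory double-flash report rates at each audio-visual SOA (ms);
# values are illustrative only.
soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300])
p_illusion = np.array([0.05, 0.15, 0.55, 0.80, 0.90, 0.75, 0.50, 0.10, 0.05])

def window_width(soas, p, criterion=0.5):
    """Estimate the temporal binding window as the SOA range over which
    the illusion rate stays at or above the criterion, interpolating the
    left and right criterion crossings linearly."""
    idx = np.where(p >= criterion)[0]
    lo_i, hi_i = idx[0], idx[-1]
    left = np.interp(criterion, [p[lo_i - 1], p[lo_i]],
                     [soas[lo_i - 1], soas[lo_i]])
    right = np.interp(criterion, [p[hi_i + 1], p[hi_i]],
                      [soas[hi_i + 1], soas[hi_i]])
    return right - left

width = window_width(soas, p_illusion)
```

    A narrower width on this measure corresponds to the more refined audiovisual binding attributed to musicians in the study.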

  14. Multiple brain networks for visual self-recognition with different sensitivity for motion and body part.

    PubMed

    Sugiura, Motoaki; Sassa, Yuko; Jeong, Hyeonjeong; Miura, Naoki; Akitsuki, Yuko; Horie, Kaoru; Sato, Shigeru; Kawashima, Ryuta

    2006-10-01

    Multiple brain networks may support visual self-recognition. It has been hypothesized that the left ventral occipito-temporal cortex processes one's own face as a symbol, and that the right parieto-frontal network processes self-image in association with motion-action contingency. Using functional magnetic resonance imaging, we first tested these hypotheses based on the prediction that these networks preferentially respond to a static self-face and to one's moving whole body, respectively. Brain activation specifically related to self-image during familiarity judgment was compared across four stimulus conditions comprising a two-factorial design: the factor Motion contrasted picture (Picture) with movie (Movie), and the factor Body part contrasted face (Face) with whole body (Body). Second, we attempted to segregate self-specific networks using a principal component analysis (PCA), assuming an independent pattern of inter-subject variability in activation over the four stimulus conditions in each network. The bilateral ventral occipito-temporal and the right parietal and frontal cortices exhibited self-specific activation. The left ventral occipito-temporal cortex exhibited greater self-specific activation for Face than for Body in the Picture condition, consistent with the prediction for this region. The activation profiles of the right parietal and frontal cortices did not show the preference for the Movie/Body condition predicted by the assumed roles of these regions. The PCA extracted two cortical networks, one with peaks in right posterior cortices and another in frontal cortices; their possible roles in visuo-spatial and conceptual self-representations, respectively, were suggested by previous findings. The results thus supported and provided evidence for multiple brain networks for visual self-recognition.

  15. Ocular-following responses to white noise stimuli in humans reveal a novel nonlinearity that results from temporal sampling

    PubMed Central

    Sheliga, Boris M.; Quaia, Christian; FitzGibbon, Edmond J.; Cumming, Bruce G.

    2016-01-01

    White noise stimuli are frequently used to study the visual processing of broadband images in the laboratory. A common goal is to describe how responses are derived from Fourier components in the image. We investigated this issue by recording the ocular-following responses (OFRs) to white noise stimuli in human subjects. For a given speed we compared OFRs to unfiltered white noise with those to noise filtered with band-pass filters and notch filters. Removing components with low spatial frequency (SF) reduced OFR magnitudes, and the SF associated with the greatest reduction matched the SF that produced the maximal response when presented alone. This reduction declined rapidly with SF, compatible with a winner-take-all operation. Removing higher SF components increased OFR magnitudes. For higher speeds this effect became larger and propagated toward lower SFs. All of these effects were quantitatively well described by a model that combined two factors: (a) an excitatory drive that reflected the OFRs to individual Fourier components and (b) a suppression by higher SF channels where the temporal sampling of the display led to flicker. This nonlinear interaction has an important practical implication: Even with high refresh rates (150 Hz), the temporal sampling introduced by visual displays has a significant impact on visual processing. For instance, we show that this distorts speed tuning curves, shifting the peak to lower speeds. Careful attention to spectral content, in the light of this nonlinearity, is necessary to minimize the resulting artifact when using white noise patterns undergoing apparent motion. PMID:26762277
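    The notch-filtering manipulation described above (removing a band of spatial frequencies from white noise) can be sketched in the Fourier domain. This is a generic 2D illustration under assumed parameters, not the stimulus code used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def notch_filter(image, sf_lo, sf_hi):
    """Zero out Fourier components whose radial spatial frequency
    (in cycles per image) falls inside [sf_lo, sf_hi], mimicking a
    notch-filtered noise stimulus. Band limits are illustrative."""
    n = image.shape[0]
    f = np.fft.fftfreq(n) * n                  # cycles per image
    fy, fx = np.meshgrid(f, f, indexing="ij")
    radius = np.hypot(fx, fy)
    spectrum = np.fft.fft2(image)
    spectrum[(radius >= sf_lo) & (radius <= sf_hi)] = 0
    return np.real(np.fft.ifft2(spectrum))

noise = rng.standard_normal((64, 64))          # broadband white noise
filtered = notch_filter(noise, 4, 8)           # remove 4-8 cycles/image
```

    Comparing responses to the unfiltered and notch-filtered stimuli isolates the contribution of the removed band, the logic behind the SF-specific OFR reductions reported above.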

  16. Semantic dementia and persisting Wernicke's aphasia: linguistic and anatomical profiles.

    PubMed

    Ogar, J M; Baldo, J V; Wilson, S M; Brambati, S M; Miller, B L; Dronkers, N F; Gorno-Tempini, M L

    2011-04-01

    Few studies have directly compared the clinical and anatomical characteristics of patients with progressive aphasia to those of patients with aphasia caused by stroke. In the current study we examined fluent forms of aphasia in these two groups, specifically semantic dementia (SD) and persisting Wernicke's aphasia (WA) due to stroke. We compared 10 patients with SD to 10 age- and education-matched patients with WA in three language domains: language comprehension (single words and sentences), spontaneous speech and visual semantics. Neuroanatomical involvement was analyzed using disease-specific image analysis techniques: voxel-based morphometry (VBM) for patients with SD and overlays of digitized lesion reconstructions for patients with WA. Patients with SD and WA were both impaired on tasks that involved visual semantics, but patients with SD were less impaired in spontaneous speech and sentence comprehension. The anatomical findings showed that different regions were most affected in the two disorders: the left anterior temporal lobe in SD and the left posterior middle temporal gyrus in chronic WA. This study highlights that the two syndromes classically associated with language comprehension deficits in aphasia due to stroke and neurodegenerative disease are clinically distinct, most likely due to distinct distributions of damage in the temporal lobe. Copyright © 2010 Elsevier Inc. All rights reserved.

  17. Underlying Skills of Oral and Silent Reading Fluency in Chinese: Perspective of Visual Rapid Processing

    PubMed Central

    Zhao, Jing; Kwok, Rosa K. W.; Liu, Menglian; Liu, Hanlong; Huang, Chen

    2017-01-01

    Reading fluency is a critical skill that improves the quality of our daily life and working efficiency. The majority of previous studies focused on oral reading fluency rather than silent reading fluency, which is the more dominant reading mode used in middle and high school and for leisure reading. It is still unclear whether oral and silent reading fluency involve the same underlying skills. To address this issue, the present study examined the relationship between visual rapid processing and Chinese reading fluency in the two modes. Fifty-eight undergraduate students took part in the experiment. The phantom contour paradigm and the visual 1-back task were adopted to measure visual rapid temporal and simultaneous processing, respectively; these two tasks reflect the temporal and spatial dimensions of visual rapid processing. We recorded the temporal threshold in the phantom contour task, as well as reaction time and accuracy in the visual 1-back task. Reading fluency was measured at both the single-character and sentence levels. Fluent reading of single characters was assessed with a paper-and-pencil lexical decision task, and a sentence verification task was developed to examine reading fluency at the sentence level. The reading fluency test at each level was conducted twice (i.e., oral reading and silent reading). Reading speed and accuracy were recorded. The correlation analysis showed that the temporal threshold in the phantom contour task did not correlate with the scores of the reading fluency tests. Although the reaction time in the visual 1-back task correlated with the reading speed of both oral and silent reading fluency, a comparison of the correlation coefficients revealed a closer relationship between visual rapid simultaneous processing and silent reading. Furthermore, visual rapid simultaneous processing made a significant contribution to reading fluency in the silent mode but not in the oral mode. These findings suggest that the underlying mechanisms of oral and silent reading fluency differ from the beginning of basic visual coding. The current results might also reveal a potential modulating effect of the language characteristics of Chinese on the relationship between visual rapid processing and reading fluency. PMID:28119663

  19. Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys

    PubMed Central

    Liu, Bing

    2017-01-01

    Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. 
We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348

  20. Visual Memory in Post-Anterior Right Temporal Lobectomy Patients and Adult Normative Data for the Brown Location Test (BLT)

    PubMed Central

    Brown, Franklin C.; Tuttle, Erin; Westerveld, Michael; Ferraro, F. Richard; Chmielowiec, Teresa; Vandemore, Michelle; Gibson-Beverly, Gina; Bemus, Lisa; Roth, Robert M.; Blumenfeld, Hal; Spencer, Dennis D.; Spencer, Susan S

    2010-01-01

    Several large and meta-analytic studies have failed to support a consistent relationship between visual or “nonverbal” memory deficits and right mesial temporal lobe changes. However, the Brown Location Test (BLT) is a recently developed dot location learning and memory test that uses a nonsymmetrical array and provides control over many of the confounding variables (e.g., verbal influence and drawing requirements) inherent in other measures of visual memory. In the present investigation, we evaluated the clinical utility of the BLT in patients who had undergone left or right anterior mesial temporal lobectomies. We also provide normative data from 298 healthy adults in order to provide standardized scores. Results revealed significantly worse performance on the BLT in the right lobectomy group as compared to the left lobectomy group and the healthy adult normative sample. The present findings support a role for the right anterior-mesial temporal lobe in dot location learning and memory. PMID:20056493

  1. Visual Benefits in Apparent Motion Displays: Automatically Driven Spatial and Temporal Anticipation Are Partially Dissociated

    PubMed Central

    Ahrens, Merle-Marie; Veniero, Domenica; Gross, Joachim; Harvey, Monika; Thut, Gregor

    2015-01-01

    Many behaviourally relevant sensory events such as motion stimuli and speech have an intrinsic spatio-temporal structure. This will engage intentional and most likely unintentional (automatic) prediction mechanisms enhancing the perception of upcoming stimuli in the event stream. Here we sought to probe the anticipatory processes that are automatically driven by rhythmic input streams in terms of their spatial and temporal components. To this end, we employed an apparent visual motion paradigm testing the effects of pre-target motion on lateralized visual target discrimination. The motion stimuli either moved towards or away from peripheral target positions (valid vs. invalid spatial motion cueing) at a rhythmic or arrhythmic pace (valid vs. invalid temporal motion cueing). Crucially, we emphasized automatic motion-induced anticipatory processes by rendering the motion stimuli non-predictive of upcoming target position (by design) and task-irrelevant (by instruction), and by creating instead endogenous (orthogonal) expectations using symbolic cueing. Our data revealed that the apparent motion cues automatically engaged both spatial and temporal anticipatory processes, but that these processes were dissociated. We further found evidence for lateralisation of anticipatory temporal but not spatial processes. This indicates that distinct mechanisms may drive automatic spatial and temporal extrapolation of upcoming events from rhythmic event streams. This contrasts with previous findings that instead suggest an interaction between spatial and temporal attention processes when endogenously driven. Our results further highlight the need for isolating intentional from unintentional processes for better understanding the various anticipatory mechanisms engaged in processing behaviourally relevant stimuli with predictable spatio-temporal structure such as motion and speech. PMID:26623650

  2. Frontal lobe atrophy is associated with small vessel disease in ischemic stroke patients.

    PubMed

    Chen, Yangkun; Chen, Xiangyan; Xiao, Weimin; Mok, Vincent C T; Wong, Ka Sing; Tang, Wai Kwong

    2009-12-01

    The pathogenesis of frontal lobe atrophy (FLA) in stroke patients is unclear. We aimed to ascertain whether subcortical ischemic changes were more associated with FLA than with parietal lobe atrophy (PLA) and temporal lobe atrophy (TLA). Brain magnetic resonance images (MRIs) from 471 Chinese ischemic stroke patients were analyzed. Lobar atrophy was defined by a widely used visual rating scale. All patients were divided into non-severe, mild-moderate, and severe atrophy of the frontal, parietal, and temporal lobe groups. The severity of white matter lesions (WMLs) was rated with the Fazekas' scale. Clinical and radiological features were compared among the groups. Subsequent logistic regressions were performed to determine the risk factors of atrophy and severe atrophy of the frontal, parietal and temporal lobes. The frequency of FLA in our cohort was 36.9% (174/471). Severe FLA occurred in 30 (6.4%) patients. Age, previous stroke, and periventricular hyperintensities (PVH) (odds ratio (OR)=1.640, p=0.039) were independent risk factors of FLA. Age and deep white matter hyperintensities (DWMH) (OR=3.634, p=0.002) were independent risk factors of severe FLA. PVH and DWMH were not independent risk factors of PLA and TLA. Frontal lobe atrophy in ischemic stroke patients may be associated with small vessel disease. The association between WMLs and FLA was predominant over atrophy of the parietal and temporal lobes, which suggests that the frontal lobe may be vulnerable to subcortical ischemic changes.

  3. Learning to associate auditory and visual stimuli: behavioral and neural mechanisms.

    PubMed

    Altieri, Nicholas; Stevenson, Ryan A; Wallace, Mark T; Wenger, Michael J

    2015-05-01

    The ability to effectively combine sensory inputs across modalities is vital for acquiring a unified percept of events. For example, watching a hammer hit a nail while simultaneously identifying the sound as originating from the event requires the ability to identify spatio-temporal congruencies and statistical regularities. In this study, we applied a reaction time and hazard function measure known as capacity (e.g., Townsend and Ashby, Cognitive Theory, pp. 200-239, 1978) to quantify the extent to which observers learn paired associations between simple auditory and visual patterns in a model-theoretic manner. As expected, results showed that learning was associated with an increase in accuracy and, more significantly, an increase in capacity. The aim of this study was to associate capacity measures of multisensory learning with a neural measure, namely mean global field power (GFP). We observed a co-variation between an increase in capacity and a decrease in GFP amplitude as learning occurred. This suggests that capacity constitutes a reliable behavioral index of efficient energy expenditure in the neural domain.
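    Capacity measures of this kind are conventionally computed from integrated hazard functions of the reaction-time distributions, with C(t) = H_AV(t) / (H_A(t) + H_V(t)) and H(t) = -log S(t). The following is a minimal sketch of that calculation, not the study's actual analysis code; the function names and the simulated reaction times are illustrative assumptions.

```python
import numpy as np

def cumulative_hazard(rts, t):
    """Empirical cumulative hazard H(t) = -log S(t), where S(t) is the
    proportion of reaction times exceeding t."""
    s = np.mean(np.asarray(rts) > t)
    if s <= 0:
        raise ValueError("S(t) is 0 at this t; choose a smaller t")
    return -np.log(s)

def capacity_coefficient(rt_av, rt_a, rt_v, t):
    """Capacity coefficient C(t) = H_AV(t) / (H_A(t) + H_V(t)).
    C(t) > 1 indicates processing faster than an independent parallel
    race between the two unisensory channels would predict."""
    return cumulative_hazard(rt_av, t) / (
        cumulative_hazard(rt_a, t) + cumulative_hazard(rt_v, t)
    )

# Toy reaction-time samples (seconds); audiovisual trials are fastest,
# as would be expected after audiovisual learning.
rng = np.random.default_rng(0)
rt_a = 0.45 + rng.exponential(0.10, 500)   # auditory-only trials
rt_v = 0.50 + rng.exponential(0.10, 500)   # visual-only trials
rt_av = 0.35 + rng.exponential(0.08, 500)  # audiovisual trials
print(round(capacity_coefficient(rt_av, rt_a, rt_v, 0.55), 2))
```

With these toy distributions the coefficient comes out above 1, i.e. supercapacity, which is the behavioural signature of learned audiovisual association that the abstract describes.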

  4. The role of human ventral visual cortex in motion perception

    PubMed Central

    Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene

    2013-01-01

    Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral ‘form’ (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030

  5. Spatio-temporal variability of ichthyophagous bird assemblage around western Mediterranean open-sea cage fish farms.

    PubMed

    Aguado-Giménez, Felipe; Eguía-Martínez, Sergio; Cerezo-Valverde, Jesús; García-García, Benjamín

    2018-06-14

    Ichthyophagous birds aggregate at cage fish farms attracted by caged and associated wild fish. Spatio-temporal variability of such birds was studied for a year through seasonal visual counts at eight farms in the western Mediterranean. Correlation with farm and location descriptors was assessed. Considerable spatio-temporal variability in fish-eating bird density and assemblage structure was observed among farms and seasons. Bird density increased from autumn to winter, with the great cormorant being the most abundant species, also accounting largely for differences among farms. Grey heron and little egret were also numerous at certain farms during the coldest seasons. Cattle egret was only observed at one farm. No shags were observed during winter. During spring and summer, bird density decreased markedly and only shags and little egrets were observed at only a few farms. Season and distance from farms to bird breeding/wintering grounds helped to explain some of the spatio-temporal variability. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Spatial Attention Reduces Burstiness in Macaque Visual Cortical Area MST.

    PubMed

    Xue, Cheng; Kaping, Daniel; Ray, Sonia Baloni; Krishna, B Suresh; Treue, Stefan

    2017-01-01

    Visual attention modulates the firing rate of neurons in many primate cortical areas. In V4, a cortical area in the ventral visual pathway, spatial attention has also been shown to reduce the tendency of neurons to fire closely separated spikes (burstiness). A recent model proposes that a single mechanism accounts for both the firing rate enhancement and the burstiness reduction in V4, but this has not been empirically tested. It is also unclear if the burstiness reduction by spatial attention is found in other visual areas and for other attentional types. We therefore recorded from single neurons in the medial superior temporal area (MST), a key motion-processing area along the dorsal visual pathway, of two rhesus monkeys while they performed a task engaging both spatial and feature-based attention. We show that in MST, spatial attention is associated with a clear reduction in burstiness that is independent of the concurrent enhancement of firing rate. In contrast, feature-based attention enhances firing rate but is not associated with a significant reduction in burstiness. These results establish burstiness reduction as a widespread effect of spatial attention. They also suggest that in contrast to the recently proposed model, the effects of spatial attention on burstiness and firing rate emerge from different mechanisms. © The Author 2016. Published by Oxford University Press.

  7. Neuroimaging of amblyopia and binocular vision: a review

    PubMed Central

    Joly, Olivier; Frankó, Edit

    2014-01-01

    Amblyopia is a cerebral visual impairment considered to derive from abnormal visual experience (e.g., strabismus, anisometropia). Amblyopia, first considered a monocular disorder, is now often seen as a primarily binocular disorder, resulting in a growing number of studies examining binocular deficits in these patients. The neural mechanisms of amblyopia are not completely understood, even though they have been investigated with electrophysiological recordings in animal models and more recently with neuroimaging techniques in humans. In this review, we summarize the current knowledge about the brain regions that underlie the visual deficits associated with amblyopia, with a focus on binocular vision using functional magnetic resonance imaging. The first studies focused on abnormal responses in the primary and secondary visual areas, whereas recent evidence shows that there are also deficits at higher levels of the visual pathways, within the parieto-occipital and temporal cortices. These higher-level areas are part of the cortical network involved in 3D vision from binocular cues. Therefore, reduced responses in these areas could be related to the impaired binocular vision in amblyopic patients. Promising new binocular treatments might at least partially correct the activation in these areas. Future neuroimaging experiments could help to characterize the brain response changes associated with these treatments and help devise them. PMID:25147511

  8. Stimulus-driven changes in the direction of neural priming during visual word recognition.

    PubMed

    Pas, Maciej; Nakamura, Kimihiro; Sawamoto, Nobukatsu; Aso, Toshihiko; Fukuyama, Hidenao

    2016-01-15

    Visual object recognition is generally known to be facilitated when targets are preceded by the same or relevant stimuli. For written words, however, the beneficial effect of priming can be reversed when primes and targets share initial syllables (e.g., "boca" and "bono"). Using fMRI, the present study explored neuroanatomical correlates of this negative syllabic priming. In each trial, participants made semantic judgment about a centrally presented target, which was preceded by a masked prime flashed either to the left or right visual field. We observed that the inhibitory priming during reading was associated with a left-lateralized effect of repetition enhancement in the inferior frontal gyrus (IFG), rather than repetition suppression in the ventral visual region previously associated with facilitatory behavioral priming. We further performed a second fMRI experiment using a classical whole-word repetition priming paradigm with the same hemifield procedure and task instruction, and obtained well-known effects of repetition suppression in the left occipito-temporal cortex. These results therefore suggest that the left IFG constitutes a fast word processing system distinct from the posterior visual word-form system and that the directions of repetition effects can change with intrinsic properties of stimuli even when participants' cognitive and attentional states are kept constant. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Spatial Attention Reduces Burstiness in Macaque Visual Cortical Area MST

    PubMed Central

    Xue, Cheng; Kaping, Daniel; Ray, Sonia Baloni; Krishna, B. Suresh; Treue, Stefan

    2017-01-01

    Visual attention modulates the firing rate of neurons in many primate cortical areas. In V4, a cortical area in the ventral visual pathway, spatial attention has also been shown to reduce the tendency of neurons to fire closely separated spikes (burstiness). A recent model proposes that a single mechanism accounts for both the firing rate enhancement and the burstiness reduction in V4, but this has not been empirically tested. It is also unclear if the burstiness reduction by spatial attention is found in other visual areas and for other attentional types. We therefore recorded from single neurons in the medial superior temporal area (MST), a key motion-processing area along the dorsal visual pathway, of two rhesus monkeys while they performed a task engaging both spatial and feature-based attention. We show that in MST, spatial attention is associated with a clear reduction in burstiness that is independent of the concurrent enhancement of firing rate. In contrast, feature-based attention enhances firing rate but is not associated with a significant reduction in burstiness. These results establish burstiness reduction as a widespread effect of spatial attention. They also suggest that in contrast to the recently proposed model, the effects of spatial attention on burstiness and firing rate emerge from different mechanisms. PMID:28365773

  10. Neuroimaging of amblyopia and binocular vision: a review.

    PubMed

    Joly, Olivier; Frankó, Edit

    2014-01-01

    Amblyopia is a cerebral visual impairment considered to derive from abnormal visual experience (e.g., strabismus, anisometropia). Amblyopia, first considered a monocular disorder, is now often seen as a primarily binocular disorder, resulting in a growing number of studies examining binocular deficits in these patients. The neural mechanisms of amblyopia are not completely understood, even though they have been investigated with electrophysiological recordings in animal models and more recently with neuroimaging techniques in humans. In this review, we summarize the current knowledge about the brain regions that underlie the visual deficits associated with amblyopia, with a focus on binocular vision using functional magnetic resonance imaging. The first studies focused on abnormal responses in the primary and secondary visual areas, whereas recent evidence shows that there are also deficits at higher levels of the visual pathways, within the parieto-occipital and temporal cortices. These higher-level areas are part of the cortical network involved in 3D vision from binocular cues. Therefore, reduced responses in these areas could be related to the impaired binocular vision in amblyopic patients. Promising new binocular treatments might at least partially correct the activation in these areas. Future neuroimaging experiments could help to characterize the brain response changes associated with these treatments and help devise them.

  11. Asymmetric temporal integration of layer 4 and layer 2/3 inputs in visual cortex.

    PubMed

    Hang, Giao B; Dan, Yang

    2011-01-01

    Neocortical neurons in vivo receive concurrent synaptic inputs from multiple sources, including feedforward, horizontal, and feedback pathways. Layer 2/3 of the visual cortex receives feedforward input from layer 4 and horizontal input from layer 2/3. Firing of the pyramidal neurons, which carries the output to higher cortical areas, depends critically on the interaction of these pathways. Here we examined synaptic integration of inputs from layer 4 and layer 2/3 in rat visual cortical slices. We found that the integration is sublinear and temporally asymmetric, with larger responses if layer 2/3 input preceded layer 4 input. The sublinearity depended on inhibition, and the asymmetry was largely attributable to the difference between the two inhibitory inputs. Interestingly, the asymmetric integration was specific to pyramidal neurons, and it strongly affected their spiking output. Thus via cortical inhibition, the temporal order of activation of layer 2/3 and layer 4 pathways can exert powerful control of cortical output during visual processing.

  12. Mechanisms for Rapid Adaptive Control of Motion Processing in Macaque Visual Cortex.

    PubMed

    McLelland, Douglas; Baker, Pamela M; Ahmed, Bashir; Kohn, Adam; Bair, Wyeth

    2015-07-15

    A key feature of neural networks is their ability to rapidly adjust their function, including signal gain and temporal dynamics, in response to changes in sensory inputs. These adjustments are thought to be important for optimizing the sensitivity of the system, yet their mechanisms remain poorly understood. We studied adaptive changes in temporal integration in direction-selective cells in macaque primary visual cortex, where specific hypotheses have been proposed to account for rapid adaptation. By independently stimulating direction-specific channels, we found that the control of temporal integration of motion at one direction was independent of motion signals driven at the orthogonal direction. We also found that individual neurons can simultaneously support two different profiles of temporal integration for motion in orthogonal directions. These findings rule out a broad range of adaptive mechanisms as being key to the control of temporal integration, including untuned normalization and nonlinearities of spike generation and somatic adaptation in the recorded direction-selective cells. Such mechanisms are too broadly tuned, or occur too far downstream, to explain the channel-specific and multiplexed temporal integration that we observe in single neurons. Instead, we are compelled to conclude that parallel processing pathways are involved, and we demonstrate one such circuit using a computer model. This solution allows processing in different direction/orientation channels to be separately optimized and is sensible given that, under typical motion conditions (e.g., translation or looming), speed on the retina is a function of the orientation of image components. Many neurons in visual cortex are understood in terms of their spatial and temporal receptive fields. It is now known that the spatiotemporal integration underlying visual responses is not fixed but depends on the visual input. 
For example, neurons that respond selectively to motion direction integrate signals over a shorter time window when visual motion is fast and a longer window when motion is slow. We investigated the mechanisms underlying this useful adaptation by recording from neurons as they responded to stimuli moving in two different directions at different speeds. Computer simulations of our results enabled us to rule out several candidate theories in favor of a model that integrates across multiple parallel channels that operate at different time scales. Copyright © 2015 the authors.

  13. Learning of goal-relevant and -irrelevant complex visual sequences in human V1.

    PubMed

    Rosenthal, Clive R; Mallik, Indira; Caballero-Gaudes, Cesar; Sereno, Martin I; Soto, David

    2018-06-12

    Learning and memory are supported by a network involving the medial temporal lobe and linked neocortical regions. Emerging evidence indicates that primary visual cortex (i.e., V1) may contribute to recognition memory, but this has been tested only with a single visuospatial sequence as the target memorandum. The present study used functional magnetic resonance imaging to investigate whether human V1 can support the learning of multiple, concurrent complex visual sequences involving discontinuous (second-order) associations. Two peripheral, goal-irrelevant but structured sequences of orientated gratings appeared simultaneously in fixed locations of the right and left visual fields alongside a central, goal-relevant sequence that was in the focus of spatial attention. Pseudorandom sequences were introduced at multiple intervals during the presentation of the three structured visual sequences to provide an online measure of sequence-specific knowledge at each retinotopic location. We found that a network involving the precuneus and V1 was involved in learning the structured sequence presented at central fixation, whereas right V1 was modulated by repeated exposure to the concurrent structured sequence presented in the left visual field. The same result was not found in left V1. These results indicate for the first time that human V1 can support the learning of multiple concurrent sequences involving complex discontinuous inter-item associations, even peripheral sequences that are goal-irrelevant. Copyright © 2018. Published by Elsevier Inc.

  14. Learning and Discrimination of Audiovisual Events in Human Infants: The Hierarchical Relation between Intersensory Temporal Synchrony and Rhythmic Pattern Cues.

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2003-01-01

    Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…

  15. A Pencil Rescues Impaired Performance on a Visual Discrimination Task in Patients with Medial Temporal Lobe Lesions

    ERIC Educational Resources Information Center

    Knutson, Ashley R.; Hopkins, Ramona O.; Squire, Larry R.

    2013-01-01

    We tested proposals that medial temporal lobe (MTL) structures support not just memory but certain kinds of visual perception as well. Patients with hippocampal lesions or larger MTL lesions attempted to identify the unique object among twin pairs of objects that had a high degree of feature overlap. Patients were markedly impaired under the more…

  16. Preservation of perceptual integration improves temporal stability of bimanual coordination in the elderly: an evidence of age-related brain plasticity.

    PubMed

    Blais, Mélody; Martin, Elodie; Albaret, Jean-Michel; Tallet, Jessica

    2014-12-15

    Despite the apparent age-related decline in perceptual-motor performance, recent studies suggest that elderly people can improve their reaction times when relevant sensory information is available. However, little is known about which sensory information may improve motor behaviour itself. Using a synchronization task, the present study investigates how visual and/or auditory stimulations could increase the accuracy and stability of three bimanual coordination modes produced by elderly and young adults. Neurophysiological activations were recorded with electroencephalography (EEG) to explore the neural mechanisms underlying the behavioural effects. Results reveal that the elderly stabilize all coordination modes when auditory or audio-visual stimulations are available, compared to visual stimulation alone. This suggests that auditory stimulations are sufficient to improve the temporal stability of rhythmic coordination, even more so in the elderly. This behavioural effect is primarily associated with increased attentional and sensorimotor-related neural activations in the elderly, but with similar perceptual-related activations in elderly and young adults. This suggests that, despite a degradation of attentional and sensorimotor neural processes, perceptual integration of auditory stimulations is preserved in the elderly. These results suggest that perceptual-related brain plasticity is, at least partially, conserved in normal aging. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Attributing intentions to random motion engages the posterior superior temporal sulcus.

    PubMed

    Lee, Su Mei; Gao, Tao; McCarthy, Gregory

    2014-01-01

    The right posterior superior temporal sulcus (pSTS) is a neural region involved in assessing the goals and intentions underlying the motion of social agents. Recent research has identified visual cues, such as chasing, that trigger animacy detection and intention attribution. When readily available in a visual display, these cues reliably activate the pSTS. Here, using functional magnetic resonance imaging, we examined if attributing intentions to random motion would likewise engage the pSTS. Participants viewed displays of four moving circles and were instructed to search for chasing or mirror-correlated motion. On chasing trials, one circle chased another circle, invoking the percept of an intentional agent; while on correlated motion trials, one circle's motion was mirror reflected by another. On the remaining trials, all circles moved randomly. As expected, pSTS activation was greater when participants searched for chasing vs correlated motion when these cues were present in the displays. Of critical importance, pSTS activation was also greater when participants searched for chasing compared to mirror-correlated motion when the displays in both search conditions were statistically identical random motion. We conclude that pSTS activity associated with intention attribution can be invoked by top-down processes in the absence of reliable visual cues for intentionality.

  18. fMR-adaptation indicates selectivity to audiovisual content congruency in distributed clusters in human superior temporal cortex.

    PubMed

    van Atteveldt, Nienke M; Blau, Vera C; Blomert, Leo; Goebel, Rainer

    2010-02-02

    Efficient multisensory integration is of vital importance for adequate interaction with the environment. In addition to basic binding cues like temporal and spatial coherence, meaningful multisensory information is also bound together by content-based associations. Many functional Magnetic Resonance Imaging (fMRI) studies propose the (posterior) superior temporal cortex (STC) as the key structure for integrating meaningful multisensory information. However, a still unanswered question is how superior temporal cortex encodes content-based associations, especially in light of inconsistent results from studies comparing brain activation to semantically matching (congruent) versus nonmatching (incongruent) multisensory inputs. Here, we used fMR-adaptation (fMR-A) in order to circumvent potential problems with standard fMRI approaches, including spatial averaging and amplitude saturation confounds. We presented repetitions of audiovisual stimuli (letter-speech sound pairs) and manipulated the associative relation between the auditory and visual inputs (congruent/incongruent pairs). We predicted that if multisensory neuronal populations exist in STC and encode audiovisual content relatedness, adaptation should be affected by the manipulated audiovisual relation. The results revealed an occipital-temporal network that adapted independently of the audiovisual relation. Interestingly, several smaller clusters distributed over superior temporal cortex within that network, adapted stronger to congruent than to incongruent audiovisual repetitions, indicating sensitivity to content congruency. These results suggest that the revealed clusters contain multisensory neuronal populations that encode content relatedness by selectively responding to congruent audiovisual inputs, since unisensory neuronal populations are assumed to be insensitive to the audiovisual relation. 
These findings extend our previously revealed mechanism for the integration of letters and speech sounds and demonstrate that fMR-A is sensitive to multisensory congruency effects that may not be revealed in BOLD amplitude per se.

  19. Multiple asynchronous stimulus- and task-dependent hierarchies (STDH) within the visual brain's parallel processing systems.

    PubMed

    Zeki, Semir

    2016-10-01

    Results from a variety of sources, some many years old, lead ineluctably to a re-appraisal of the twin strategies of hierarchical and parallel processing used by the brain to construct an image of the visual world. Contrary to common supposition, there are at least three 'feed-forward' anatomical hierarchies that reach the primary visual cortex (V1) and the specialized visual areas outside it, in parallel. These anatomical hierarchies do not conform to the temporal order with which visual signals reach the specialized visual areas through V1. Furthermore, neither the anatomical hierarchies nor the temporal order of activation through V1 predict the perceptual hierarchies. The latter shows that we see (and become aware of) different visual attributes at different times, with colour leading form (orientation) and directional visual motion, even though signals from fast-moving, high-contrast stimuli are among the earliest to reach the visual cortex (of area V5). Parallel processing, on the other hand, is much more ubiquitous than commonly supposed but is subject to a barely noticed but fundamental aspect of brain operations, namely that different parallel systems operate asynchronously with respect to each other and reach perceptual endpoints at different times. This re-assessment leads to the conclusion that the visual brain is constituted of multiple, parallel and asynchronously operating task- and stimulus-dependent hierarchies (STDH); which of these parallel anatomical hierarchies have temporal and perceptual precedence at any given moment is stimulus and task related, and dependent on the visual brain's ability to undertake multiple operations asynchronously. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  20. Multimodal integration of fMRI and EEG data for high spatial and temporal resolution analysis of brain networks

    PubMed Central

    Mantini, D.; Marzetti, L.; Corbetta, M.; Romani, G.L.; Del Gratta, C.

    2017-01-01

    Two major non-invasive brain mapping techniques, electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), have complementary advantages with regard to their spatial and temporal resolution. We propose an approach based on the integration of EEG and fMRI, enabling the EEG temporal dynamics of information processing to be characterized within spatially well-defined fMRI large-scale networks. First, the fMRI data are decomposed into networks by means of spatial independent component analysis (sICA), and those associated with intrinsic activity and/or responding to task performance are selected using information from the related time-courses. Next, the EEG data over all sensors are averaged with respect to event timing, thus calculating event-related potentials (ERPs). The ERPs are subjected to temporal ICA (tICA), and the resulting components are localized with the weighted minimum-norm least-squares (WMNLS) algorithm using the task-related fMRI networks as priors. Finally, the temporal contribution of each ERP component in the areas belonging to the fMRI large-scale networks is estimated. The proposed approach has been evaluated on visual target detection data. Our results confirm that two different components, commonly observed in EEG in response to novel and salient stimuli, respectively, are related to neuronal activation in large-scale networks operating at different latencies and associated with different functional processes. PMID:20052528
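    The sICA-then-tICA ordering described in this record can be sketched with off-the-shelf ICA. The following toy illustration uses random data with assumed shapes and scikit-learn's FastICA, and omits the WMNLS source-localization step; none of it is the authors' actual pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Toy stand-ins for real recordings (shapes are illustrative assumptions):
fmri = rng.standard_normal((120, 2000))  # 120 volumes x 2000 voxels
erp = rng.standard_normal((64, 300))     # 64 sensors x 300 time points

# Step 1: spatial ICA of fMRI. Voxels play the role of "samples", so the
# recovered sources are spatially independent maps, and the mixing matrix
# holds each network's time-course.
sica = FastICA(n_components=10, random_state=0)
spatial_maps = sica.fit_transform(fmri.T)   # (voxels, components)
network_timecourses = sica.mixing_          # (volumes, components)

# Step 2: temporal ICA of the ERP. Time points are the "samples", giving
# temporally independent component waveforms plus sensor topographies,
# which a source-localization step (e.g. weighted minimum norm with the
# fMRI networks as priors) would then project into source space.
tica = FastICA(n_components=5, random_state=0)
erp_waveforms = tica.fit_transform(erp.T)   # (time, components)
sensor_topographies = tica.mixing_          # (sensors, components)

print(spatial_maps.shape, network_timecourses.shape)
print(erp_waveforms.shape, sensor_topographies.shape)
```

The key design point is which axis is treated as the sample axis: independence over voxels yields spatial maps (sICA), while independence over time points yields ERP component waveforms (tICA).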

  1. Similar ventral occipito-temporal cortex activations in literate and illiterate adults during the Chinese character matching task: an fMRI study.

    PubMed

    Qi, Geqi; Li, Xiujun; Yan, Tianyi; Wang, Bin; Yang, Jiajia; Wu, Jinglong; Guo, Qiyong

    2014-04-30

    Visual word expertise is typically associated with enhanced ventral occipito-temporal (vOT) cortex activation in response to written words. A previous study utilized a passive viewing task and found that the vOT response to written words was significantly stronger in literate than in illiterate subjects. However, recent neuroimaging findings have suggested that vOT response properties are highly dependent upon task demand. Thus, it is unknown whether literate adults would show a stronger vOT response to written words than illiterate adults during other cognitive tasks, such as perceptual matching. We addressed this issue by comparing vOT activations between literate and illiterate adults during a Chinese character and simple figure matching task. Unlike passive viewing, a perceptual matching task requires active shape comparison, therefore minimizing automatic word processing bias. We found that although the literate group performed better at the Chinese character matching task, the two subject groups showed similarly strong vOT responses during this task. Overall, the findings indicate that the vOT response to written words is not affected by expertise during a perceptual matching task, suggesting that the association between visual word expertise and vOT response may depend on the task demand. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. The iconography of mourning and its neural correlates: a functional neuroimaging study.

    PubMed

    Labek, Karin; Berger, Samantha; Buchheim, Anna; Bosch, Julia; Spohrs, Jennifer; Dommes, Lisa; Beschoner, Petra; Stingl, Julia C; Viviani, Roberto

    2017-08-01

    The present functional neuroimaging study focuses on the iconography of mourning. A culture-specific pattern of body postures of mourning individuals, mostly suggesting withdrawal, emerged from a survey of visual material. When used in different combinations in stylized drawings in our neuroimaging study, this material activated cortical areas commonly seen in studies of social cognition (temporo-parietal junction, superior temporal gyrus, and inferior temporal lobe), empathy for pain (somatosensory cortex), and loss (precuneus, middle/posterior cingular gyrus). This pattern of activation developed over time. While in the early phases of exposure lower association areas, such as the extrastriate body area, were active, in the late phases activation in parietal and temporal association areas and the prefrontal cortex was more prominent. These findings are consistent with the conventional and contextual character of iconographic material, and further differentiate it from emotionally negatively valenced and high-arousing stimuli. In future studies, this neuroimaging assay may be useful in characterizing interpretive appraisal of material of negative emotional valence. © The Author (2017). Published by Oxford University Press.

  3. Specificity and timescales of cortical adaptation as inferences about natural movie statistics.

    PubMed

    Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia

    2016-10-01

    Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation.
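    The core computation this record describes, dividing the present visual input by past inputs only to the degree that they are inferred to be statistically dependent, can be sketched in a few lines. The parameter values and the scalar "dependence" gate below are illustrative assumptions, not the paper's fitted Bayesian model.

```python
import numpy as np

def normalized_response(drive, sigma=0.5, w=1.0, dependence=1.0):
    """Divisively normalize each input by the preceding input, weighted by
    how statistically dependent past and present inputs are inferred to be
    (dependence in [0, 1]; 0 disables temporal normalization entirely)."""
    drive = np.asarray(drive, dtype=float)
    out = np.empty_like(drive)
    prev = 0.0  # no adaptation state before the first input
    for i, d in enumerate(drive):
        out[i] = d / (sigma + w * dependence * prev)
        prev = d
    return out

# A repeated (adapting) stimulus vs. the same stimulus after blanks:
adapted = normalized_response([1.0, 1.0, 1.0])  # stimulus repeated
fresh = normalized_response([0.0, 0.0, 1.0])    # stimulus after blanks
print(adapted[-1], fresh[-1])  # the repeated input is suppressed
```

With dependence set to 0 (inputs inferred to be independent), the division by the past disappears and the adapted and fresh responses become identical, which is the model's account of why normalization should track inferred statistical dependence rather than apply unconditionally.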

  4. Specificity and timescales of cortical adaptation as inferences about natural movie statistics

    PubMed Central

    Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia

    2016-01-01

    Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation. PMID:27699416

  5. Designing a visualization system for hydrological data

    NASA Astrophysics Data System (ADS)

    Fuhrmann, Sven

    2000-02-01

    The field of hydrology is, like any other scientific field, strongly affected by rapid technological evolution. The spread of modern information and communication technology over the last three decades has led to increased collection, availability and use of spatial and temporal digital hydrological data. In a two-year research period, a working group in Muenster applied and developed methods for the visualization of digital hydrological data and the documentation of hydrological models. A low-cost multimedia hydrological visualization system (HydroVIS) for the Weser river catchment was developed. The research group designed HydroVIS under freeware constraints and sought to show which multimedia visualization techniques can be used effectively in a nonprofit hydrological visualization system. The system's visual components include electronic maps, temporal and nontemporal cartographic animations, the display of geologic profiles, interactive diagrams and hypertext, including photographs and tables.

  6. A lightning strike to the head causing a visual cortex defect with simple and complex visual hallucinations

    PubMed Central

    Kleiter, Ingo; Luerding, Ralf; Diendorfer, Gerhard; Rek, Helga; Bogdahn, Ulrich; Schalke, Berthold

    2007-01-01

    The case of a 23-year-old mountaineer who was hit by a lightning strike to the occiput causing a large central visual field defect and bilateral tympanic membrane ruptures is described. Owing to extreme agitation, the patient was placed in a drug-induced coma for 3 days. After extubation, she experienced simple and complex visual hallucinations for several days, but otherwise largely recovered. Neuropsychological tests revealed deficits in fast visual detection tasks and non-verbal learning, and indicated a right temporal lobe dysfunction, consistent with a right temporal focus on electroencephalography. Four months after the accident, she developed a psychological reaction consisting of nightmares with reappearance of the complex visual hallucinations and a depressive syndrome. Using the European Cooperation for Lightning Detection network, a meteorological system for lightning surveillance, the exact geographical location and nature of the lightning flash were retrospectively retraced. PMID:17369595

  7. A lightning strike to the head causing a visual cortex defect with simple and complex visual hallucinations

    PubMed Central

    Kleiter, Ingo; Luerding, Ralf; Diendorfer, Gerhard; Rek, Helga; Bogdahn, Ulrich; Schalke, Berthold

    2009-01-01

    The case of a 23-year-old mountaineer who was hit by a lightning strike to the occiput causing a large central visual field defect and bilateral tympanic membrane ruptures is described. Owing to extreme agitation, the patient was sent into a drug-induced coma for 3 days. After extubation, she experienced simple and complex visual hallucinations for several days, but otherwise largely recovered. Neuropsychological tests revealed deficits in fast visual detection tasks and non-verbal learning and indicated a right temporal lobe dysfunction, consistent with a right temporal focus on electroencephalography. At 4 months after the accident, she developed a psychological reaction consisting of nightmares, with reappearance of the complex visual hallucinations and a depressive syndrome. Using the European Cooperation for Lightning Detection network, a meteorological system for lightning surveillance, the exact geographical location and nature of the lightning strike were retrospectively retraced. PMID:21734915

  8. Measuring temporal summation in visual detection with a single-photon source.

    PubMed

    Holmes, Rebecca; Victora, Michelle; Wang, Ranxiao Frances; Kwiat, Paul G

    2017-11-01

    Temporal summation is an important feature of the visual system which combines visual signals that arrive at different times. Previous research estimated complete summation to last for 100 ms for stimuli judged "just detectable." We measured the full range of temporal summation for much weaker stimuli using a new paradigm and a novel light source, developed in the field of quantum optics for generating small numbers of photons with precise timing characteristics and reduced variance in photon number. Dark-adapted participants judged whether a light was presented to the left or right of their fixation in each trial. In Experiment 1, stimuli contained a stream of photons delivered at a constant rate while the duration was systematically varied. Accuracy should increase with duration as long as the later photons can be integrated with the preceding ones into a single signal. The temporal integration window was estimated as the point at which performance no longer improved, and was found to be 650 ms on average. In Experiment 2, the duration of the visual stimuli was kept short (100 ms or <30 ms) while the number of photons was varied to explore the efficiency of summation over the integration window compared to Experiment 1. There was some indication that temporal summation remains efficient over the integration window, although there is variation between individuals. The relatively long integration window measured in this study may be relevant to studies of the absolute visual threshold, i.e., tests of single-photon vision, where "single" photons should be separated by more than the integration window to avoid summation. Copyright © 2017 Elsevier Ltd. All rights reserved.
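    The estimation logic — accuracy rises with duration until photons fall outside the integration window, then flattens — can be illustrated with a toy two-piece breakpoint fit. This is a hypothetical analysis sketch, not the authors' actual fitting procedure:

```python
import numpy as np

def integration_window(durations, accuracy):
    """Estimate the integration window as the knee of a two-piece fit:
    accuracy rises linearly with stimulus duration up to the knee, then
    stays flat. Scans every candidate knee and keeps the one with the
    smallest combined squared error."""
    durations = np.asarray(durations, float)
    accuracy = np.asarray(accuracy, float)
    best_err, best_knee = np.inf, durations[0]
    for i in range(1, len(durations) - 1):
        # rising segment: least-squares line through points up to the knee
        A = np.vstack([durations[:i + 1], np.ones(i + 1)]).T
        _, res, *_ = np.linalg.lstsq(A, accuracy[:i + 1], rcond=None)
        rise_err = res[0] if res.size else 0.0
        # flat segment: squared deviation from the mean of the rest
        flat = accuracy[i:]
        err = rise_err + np.sum((flat - flat.mean()) ** 2)
        if err < best_err:
            best_err, best_knee = err, durations[i]
    return best_knee
```

    On synthetic data that rises linearly to 650 ms and then plateaus, the fit recovers the 650 ms knee reported in the abstract.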

  9. Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.

    PubMed

    Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu

    2018-05-01

    Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, that contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the-art diarization algorithms.

  10. Top-down alpha oscillatory network interactions during visuospatial attention orienting.

    PubMed

    Doesburg, Sam M; Bedo, Nicolas; Ward, Lawrence M

    2016-05-15

    Neuroimaging and lesion studies indicate that visual attention is controlled by a distributed network of brain areas. The covert control of visuospatial attention has also been associated with retinotopic modulation of alpha-band oscillations within early visual cortex, which are thought to underlie inhibition of ignored areas of visual space. The relation between distributed networks mediating attention control and more focal oscillatory mechanisms, however, remains unclear. The present study evaluated the hypothesis that alpha-band, directed, network interactions within the attention control network are systematically modulated by the locus of visuospatial attention. We localized brain areas involved in visuospatial attention orienting using magnetoencephalographic (MEG) imaging and investigated alpha-band Granger-causal interactions among activated regions using narrow-band transfer entropy. The deployment of attention to one side of visual space was indexed by lateralization of alpha power changes between about 400 ms and 700 ms post-cue onset. The changes in alpha power were associated, in the same time period, with lateralization of anterior-to-posterior information flow in the alpha band from various brain areas involved in attention control, including the anterior cingulate cortex, left middle and inferior frontal gyri, left superior temporal gyrus, right insula, and inferior parietal lobule, to early visual areas. We interpreted these results to indicate that distributed network interactions mediated by alpha oscillations exert top-down influences on early visual cortex to modulate inhibition of processing for ignored areas of visual space. Copyright © 2016. Published by Elsevier Inc.
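    Transfer entropy, the directed-connectivity measure named above, quantifies how much a source signal's past improves prediction of a target beyond the target's own past. The study's narrow-band variant involves band-filtering, source reconstruction, and careful bias handling; the didactic plug-in estimator below (history length 1, discrete symbols) is only a stand-in for the underlying quantity:

```python
import math
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X -> Y) in bits for sequences of
    discrete symbols, with history length 1:
    TE = sum p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ]."""
    y1, y0, x0 = y[1:], y[:-1], x[:-1]
    n = len(y1)
    c_triple = Counter(zip(y1, y0, x0))   # (y_t+1, y_t, x_t)
    c_y0x0 = Counter(zip(y0, x0))
    c_y1y0 = Counter(zip(y1, y0))
    c_y0 = Counter(y0)
    te = 0.0
    for (a, b, c), k in c_triple.items():
        # counts cancel n's: p(y1|y0,x0)/p(y1|y0) = k*c_y0 / (c_y1y0*c_y0x0)
        te += (k / n) * math.log2(k * c_y0[b] / (c_y1y0[a, b] * c_y0x0[b, c]))
    return te
```

    When the target simply copies the source with one step of lag, the estimate approaches 1 bit in the forward direction and ~0 in the reverse, matching the directionality the study exploits.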

  11. Spatio-temporal visualization of air-sea CO2 flux and carbon budget using volume rendering

    NASA Astrophysics Data System (ADS)

    Du, Zhenhong; Fang, Lei; Bai, Yan; Zhang, Feng; Liu, Renyi

    2015-04-01

    This paper presents a novel visualization method to show the spatio-temporal dynamics of carbon sinks and sources, and carbon fluxes in the ocean carbon cycle. The air-sea carbon budget and its process of accumulation are demonstrated in the spatial dimension, while the distribution pattern and variation of CO2 flux are expressed by color changes. In this way, we unite spatial and temporal characteristics of satellite data through visualization. A GPU-based direct volume rendering technique using half-angle slicing is adopted to dynamically visualize the released or absorbed CO2 gas with shadow effects. A data model is designed to generate four-dimensional (4D) data from satellite-derived air-sea CO2 flux products, and an out-of-core scheduling strategy is also proposed for on-the-fly rendering of time series of satellite data. The presented 4D visualization method is implemented on graphics cards with vertex, geometry and fragment shaders. It provides a visually realistic simulation and user interaction for real-time rendering. This approach has been integrated into the Information System of Ocean Satellite Monitoring for Air-sea CO2 Flux (IssCO2) for the research and assessment of air-sea CO2 flux in the China Seas.
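    Half-angle slicing is a GPU-specific technique (it orders slices to share work between the eye and light passes for shadowing), but the operation every slice-based direct volume renderer ultimately performs is front-to-back alpha compositing along each viewing ray. A minimal CPU sketch of that core step, with illustrative names:

```python
import numpy as np

def composite_ray(colors, alphas):
    """Front-to-back alpha compositing of samples along one ray.
    `colors` is a list of RGB triples, `alphas` the per-sample opacities,
    both ordered from the eye into the volume."""
    acc_color = np.zeros(3)
    acc_alpha = 0.0
    for c, a in zip(colors, alphas):
        # each new sample is attenuated by the opacity already accumulated
        acc_color += (1.0 - acc_alpha) * a * np.asarray(c, float)
        acc_alpha += (1.0 - acc_alpha) * a
        if acc_alpha >= 0.999:  # early ray termination
            break
    return acc_color, acc_alpha
```

    Two half-transparent white samples composite to 75% opacity, which is why denser CO2 regions read as more saturated in such renderings.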

  12. Visual cortex responses reflect temporal structure of continuous quasi-rhythmic sensory stimulation.

    PubMed

    Keitel, Christian; Thut, Gregor; Gross, Joachim

    2017-02-01

    Neural processing of dynamic continuous visual input, and cognitive influences thereon, are frequently studied in paradigms employing strictly rhythmic stimulation. However, the temporal structure of natural stimuli is hardly ever fully rhythmic but possesses certain spectral bandwidths (e.g. lip movements in speech, gestures). Examining periodic brain responses elicited by strictly rhythmic stimulation might thus represent ideal, yet isolated cases. Here, we tested how the visual system reflects quasi-rhythmic stimulation with frequencies continuously varying within ranges of classical theta (4-7Hz), alpha (8-13Hz) and beta bands (14-20Hz) using EEG. Our findings substantiate a systematic and sustained neural phase-locking to stimulation in all three frequency ranges. Further, we found that allocation of spatial attention enhances EEG-stimulus locking to theta- and alpha-band stimulation. Our results bridge recent findings regarding phase locking ("entrainment") to quasi-rhythmic visual input and "frequency-tagging" experiments employing strictly rhythmic stimulation. We propose that sustained EEG-stimulus locking can be considered as a continuous neural signature of processing dynamic sensory input in early visual cortices. Accordingly, EEG-stimulus locking serves to trace the temporal evolution of rhythmic as well as quasi-rhythmic visual input and is subject to attentional bias. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
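    "EEG-stimulus locking" of the kind measured here is commonly quantified as a phase-locking value: the mean resultant length of the instantaneous phase difference between the band-limited brain signal and the driving stimulus. The sketch below is a minimal numpy-only version (the analytic-signal helper mirrors what scipy.signal.hilbert computes); the study's actual pipeline adds filtering, trials, and statistics:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert transform."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

def stimulus_locking(signal, stimulus):
    """Phase-locking value between signal and stimulus:
    1 = perfectly phase-locked, ~0 = unrelated phases."""
    dphi = np.angle(analytic_signal(signal)) - np.angle(analytic_signal(stimulus))
    return float(np.abs(np.mean(np.exp(1j * dphi))))
```

    A response at the stimulation frequency with a fixed phase lag yields a value near 1; a response drifting at a different frequency yields a value near 0.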

  13. Cross-Hemispheric Collaboration and Segregation Associated with Task Difficulty as Revealed by Structural and Functional Connectivity

    PubMed Central

    Cabeza, Roberto

    2015-01-01

    Although it is known that brain regions in one hemisphere may interact very closely with their corresponding contralateral regions (collaboration) or operate relatively independent of them (segregation), the specific brain regions (where) and conditions (how) associated with collaboration or segregation are largely unknown. We investigated these issues using a split field-matching task in which participants matched the meaning of words or the visual features of faces presented to the same (unilateral) or to different (bilateral) visual fields. Matching difficulty was manipulated by varying the semantic similarity of words or the visual similarity of faces. We assessed the white matter using the fractional anisotropy (FA) measure provided by diffusion tensor imaging (DTI) and cross-hemispheric communication in terms of fMRI-based connectivity between homotopic pairs of cortical regions. For both perceptual and semantic matching, bilateral trials became faster than unilateral trials as difficulty increased (bilateral processing advantage, BPA). The study yielded three novel findings. First, whereas FA in anterior corpus callosum (genu) correlated with word-matching BPA, FA in posterior corpus callosum (splenium-occipital) correlated with face-matching BPA. Second, as matching difficulty intensified, cross-hemispheric functional connectivity (CFC) increased in domain-general frontopolar cortex (for both word and face matching) but decreased in domain-specific ventral temporal lobe regions (temporal pole for word matching and fusiform gyrus for face matching). Last, a mediation analysis linking DTI and fMRI data showed that CFC mediated the effect of callosal FA on BPA. These findings clarify the mechanisms by which the hemispheres interact to perform complex cognitive tasks. PMID:26019335

  14. Stimulus Value Signals in Ventromedial PFC Reflect the Integration of Attribute Value Signals Computed in Fusiform Gyrus and Posterior Superior Temporal Gyrus

    PubMed Central

    Lim, Seung-Lark; O'Doherty, John P.

    2013-01-01

    We often have to make choices among multiattribute stimuli (e.g., a food that differs on its taste and health). Behavioral data suggest that choices are made by computing the value of the different attributes and then integrating them into an overall stimulus value signal. However, it is not known whether this theory describes the way the brain computes the stimulus value signals, or how the underlying computations might be implemented. We investigated these questions using a human fMRI task in which individuals had to evaluate T-shirts that varied in their visual esthetic (e.g., color) and semantic (e.g., meaning of logo printed in T-shirt) components. We found that activity in the fusiform gyrus, an area associated with the processing of visual features, correlated with the value of the visual esthetic attributes, but not with the value of the semantic attributes. In contrast, activity in posterior superior temporal gyrus, an area associated with the processing of semantic meaning, exhibited the opposite pattern. Furthermore, both areas exhibited functional connectivity with an area of ventromedial prefrontal cortex that reflects the computation of overall stimulus values at the time of decision. The results provide supporting evidence for the hypothesis that some attribute values are computed in cortical areas specialized in the processing of such features, and that those attribute-specific values are then passed to the vmPFC to be integrated into an overall stimulus value signal to guide the decision. PMID:23678116

  15. Stimulus value signals in ventromedial PFC reflect the integration of attribute value signals computed in fusiform gyrus and posterior superior temporal gyrus.

    PubMed

    Lim, Seung-Lark; O'Doherty, John P; Rangel, Antonio

    2013-05-15

    We often have to make choices among multiattribute stimuli (e.g., a food that differs on its taste and health). Behavioral data suggest that choices are made by computing the value of the different attributes and then integrating them into an overall stimulus value signal. However, it is not known whether this theory describes the way the brain computes the stimulus value signals, or how the underlying computations might be implemented. We investigated these questions using a human fMRI task in which individuals had to evaluate T-shirts that varied in their visual esthetic (e.g., color) and semantic (e.g., meaning of logo printed in T-shirt) components. We found that activity in the fusiform gyrus, an area associated with the processing of visual features, correlated with the value of the visual esthetic attributes, but not with the value of the semantic attributes. In contrast, activity in posterior superior temporal gyrus, an area associated with the processing of semantic meaning, exhibited the opposite pattern. Furthermore, both areas exhibited functional connectivity with an area of ventromedial prefrontal cortex that reflects the computation of overall stimulus values at the time of decision. The results provide supporting evidence for the hypothesis that some attribute values are computed in cortical areas specialized in the processing of such features, and that those attribute-specific values are then passed to the vmPFC to be integrated into an overall stimulus value signal to guide the decision.

  16. Quantifying temporal glucose variability in diabetes via continuous glucose monitoring: mathematical methods and clinical application.

    PubMed

    Kovatchev, Boris P; Clarke, William L; Breton, Marc; Brayman, Kenneth; McCall, Anthony

    2005-12-01

    Continuous glucose monitors (CGMs) collect detailed blood glucose (BG) time series, which carry significant information about the dynamics of BG fluctuations. In contrast, the methods for analysis of CGM data remain those developed for infrequent BG self-monitoring. As a result, important information about the temporal structure of the data is lost during the translation of raw sensor readings into clinically interpretable statistics and images. The following mathematical methods are introduced into the field of CGM data interpretation: (1) analysis of BG rate of change; (2) risk analysis using previously reported Low/High BG Indices and a Poincaré (lag) plot of risk associated with temporal BG variability; and (3) spatial aggregation of the process of BG fluctuations and its Markov chain visualization. The clinical application of these methods is illustrated by analysis of data from a patient with Type 1 diabetes mellitus who underwent islet transplantation and with data from clinical trials. Normative data [12,025 reference (YSI device, Yellow Springs Instruments, Yellow Springs, OH) BG determinations] in patients with Type 1 diabetes mellitus who underwent insulin and glucose challenges suggest that the 90%, 95%, and 99% confidence intervals of BG rate of change that could be maximally sustained over 15-30 min are [-2,2], [-3,3], and [-4,4] mg/dL/min, respectively. BG dynamics and risk parameters clearly differentiated the stages of transplantation and the effects of medication. Aspects of treatment were clearly visualized by graphs of BG rate of change and Low/High BG Indices, by a Poincaré plot of risk for rapid BG fluctuations, and by a plot of the aggregated Markov process. Advanced analysis and visualization of CGM data allow for evaluation of dynamical characteristics of diabetes and reveal clinical information that is inaccessible via standard statistics, which do not take into account the temporal structure of the data. The use of such methods improves the assessment of patients' glycemic control.
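    The Low/High BG Indices mentioned above are built on the symmetrized risk scale introduced by Kovatchev and colleagues, which maps the skewed clinical BG range onto a scale where risk is zero at about 112.5 mg/dL and rises toward both hypo- and hyperglycemia. A compact sketch (constants as published; treat as illustrative, not as a clinical implementation):

```python
import numpy as np

def bg_risk_indices(bg_mgdl):
    """Low and High BG Indices from a series of BG readings in mg/dL.
    f() symmetrizes the BG scale; risk is quadratic in f, split into a
    hypoglycemic (f < 0) and a hyperglycemic (f > 0) component."""
    bg = np.asarray(bg_mgdl, float)
    f = 1.509 * (np.log(bg) ** 1.084 - 5.381)
    risk = 10.0 * f ** 2
    lbgi = float(np.mean(np.where(f < 0, risk, 0.0)))
    hbgi = float(np.mean(np.where(f > 0, risk, 0.0)))
    return lbgi, hbgi
```

    A reading of 112.5 mg/dL contributes essentially zero risk, while a hypoglycemic reading of 50 mg/dL drives the Low BG Index up without touching the High BG Index.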

  17. Invariant visual object recognition: a model, with lighting invariance.

    PubMed

    Rolls, Edmund T; Stringer, Simon M

    2006-01-01

    How are invariant representations of objects formed in the visual cortex? We describe a neurophysiological and computational approach which focuses on a feature hierarchy model in which invariant representations can be built by self-organizing learning based on the statistics of the visual input. The model can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in Continuous Transformation learning. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, and size, and, as we show in this paper, lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene.
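    The "associative synaptic learning rule with a short-term memory trace" can be sketched for a single unit. This is a hypothetical, minimal version of the trace-rule family used in such models (parameter values illustrative): because the activity trace decays slowly, inputs that occur close together in time — e.g. successive transforms of the same object — strengthen onto the same output unit.

```python
import numpy as np

def trace_rule_update(w, x_seq, alpha=0.1, eta=0.8):
    """Apply a trace learning rule over a temporal sequence of inputs.
    alpha: learning rate; eta: trace persistence (0 = plain Hebb)."""
    trace = 0.0
    for x in x_seq:
        y = float(np.dot(w, x))                        # instantaneous activation
        trace = (1 - eta) * y + eta * trace            # short-term memory trace
        w = w + alpha * trace * np.asarray(x, float)   # Hebbian update uses the trace
    return w
```

    Presenting one "view" that drives the unit and then a second, orthogonal view immediately after, the second view acquires weight even though it produces no instantaneous activation — the mechanism by which temporal continuity binds transforms of an object together.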

  18. Music in film and animation: experimental semiotics applied to visual, sound and musical structures

    NASA Astrophysics Data System (ADS)

    Kendall, Roger A.

    2010-02-01

    The relationship of music to film has only recently received the attention of experimental psychologists and quantitative musicologists. This paper outlines theory, semiotic analysis, and experimental results concerning relations among variables of temporally organized visuals and music. 1. A comparison and contrast is developed among ideas in semiotics and experimental research, including historical and recent developments. 2. Musicological Exploration: The resulting multidimensional structures of associative meanings, iconic meanings, and embodied meanings are applied to the analysis and interpretation of a range of film with music. 3. Experimental Verification: A series of experiments testing the perceptual fit of musical and visual patterns layered together in animations determined goodness of fit between all pattern combinations, the results of which confirmed aspects of the theory. However, exceptions were found when the complexity of the stratified stimuli resulted in cognitive overload.

  19. Sensitivity to timing and order in human visual cortex.

    PubMed

    Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel

    2015-03-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. Copyright © 2015 the American Physiological Society.

  20. How Temporal and Spatial Aspects of Presenting Visualizations Affect Learning about Locomotion Patterns

    ERIC Educational Resources Information Center

    Imhof, Birgit; Scheiter, Katharina; Edelmann, Jorg; Gerjets, Peter

    2012-01-01

    Two studies investigated the effectiveness of dynamic and static visualizations for a perceptual learning task (locomotion pattern classification). In Study 1, seventy-five students viewed either dynamic, static-sequential, or static-simultaneous visualizations. For tasks of intermediate difficulty, dynamic visualizations led to better…

  1. Visual short-term memory deficits in REM sleep behaviour disorder mirror those in Parkinson's disease.

    PubMed

    Rolinski, Michal; Zokaei, Nahid; Baig, Fahd; Giehl, Kathrin; Quinnell, Timothy; Zaiwalla, Zenobia; Mackay, Clare E; Husain, Masud; Hu, Michele T M

    2016-01-01

    Individuals with REM sleep behaviour disorder are at significantly higher risk of developing Parkinson's disease. Here we examined visual short-term memory deficits--long associated with Parkinson's disease--in patients with REM sleep behaviour disorder without Parkinson's disease using a novel task that measures recall precision. Visual short-term memory for sequentially presented coloured bars of different orientation was assessed in 21 patients with polysomnography-proven idiopathic REM sleep behaviour disorder, 26 cases with early Parkinson's disease and 26 healthy controls. Three tasks using the same stimuli controlled for attentional filtering ability, sensorimotor and temporal decay factors. Both patients with REM sleep behaviour disorder and Parkinson's disease demonstrated a deficit in visual short-term memory, with recall precision significantly worse than in healthy controls with no deficit observed in any of the control tasks. Importantly, the pattern of memory deficit in both patient groups was specifically explained by an increase in random responses. These results demonstrate that it is possible to detect the signature of memory impairment associated with Parkinson's disease in individuals with REM sleep behaviour disorder, a condition associated with a high risk of developing Parkinson's disease. The pattern of visual short-term memory deficit potentially provides a cognitive marker of 'prodromal' Parkinson's disease that might be useful in tracking disease progression and for disease-modifying intervention trials. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain.

  2. Cortical Neural Synchronization Underlies Primary Visual Consciousness of Qualia: Evidence from Event-Related Potentials

    PubMed Central

    Babiloni, Claudio; Marzano, Nicola; Soricelli, Andrea; Cordone, Susanna; Millán-Calenti, José Carlos; Del Percio, Claudio; Buján, Ana

    2016-01-01

    This article reviews three experiments on event-related potentials (ERPs) testing the hypothesis that primary visual consciousness (stimulus self-report) is related to enhanced cortical neural synchronization as a function of stimulus features. ERP peak latency and sources were compared between “seen” trials and “not seen” trials, respectively related and unrelated to primary visual consciousness. Three salient features of visual stimuli were considered (visuospatial, emotional face expression, and written words). Results showed the typical visual ERP components in both “seen” and “not seen” trials. There was no statistical difference in ERP peak latencies between the “seen” and “not seen” trials, suggesting a similar timing of cortical neural synchronization regardless of primary visual consciousness. In contrast, ERP sources showed differences between “seen” and “not seen” trials. For the visuospatial stimuli, primary consciousness was related to higher activity in dorsal occipital and parietal sources at about 400 ms post-stimulus. For the emotional face expressions, there was greater activity in parietal and frontal sources at about 180 ms post-stimulus. For the written letters, there was higher activity in occipital, parietal and temporal sources at about 230 ms post-stimulus. These results hint that primary visual consciousness is associated with an enhanced cortical neural synchronization having entirely different spatiotemporal characteristics as a function of the features of the visual stimuli and, possibly, the relative qualia (i.e., visuospatial, face expression, and words). In this framework, the dorsal visual stream may be synchronized in association with the primary consciousness of visuospatial and emotional face contents. Analogously, both dorsal and ventral visual streams may be synchronized in association with the primary consciousness of linguistic contents. In this line of reasoning, the ensemble of the cortical neural networks underpinning the single visual features would constitute a sort of multi-dimensional palette of colors, shapes, regions of the visual field, movements, emotional face expressions, and words. The synchronization of one or more of these cortical neural networks, each with its peculiar timing, would produce the primary consciousness of one or more of the visual features of the scene. PMID:27445750

  3. Impact of Audio-Visual Asynchrony on Lip-Reading Effects -Neuromagnetic and Psychophysical Study-

    PubMed Central

    Yahata, Izumi; Kanno, Akitake; Sakamoto, Shuichi; Takanashi, Yoshitaka; Takata, Shiho; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio

    2016-01-01

    The effects of asynchrony between audio and visual (A/V) stimuli on the N100m responses of magnetoencephalography in the left hemisphere were compared with those on psychophysical responses in 11 participants. The latency and amplitude of the N100m were significantly shortened and reduced in the left hemisphere by the presentation of visual speech as long as the temporal asynchrony between A/V stimuli was within 100 ms, but were not significantly affected by audio lags of -500 and +500 ms. However, some small effects were still preserved on average with audio lags of 500 ms, suggesting an asymmetry of the temporal window similar to that observed in psychophysical measurements, which tended to be more robust (wider) for audio lags; i.e., the pattern of visual-speech effects as a function of A/V lag observed in the N100m in the left hemisphere grossly resembled that in psychophysical measurements on average, although individual responses varied somewhat. The present results suggest that the basic configuration of the temporal window of visual effects on auditory-speech perception can be observed from the early auditory processing stage. PMID:28030631

  4. Temporal expectancy in the context of a theory of visual attention.

    PubMed

    Vangkilde, Signe; Petersen, Anders; Bundesen, Claus

    2013-10-19

    Temporal expectation is expectation with respect to the timing of an event such as the appearance of a certain stimulus. In this paper, temporal expectancy is investigated in the context of the theory of visual attention (TVA), and we begin by summarizing the foundations of this theoretical framework. Next, we present a parametric experiment exploring the effects of temporal expectation on perceptual processing speed in cued single-stimulus letter recognition with unspeeded motor responses. The length of the cue-stimulus foreperiod was exponentially distributed with one of six hazard rates varying between blocks. We hypothesized that this manipulation would result in a distinct temporal expectation in each hazard rate condition. Stimulus exposures were varied such that both the temporal threshold of conscious perception (t0, in ms) and the perceptual processing speed (v, in letters s⁻¹) could be estimated using TVA. We found that the temporal threshold t0 was unaffected by temporal expectation, but the perceptual processing speed v was a strikingly linear function of the logarithm of the hazard rate of the stimulus presentation. We argue that the effects on the v values were generated by changes in perceptual biases, suggesting that our perceptual biases are directly related to our temporal expectations.
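    The relationship the study reports can be sketched numerically. The sketch below is illustrative only: the parameter values (t0, the intercept a, and the slope b) are assumptions, not the study's estimates. It uses the standard TVA exposure-duration model, P(report) = 1 - exp(-v·(t - t0)) for t > t0, together with the reported finding that v is linear in the logarithm of the foreperiod's hazard rate (an exponential foreperiod has a constant hazard rate).

```python
import numpy as np

# Illustrative sketch only: t0, a, and b are assumed values, not the
# study's fitted parameters. The study's finding is that processing
# speed v is linear in log(hazard rate) of the foreperiod.

t0 = 0.02                  # temporal threshold of conscious perception (s)
a, b = 30.0, 8.0           # assumed intercept and slope (letters/s)

hazard_rates = np.array([1/16, 1/8, 1/4, 1/2, 1.0, 2.0])   # per second
v = a + b * np.log(hazard_rates)     # perceptual processing speed

def p_report(t, v, t0=t0):
    """TVA probability of correctly reporting a letter after exposure t (s)."""
    return np.where(t > t0, 1.0 - np.exp(-v * (t - t0)), 0.0)

# Higher hazard rate -> higher v -> better report at a fixed exposure.
```

A convenient property of the linear-in-log form is that each doubling of the hazard rate adds the same fixed increment, b·log 2, to v.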

  5. Temporal precision in the visual pathway through the interplay of excitation and stimulus-driven suppression.

    PubMed

    Butts, Daniel A; Weng, Chong; Jin, Jianzhong; Alonso, Jose-Manuel; Paninski, Liam

    2011-08-03

    Visual neurons can respond with extremely precise temporal patterning to visual stimuli that change on much slower time scales. Here, we investigate how the precise timing of cat thalamic spike trains (which can be precise to within 1 ms) is related to the stimulus, in the context of both artificial noise and natural visual stimuli. Using a nonlinear modeling framework applied to extracellular data, we demonstrate that the precise timing of thalamic spike trains can be explained by the interplay between an excitatory input and a delayed suppressive input that resembles inhibition, such that neuronal responses occur only in brief windows where excitation exceeds suppression. The resulting description of thalamic computation resembles earlier models of contrast adaptation, suggesting a more general role for mechanisms of contrast adaptation in visual processing. Thus, we describe a more complex computation underlying thalamic responses to artificial and natural stimuli that has implications for understanding how visual information is represented in the early stages of visual processing.
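    The excitation/delayed-suppression interplay can be caricatured in a few lines. This is a minimal sketch with assumed filter time constants, delay, and gain, not the authors' fitted nonlinear model; the point is that rectifying the difference between an excitatory drive and a delayed suppressive drive confines "firing" to brief windows.

```python
import numpy as np

# Minimal sketch (assumed kernels and gains, not the fitted model):
# responses occur only where excitation exceeds delayed suppression.

rng = np.random.default_rng(0)
stim = rng.standard_normal(500)     # artificial noise stimulus, 1 ms bins

def filtered(signal, tau_ms, delay_ms=0):
    """Causal exponential filter with an optional onset delay (ms bins)."""
    kernel = np.zeros(delay_ms + 50)
    kernel[delay_ms:] = np.exp(-np.arange(50) / tau_ms)
    kernel /= kernel.sum()
    return np.convolve(signal, kernel)[: len(signal)]

excitation = filtered(stim, tau_ms=5)
suppression = 0.9 * filtered(stim, tau_ms=15, delay_ms=5)   # delayed copy
drive = np.maximum(excitation - suppression, 0.0)           # rectified

# Net drive exceeds half its peak only in a small fraction of bins,
# i.e., firing is confined to brief temporal windows.
firing_fraction = np.mean(drive > 0.5 * drive.max())
```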

  6. Towards a Visual Quality Metric for Digital Video

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1998-01-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
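    The core idea of extending still-image metrics, weighting errors by visual sensitivity before pooling them, can be sketched as follows. This is a generic illustration, not Watson's actual metric: the flat sensitivity map and the RMS (Minkowski beta = 2) pooling are assumptions for demonstration.

```python
import numpy as np

# Generic sketch, not Watson's metric: weight frame differences by an
# assumed per-pixel visual-sensitivity map before pooling, so equal
# numerical error counts for more where the eye is more sensitive.

def visibility_weighted_error(ref, test, sensitivity):
    """Pool sensitivity-weighted differences between two frames (RMS)."""
    diff = (ref - test) * sensitivity
    return float(np.sqrt(np.mean(diff ** 2)))

ref = np.zeros((4, 4))
test = ref + 0.1                 # uniform coding error
flat = np.ones_like(ref)         # placeholder (assumed) sensitivity map
err = visibility_weighted_error(ref, test, flat)
```

In a real metric the sensitivity map would come from spatial, temporal, and chromatic contrast-sensitivity models and masking, which is exactly the extension the abstract describes.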

  7. Automated Assessment of Visual Quality of Digital Video

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)

    1997-01-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images[1-4]. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.

  8. Temporal Dynamics of Visual Attention Measured with Event-Related Potentials

    PubMed Central

    Kashiwase, Yoshiyuki; Matsumiya, Kazumichi; Kuriki, Ichiro; Shioiri, Satoshi

    2013-01-01

    How attentional modulation of brain activity determines behavioral performance has been one of the most important issues in cognitive neuroscience. This issue has been addressed by comparing the temporal relationship between attentional modulations of neural activities and behavior. Our previous study measured the time course of attention with the amplitude and phase coherence of the steady-state visual evoked potential (SSVEP) and found that the modulation latency of phase coherence, rather than that of amplitude, was consistent with the latency of behavioral performance. In this study, as a complementary report, we compared the time course of visual attention shifts measured by event-related potentials (ERPs) with that measured by a target detection task. We developed a novel technique to compare ERPs with behavioral results and analyzed the EEG data from our previous study. Two sets of flickering stimuli at different frequencies were presented in the left and right visual hemifields, and a target or distracter pattern was presented randomly at various moments after an attention-cue presentation. The observers were asked to detect targets on the attended stimulus after the cue. We found that two ERP components, P300 and N2pc, were elicited by the target presented at the attended location. Time-course analyses revealed that attentional modulation of the P300 and N2pc amplitudes increased gradually until reaching a maximum and lasted at least 1.5 s after the cue onset, which is similar to the temporal dynamics of behavioral performance. However, attentional modulation of these ERP components started later than that of behavioral performance. Rather, the time course of attentional modulation of behavioral performance was more closely associated with that of the concurrently recorded SSVEPs. These results suggest that neural activities reflected not by the P300 or N2pc, but by the SSVEPs, are the source of attentional modulation of behavioral performance. PMID:23976966

  9. Premotor neural correlates of predictive motor timing for speech production and hand movement: evidence for a temporal predictive code in the motor system.

    PubMed

    Johari, Karim; Behroozmand, Roozbeh

    2017-05-01

    The predictive coding model suggests that neural processing of sensory information is facilitated for temporally-predictable stimuli. This study investigated how temporal processing of visually-presented sensory cues modulates movement reaction time and neural activities in speech and hand motor systems. Event-related potentials (ERPs) were recorded in 13 subjects while they were visually-cued to prepare to produce a steady vocalization of a vowel sound or press a button in a randomized order, and to initiate the cued movement following the onset of a go signal on the screen. The experiment was conducted in two counterbalanced blocks in which the time interval between the visual cue and go signal was temporally-predictable (fixed delay of 1000 ms) or unpredictable (variable between 1000 and 2000 ms). Results of the behavioral response analysis indicated that movement reaction time was significantly decreased for temporally-predictable stimuli in both speech and hand modalities. We identified premotor ERP activities with a left-lateralized parietal distribution for hand and a frontocentral distribution for speech that were significantly suppressed in response to temporally-predictable compared with unpredictable stimuli. The premotor ERPs were elicited approximately 100 ms before movement onset and were significantly correlated with speech and hand motor reaction times only in response to temporally-predictable stimuli. These findings suggest that the motor system establishes a predictive code to facilitate movement in response to temporally-predictable sensory stimuli. Our data suggest that the premotor ERP activities are robust neurophysiological biomarkers of such predictive coding mechanisms. These findings provide novel insights into the temporal processing mechanisms of speech and hand motor systems.

  10. Retinal nerve fiber layer thickness measured with optical coherence tomography is related to visual function in glaucomatous eyes.

    PubMed

    El Beltagi, Tarek A; Bowd, Christopher; Boden, Catherine; Amini, Payam; Sample, Pamela A; Zangwill, Linda M; Weinreb, Robert N

    2003-11-01

    To determine the relationship between areas of glaucomatous retinal nerve fiber layer thinning identified by optical coherence tomography and areas of decreased visual field sensitivity identified by standard automated perimetry in glaucomatous eyes. Retrospective observational case series. Forty-three patients with glaucomatous optic neuropathy identified by optic disc stereo photographs and standard automated perimetry mean deviations >-8 dB were included. Participants were imaged with optical coherence tomography within 6 months of reliable standard automated perimetry testing. The location and number of optical coherence tomography clock hour retinal nerve fiber layer thickness measures outside normal limits were compared with the location and number of standard automated perimetry visual field zones outside normal limits. Further, the relationship between the deviation from normal optical coherence tomography-measured retinal nerve fiber layer thickness at each clock hour and the average pattern deviation in each visual field zone was examined by using linear regression (R²). The retinal nerve fiber layer areas most frequently outside normal limits were the inferior and inferior temporal regions. The least sensitive visual field zones were in the superior hemifield. Linear regression results (R²) showed that deviation from the normal retinal nerve fiber layer thickness at optical coherence tomography clock hour positions 6 o'clock, 7 o'clock, and 8 o'clock (inferior and inferior temporal) was best correlated with standard automated perimetry pattern deviation in visual field zones corresponding to the superior arcuate and nasal step regions (R² range, 0.34-0.57). These associations were much stronger than those between clock hour position 6 o'clock and the visual field zone corresponding to the inferior nasal step region (R² = 0.01).
Localized retinal nerve fiber layer thinning, measured by optical coherence tomography, is topographically related to decreased localized standard automated perimetry sensitivity in glaucoma patients.
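    The association measure used in the study, R² from a simple linear regression of visual-field pattern deviation on RNFL-thickness deviation, is straightforward to reproduce. The data below are simulated (the slope, noise level, and ranges are assumptions, not the patients' measurements); the sketch only shows the computation.

```python
import numpy as np

# Synthetic illustration of the R² computation; the numbers are
# simulated, not the study's data.

rng = np.random.default_rng(1)
rnfl_dev = rng.uniform(-40.0, 0.0, 43)                 # deviation from normal (µm)
pattern_dev = 0.15 * rnfl_dev + rng.normal(0, 1.5, 43)  # pattern deviation (dB)

slope, intercept = np.polyfit(rnfl_dev, pattern_dev, 1)
pred = slope * rnfl_dev + intercept
ss_res = np.sum((pattern_dev - pred) ** 2)              # residual sum of squares
ss_tot = np.sum((pattern_dev - pattern_dev.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                              # coefficient of determination
```

For simple linear regression with an intercept, this R² equals the squared Pearson correlation between the two variables.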

  11. Cerebral Glucose Metabolism is Associated with Verbal but not Visual Memory Performance in Community-Dwelling Older Adults.

    PubMed

    Gardener, Samantha L; Sohrabi, Hamid R; Shen, Kai-Kai; Rainey-Smith, Stephanie R; Weinborn, Michael; Bates, Kristyn A; Shah, Tejal; Foster, Jonathan K; Lenzo, Nat; Salvado, Olivier; Laske, Christoph; Laws, Simon M; Taddei, Kevin; Verdile, Giuseppe; Martins, Ralph N

    2016-03-31

    Increasing evidence suggests that Alzheimer's disease (AD) sufferers show region-specific reductions in cerebral glucose metabolism, as measured by [18F]-fluoro-2-deoxyglucose positron emission tomography (18F-FDG PET). We investigated the preclinical disease stage by cross-sectionally examining the association between global cognition, verbal and visual memory, and 18F-FDG PET standardized uptake value ratio (SUVR) in 43 healthy control individuals, subsequently focusing on differences between subjective memory complainers and non-memory complainers. The 18F-FDG PET regions of interest investigated include the hippocampus, amygdala, posterior cingulate, superior parietal, entorhinal cortices, frontal cortex, temporal cortex, and inferior parietal region. In the cohort as a whole, verbal logical memory immediate recall was positively associated with 18F-FDG PET SUVR in both the left hippocampus and right amygdala. There were no associations observed between global cognition, delayed recall in logical memory, or visual reproduction and 18F-FDG PET SUVR. Following stratification of the cohort into subjective memory complainers and non-complainers, verbal logical memory immediate recall was positively associated with 18F-FDG PET SUVR in the right amygdala in those with subjective memory complaints. There were no significant associations observed in non-memory complainers between 18F-FDG PET SUVR in regions of interest and cognitive performance. We observed subjective memory complaint-specific associations between 18F-FDG PET SUVR and immediate verbal memory performance in our cohort; however, we found no associations with delayed verbal recall or with visual memory performance.
It is here argued that the neural mechanisms underlying verbal and visual memory performance may in fact differ in their pathways, and the characteristic reduction of 18F-FDG PET SUVR observed in this and previous studies likely reflects the pathophysiological changes in specific brain regions that occur in preclinical AD.

  12. Changes in connectivity of the posterior default network node during visual processing in mild cognitive impairment: staged decline between normal aging and Alzheimer's disease.

    PubMed

    Krajcovicova, Lenka; Barton, Marek; Elfmarkova-Nemcova, Nela; Mikl, Michal; Marecek, Radek; Rektorova, Irena

    2017-12-01

    Visual processing difficulties are often present in Alzheimer's disease (AD), even in its pre-dementia phase (i.e. in mild cognitive impairment, MCI). The default mode network (DMN) modulates brain connectivity depending on the specific cognitive demand, including visual processes. The aim of the present study was to analyze specific changes in connectivity of the posterior DMN node (i.e. the posterior cingulate cortex and precuneus, PCC/P) associated with visual processing in 17 MCI patients and 15 AD patients as compared to 18 healthy controls (HC) using functional magnetic resonance imaging. We used psychophysiological interaction (PPI) analysis to detect specific alterations in PCC connectivity associated with visual processing while controlling for brain atrophy. In the HC group, we observed physiological changes in PCC connectivity with ventral visual stream areas and with PCC/P during the visual task, reflecting the successful involvement of these regions in visual processing. In the MCI group, the PCC connectivity changes were disturbed and remained significant only with the anterior precuneus. In between-group comparisons, we observed significant PPI effects in the right superior temporal gyrus in both MCI and AD as compared to HC. This change in connectivity may reflect an ineffective "compensatory" mechanism present in the early pre-dementia stages of AD or abnormal modulation of brain connectivity due to the disease pathology. With disease progression, these changes become more evident but less efficient in terms of compensation. This approach can separate MCI patients from HC with 77% sensitivity and 89% specificity.

  13. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    PubMed

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that the mechanisms of audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on the leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. 
European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  14. Dissociating Medial Temporal and Striatal Memory Systems With a Same/Different Matching Task: Evidence for Two Neural Systems in Human Recognition.

    PubMed

    Sinha, Neha; Glass, Arnold Lewis

    2017-01-01

    The medial temporal lobe and striatum have both been implicated as brain substrates of memory and learning. Here, we show a dissociation between these two memory systems using a same/different matching task, in which subjects judged whether four-letter strings were the same or different. The RT for "different" responses was determined by the left-to-right location of the first letter differing between the study and test strings, consistent with a left-to-right comparison of the study and test strings that terminates when a difference is found. This comparison process predicts that "same" responses should be slower than "different" responses. Nevertheless, "same" responses were faster than "different" responses. "Same" responses were associated with hippocampus activation; "different" responses were associated with both caudate and hippocampus activation. These findings are consistent with the dual-system hypothesis of mammalian memory and extend the model to human visual recognition.
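    The self-terminating, left-to-right comparison implied by the RT pattern can be sketched directly. The timing constants below are hypothetical; the sketch makes the prediction explicit: a "same" response must scan the whole string and so should be slowest, which is the opposite of the observed behavior and motivates the two-system account.

```python
# Sketch of a left-to-right, self-terminating string comparison.
# The base time and per-letter time are hypothetical constants.

def compare(study: str, test: str,
            t_per_letter: float = 50.0, t_base: float = 300.0):
    """Return ('same' | 'different', predicted RT in ms)."""
    for i, (s, t) in enumerate(zip(study, test)):
        if s != t:
            # terminate at the first mismatch: RT grows with its position
            return "different", t_base + (i + 1) * t_per_letter
    # a 'same' verdict requires scanning the entire string
    return "same", t_base + len(study) * t_per_letter

print(compare("ABCD", "ABXD"))  # → ('different', 450.0)
```

Under this model every "different" response (mismatch at positions 1-4) is at least as fast as a "same" response, so the empirically faster "same" responses cannot come from the same serial comparison.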

  15. Visual Functions of the Thalamus

    PubMed Central

    Usrey, W. Martin; Alitto, Henry J.

    2017-01-01

    The thalamus is the heavily interconnected partner of the neocortex. All areas of the neocortex receive afferent input from and send efferent projections to specific thalamic nuclei. Through these connections, the thalamus serves to provide the cortex with sensory input, and to facilitate interareal cortical communication and motor and cognitive functions. In the visual system, the lateral geniculate nucleus (LGN) of the dorsal thalamus is the gateway through which visual information reaches the cerebral cortex. Visual processing in the LGN includes spatial and temporal influences on visual signals that serve to adjust response gain, transform the temporal structure of retinal activity patterns, and increase the signal-to-noise ratio of the retinal signal while preserving its basic content. This review examines recent advances in our understanding of LGN function and circuit organization and places these findings in a historical context. PMID:28217740

  16. Hemispheric Asymmetries for Temporal Information Processing: Transient Detection versus Sustained Monitoring

    ERIC Educational Resources Information Center

    Okubo, Matia; Nicholls, Michael E. R.

    2008-01-01

    This study investigated functional differences in the processing of visual temporal information between the left and right hemispheres (LH and RH). Participants indicated whether or not a checkerboard pattern contained a temporal gap lasting between 10 and 40 ms. When the stimulus contained a temporal signal (i.e. a gap), responses were more…

  17. Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.

    PubMed

    Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi

    2017-07-01

    Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): a visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activities in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually-induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Data Visualization Challenges and Opportunities in User-Oriented Application Development

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Quinn, P.; Mitchell, A. E.; Baynes, K.; Shum, D.

    2015-12-01

    This talk introduces the audience to some of the very real challenges associated with visualizing data from disparate data sources as encountered during the development of real-world applications. In addition to the fundamental challenges of dealing with the data and imagery, this talk discusses usability problems encountered while trying to provide interactive and user-friendly visualization tools. At the end of this talk the audience will be aware of some of the pitfalls of data visualization along with tools and techniques to help mitigate them. There are many sources of variable-resolution visualizations of science data available to application developers, including NASA's Global Imagery Browse Services (GIBS); however, integrating and leveraging visualizations in modern applications faces a number of challenges, including:
    - Varying visualized Earth "tile sizes", resulting in challenges merging disparate sources
    - Multiple visualization frameworks and toolkits with varying strengths and weaknesses
    - Global composite imagery vs. imagery matching EOSDIS granule distribution
    - Challenges visualizing geographically overlapping data with different temporal bounds
    - User interaction with overlapping or collocated data
    - Complex data boundaries and shapes combined with multi-orbit data and polar projections
    - Discovering the availability of visualizations and the specific parameters, color palettes, and configurations used to produce them
    In addition to discussing the challenges and approaches involved in visualizing disparate data, we will discuss solutions and components we'll be making available as open source to encourage reuse and accelerate application development.
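    As one concrete instance of the tile-grid mismatch problem mentioned above: different imagery sources index tiles differently, so merging them requires converting coordinates into each grid. The sketch below shows the widely used Web Mercator ("slippy map") tile indexing; it is illustrative only, since services may serve other tiling schemes (e.g., geographic-projection tile matrices), which is precisely why reconciliation code like this is needed.

```python
import math

# Standard Web Mercator ("slippy map") tile indexing. Inputs are assumed
# to lie in lon ∈ [-180, 180) and Web Mercator's usable latitude range
# (about ±85.05°); other tiling grids need their own conversion.

def latlon_to_tile(lat_deg: float, lon_deg: float, zoom: int):
    """Map a WGS84 lat/lon to (x, y) tile indices at a given zoom level."""
    n = 2 ** zoom                                   # tiles per axis
    x = int((lon_deg + 180.0) / 360.0 * n)          # column from longitude
    lat = math.radians(lat_deg)
    # row from the Mercator projection of latitude (y = 0 at the top)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y

print(latlon_to_tile(0.0, 0.0, 2))  # → (2, 2)
```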

  19. Reprint of: Early Behavioural Facilitation by Temporal Expectations in Complex Visual-motor Sequences.

    PubMed

    Heideman, Simone G; van Ede, Freek; Nobre, Anna C

    2018-05-24

    In daily life, temporal expectations may derive from incidental learning of recurring patterns of intervals. We investigated the incidental acquisition and utilisation of combined temporal-ordinal (spatial/effector) structure in complex visual-motor sequences using a modified version of a serial reaction time (SRT) task. In this task, not only the series of targets/responses, but also the series of intervals between subsequent targets was repeated across multiple presentations of the same sequence. Each participant completed three sessions. In the first session, only the repeating sequence was presented. During the second and third session, occasional probe blocks were presented, where a new (unlearned) spatial-temporal sequence was introduced. We first confirm that participants not only got faster over time, but that they were slower and less accurate during probe blocks, indicating that they incidentally learned the sequence structure. Having established a robust behavioural benefit induced by the repeating spatial-temporal sequence, we next addressed our central hypothesis that implicit temporal orienting (evoked by the learned temporal structure) would have the largest influence on performance for targets following short (as opposed to longer) intervals between temporally structured sequence elements, paralleling classical observations in tasks using explicit temporal cues. We found that indeed, reaction time differences between new and repeated sequences were largest for the short interval, compared to the medium and long intervals, and that this was the case, even when comparing late blocks (where the repeated sequence had been incidentally learned), to early blocks (where this sequence was still unfamiliar). 
We conclude that incidentally acquired temporal expectations that follow a sequential structure can have a robust facilitatory influence on visually-guided behavioural responses and that, like more explicit forms of temporal orienting, this effect is most pronounced for sequence elements that are expected at short inter-element intervals. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  20. Bottlenose dolphin iris asymmetries enhance aerial and underwater vision

    NASA Astrophysics Data System (ADS)

    Rivamonte, Andre

    2009-02-01

    When the iris of the Bottlenose dolphin (Tursiops truncatus) contracts, it constrains the path of light that can focus onto the two areas of the retina having a finer retinal mosaic. Under high ambient light conditions the operculum of the iris shields the lens, forming in the process two asymmetrically shaped, sized and positioned slit pupils. Tracing rays of light in the reverse direction through the pupils from the retinal regions associated with higher resolution confirms the behaviorally observed preferred aerial and underwater viewing directions. In the forward and downward viewing direction, the larger temporal pupil admits light that is focused by the weakly refractive margin of a bifocal lens onto the temporal area centralis, compensating for the addition of the optically strong front surface of the cornea in air. A schematic dolphin eye model incorporating a bifocal lens offers an explanation for a dolphin's comparable visual acuities in air and water under both high and low ambient light conditions. Comparison of methods for curve-fitting psychometric ogive functions to behavioral visual acuity and spectral sensitivity data is discussed.

  1. The brain adapts to orthography with experience: Evidence from English and Chinese

    PubMed Central

    Cao, Fan; Brennan, Christine; Booth, James R.

    2016-01-01

    Using functional magnetic resonance imaging (fMRI), we examined the process of language specialization in the brain by comparing developmental changes in two contrastive orthographies: Chinese and English. In a visual word rhyming judgment task, we found a significant interaction between age and language in the left inferior parietal lobule and left superior temporal gyrus, which was due to greater developmental increases in English than in Chinese. Moreover, we found that, only in English children, higher skill was correlated with greater activation in the left inferior parietal lobule. These findings suggest that the regions associated with phonological processing are essential in English reading development. We also found greater developmental increases in English than in Chinese in the left inferior temporal gyrus, suggesting refinement of this region for fine-grained word form recognition. In contrast, greater developmental increases in Chinese than in English were found in the right middle occipital gyrus, suggesting the importance of holistic visual-orthographic analysis in Chinese reading acquisition. Our results suggest that the brain adapts to the special features of the orthography by engaging relevant brain regions to a greater degree over development. PMID:25444089

  2. Neural dynamics of reward probability coding: a Magnetoencephalographic study in humans

    PubMed Central

    Thomas, Julie; Vanni-Mercier, Giovanna; Dreher, Jean-Claude

    2013-01-01

    Prediction of future rewards and the discrepancy between actual and expected outcomes (prediction error) are crucial signals for adaptive behavior. In humans, a number of fMRI studies have demonstrated that reward probability modulates these two signals in a large brain network. Yet, the spatio-temporal dynamics underlying the neural coding of reward probability remain unknown. Here, using magnetoencephalography, we investigated the neural dynamics of prediction and reward prediction error computations while subjects learned to associate cues of slot machines with monetary rewards of different probabilities. We showed that event-related magnetic fields (ERFs) arising from the visual cortex coded the expected reward value 155 ms after the cue, demonstrating that reward value signals emerge early in the visual stream. Moreover, a prediction error was reflected in an ERF peaking 300 ms after the rewarded outcome and showing decreasing amplitude with higher reward probability. This prediction error signal was generated in a network including the anterior and posterior cingulate cortex. These findings pinpoint the spatio-temporal characteristics underlying reward probability coding. Together, our results provide insights into the neural dynamics underlying the ability to learn probabilistic stimulus-reward contingencies. PMID:24302894
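    The probability-dependent amplitude pattern reported above mirrors a standard reward-prediction-error computation. The sketch below is a minimal illustration; the linear (Rescorla-Wagner-style) form, and any mapping from this quantity to ERF amplitude, are assumptions rather than the study's fitted model.

```python
# Minimal sketch of a prediction-error signal: the error after a
# rewarded outcome shrinks as reward probability grows, matching the
# reported decrease in ERF amplitude with higher reward probability.
# The linear form is an illustrative assumption.

def prediction_error(outcome: float, p_reward: float) -> float:
    """Rescorla-Wagner-style error for a binary reward (outcome in {0, 1})."""
    return outcome - p_reward

# A rewarded outcome is more "surprising" the less probable it was.
errors = [prediction_error(1.0, p) for p in (0.25, 0.50, 0.75)]
```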

  3. Biologically Inspired Visual Model With Preliminary Cognition and Active Attention Adjustment.

    PubMed

    Qiao, Hong; Xi, Xuanyang; Li, Yinlin; Wu, Wei; Li, Fengfu

    2015-11-01

    Recently, many computational models have been proposed to simulate the visual cognition process. For example, the hierarchical Max-Pooling (HMAX) model was proposed according to the hierarchical and bottom-up structure of V1 to V4 in the ventral pathway of the primate visual cortex, which could achieve position- and scale-tolerant recognition. In our previous work, we introduced memory and association into the HMAX model to simulate the visual cognition process. In this paper, we improve our theoretical framework by mimicking a more elaborate structure and function of the primate visual cortex. We mainly focus on the new formation of memory and association in visual processing under different circumstances, as well as preliminary cognition and active adjustment in the inferior temporal cortex, which are absent in the HMAX model. The main contributions of this paper are: 1) in the memory and association part, we apply deep convolutional neural networks to extract various episodic features of the objects, since people use different features for object recognition. Moreover, to achieve fast and robust recognition in the retrieval and association process, different types of features are stored in separate clusters and the feature binding of the same object is stimulated in a loop discharge manner; and 2) in the preliminary cognition and active adjustment part, we introduce preliminary cognition to classify different types of objects, since distinct neural circuits in the human brain are used for the identification of various types of objects. Furthermore, active cognition adjustment for occlusion and orientation is implemented in the model to mimic the top-down effect in the human cognition process. Finally, our model is evaluated on two face databases, CAS-PEAL-R1 and AR. The results demonstrate the efficiency of our model for visual recognition, with much lower memory storage requirements and better performance than traditional purely computational methods.
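The position tolerance attributed to the HMAX hierarchy above rests on max pooling over local neighborhoods. A minimal NumPy sketch of this C-layer operation (window size and stride are illustrative, not taken from the paper):

```python
import numpy as np

def max_pool(feature_map, size=2, stride=2):
    """HMAX-style C-layer pooling: keep the maximum response in each local
    window, which makes the output tolerant to small shifts of the input."""
    h, w = feature_map.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            win = feature_map[i*stride:i*stride+size, j*stride:j*stride+size]
            out[i, j] = win.max()
    return out

# A small shift of a single strong response leaves the pooled map unchanged.
a = np.zeros((4, 4)); a[0, 0] = 1.0
b = np.zeros((4, 4)); b[1, 1] = 1.0  # same response, shifted by one pixel
assert np.array_equal(max_pool(a), max_pool(b))
```

Because only the maximum in each window survives, small displacements of a feature leave the pooled map unchanged, which is the source of the model's shift tolerance.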

  4. A Web-Based Interactive Platform for Co-Clustering Spatio-Temporal Data

    NASA Astrophysics Data System (ADS)

    Wu, X.; Poorthuis, A.; Zurita-Milla, R.; Kraak, M.-J.

    2017-09-01

    Since current studies on clustering analysis mainly focus on exploring spatial or temporal patterns separately, a co-clustering algorithm is utilized in this study to enable the concurrent analysis of spatio-temporal patterns. To allow users to adopt and adapt the algorithm for their own analysis, it is integrated within the server side of an interactive web-based platform. The client side of the platform, running within any modern browser, is a graphical user interface (GUI) with multiple linked visualizations that facilitates the understanding, exploration and interpretation of the raw dataset and co-clustering results. Users can also upload their own datasets and adjust clustering parameters within the platform. To illustrate the use of this platform, an annual temperature dataset from 28 weather stations over 20 years in the Netherlands is used. After the dataset is loaded, it is visualized in a set of linked visualizations: a geographical map, a timeline and a heatmap. This aids the user in understanding the nature of their dataset and the appropriate selection of co-clustering parameters. Once the dataset is processed by the co-clustering algorithm, the results are visualized in small multiples, a heatmap and a timeline to provide various views for better understanding and further interpretation. Since the visualization and analysis are integrated in a seamless platform, the user can explore different sets of co-clustering parameters and instantly view the results in order to do iterative, exploratory data analysis. As such, this interactive web-based platform allows users to analyze spatio-temporal data using the co-clustering method and also helps the understanding of the results using multiple linked visualizations.
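Co-clustering as used above assigns stations (rows) and time periods (columns) to clusters simultaneously, so that each row-cluster by column-cluster block is as homogeneous as possible. The platform's actual algorithm is not specified in the abstract; the following is a toy alternating-minimization sketch under that block-homogeneity objective, with all names and parameters hypothetical:

```python
import numpy as np

def coclusters(X, n_row, n_col, iters=15, restarts=10, seed=0):
    """Toy co-clustering by alternating minimization: rows and columns are
    jointly assigned to clusters so each block is as constant as possible.
    Hypothetical sketch, not the algorithm used by the platform."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(restarts):
        r = rng.integers(n_row, size=X.shape[0])
        c = rng.integers(n_col, size=X.shape[1])
        for _ in range(iters):
            M = np.zeros((n_row, n_col))  # mean of each co-cluster block
            for i in range(n_row):
                for j in range(n_col):
                    blk = X[np.ix_(r == i, c == j)]
                    M[i, j] = blk.mean() if blk.size else 0.0
            # reassign each row, then each column, to its best-fitting cluster
            r = np.array([min(range(n_row),
                              key=lambda i: ((X[n] - M[i, c]) ** 2).sum())
                          for n in range(X.shape[0])])
            c = np.array([min(range(n_col),
                              key=lambda j: ((X[:, m] - M[r, j]) ** 2).sum())
                          for m in range(X.shape[1])])
        err = ((X - M[r][:, c]) ** 2).sum()  # block-reconstruction error
        if best is None or err < best[0]:
            best = (err, r, c)
    return best  # (error, row labels, column labels)

# Stations 0-4 are warm in periods 0-4; stations 5-9 are warm in periods 5-9.
X = np.zeros((10, 10)); X[:5, :5] = 5.0; X[5:, 5:] = 5.0
err, rows, cols = coclusters(X, 2, 2)
```

Random restarts guard against poor initializations; the row and column labels with the lowest block-reconstruction error are kept.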

  5. Haptic perception and body representation in lateral and medial occipito-temporal cortices.

    PubMed

    Costantini, Marcello; Urgesi, Cosimo; Galati, Gaspare; Romani, Gian Luca; Aglioti, Salvatore M

    2011-04-01

    Although vision is the primary sensory modality that humans and other primates use to identify objects in the environment, we can recognize crucial object features (e.g., shape, size) using the somatic modality. Previous studies have shown that the occipito-temporal areas dedicated to the visual processing of object forms, faces and bodies also show category-selective responses when the preferred stimuli are haptically explored out of view. Visual processing of human bodies engages specific areas in lateral (extrastriate body area, EBA) and medial (fusiform body area, FBA) occipito-temporal cortex. This study aimed at exploring the relative involvement of EBA and FBA in the haptic exploration of body parts. During fMRI scanning, participants were asked to haptically explore either real-size fake body parts or objects. We found a selective activation of right and left EBA, but not of right FBA, while participants haptically explored body parts as compared to real objects. This suggests that EBA may integrate visual body representations with somatosensory information regarding body parts and form a multimodal representation of the body. Furthermore, both left and right EBA showed a comparable level of body selectivity during haptic perception and visual imagery. However, right but not left EBA was more activated during haptic exploration than visual imagery of body parts, ruling out that the response to haptic body exploration was entirely due to the use of visual imagery. Overall, the results point to the existence of different multimodal body representations in the occipito-temporal cortex which are activated during perception and imagery of human body parts. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Experimental assessment of energy requirements and tool tip visibility for photoacoustic-guided endonasal surgery

    NASA Astrophysics Data System (ADS)

    Lediju Bell, Muyinatu A.; Dagle, Alicia B.; Kazanzides, Peter; Boctor, Emad M.

    2016-03-01

    Endonasal transsphenoidal surgery is an effective approach for pituitary adenoma resection, yet it poses the serious risk of internal carotid artery injury. We propose to visualize these carotid arteries, which are hidden by bone, with an optical fiber attached to a surgical tool and a transcranial ultrasound probe placed on the patient's temple (i.e. intraoperative photoacoustic imaging). To investigate energy requirements for vessel visualization, experiments were conducted with a phantom containing ex vivo sheep brain, ex vivo bovine blood, and 0.5-2.5 mm thick human cadaveric skull specimens. Photoacoustic images were acquired with 1.2-9.3 mJ laser energy, and the resulting vessel contrast was measured at each energy level. The distal vessel boundary was difficult to distinguish at the chosen contrast threshold for visibility (4.5 dB), which was used to determine the minimum energies for vessel visualization. The blood vessel was successfully visualized in the presence of the 0-2.0 mm thick sphenoid and temporal bones with up to 19.2 dB contrast. The minimum energy required ranged from 1.2-5.0 mJ, 4.2-5.9 mJ, and 4.6-5.2 mJ for the 1.0 mm temporal and 0-1.5 mm sphenoid bones, 1.5 mm temporal and 0-0.5 mm sphenoid bones, and 2.0 mm temporal and 0-0.5 mm sphenoid bones, respectively, which corresponds to a fluence range of 4-21 mJ/cm². These results hold promise for vessel visualization within safety limits. In a separate experiment, a mock tool tip was placed, providing satisfactory preliminary evidence that surgical tool tips can be visualized simultaneously with blood vessels.
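The contrast and fluence figures above follow from standard definitions, sketched below under two assumptions not confirmed by the abstract: that contrast is 20*log10 of the ratio of mean signal amplitudes inside and outside the vessel region, and that fluence is pulse energy divided by illuminated area:

```python
import math

def contrast_db(signal_in, signal_out):
    """Amplitude contrast in dB between a region of interest and background.
    Assumes a 20*log10 amplitude-ratio definition; the study's exact
    formulation may differ."""
    return 20.0 * math.log10(signal_in / signal_out)

def fluence_mj_per_cm2(pulse_energy_mj, spot_area_cm2):
    """Laser fluence: pulse energy divided by illuminated area."""
    return pulse_energy_mj / spot_area_cm2

# With the study's 4.5 dB visibility threshold, a vessel about 1.68x brighter
# than background is just visible (20*log10(1.68) is about 4.5 dB).
assert abs(contrast_db(1.68, 1.0) - 4.5) < 0.05
# A hypothetical 5 mJ pulse over 0.5 cm^2 gives 10 mJ/cm^2, inside 4-21.
assert fluence_mj_per_cm2(5.0, 0.5) == 10.0
```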

  7. Visual body perception in anorexia nervosa.

    PubMed

    Urgesi, Cosimo; Fornasari, Livia; Perini, Laura; Canalaz, Francesca; Cremaschi, Silvana; Faleschini, Laura; Balestrieri, Matteo; Fabbro, Franco; Aglioti, Salvatore Maria; Brambilla, Paolo

    2012-05-01

    Disturbance of body perception is a central aspect of anorexia nervosa (AN), and several neuroimaging studies have documented structural and functional alterations of occipito-temporal cortices involved in visual body processing. However, it is unclear whether these perceptual deficits involve more basic aspects of others' body perception. A consecutive sample of 15 adolescent patients with AN was compared with a group of 15 age- and gender-matched controls in delayed matching-to-sample tasks requiring the visual discrimination of the form or of the action of others' body. Patients showed better visual discrimination performance than controls in detail-based processing of body forms but not of body actions, which positively correlated with their increased tendency to convert a signal of punishment into a signal of reinforcement (higher persistence scores). The paradoxical advantage of patients with AN in detail-based body processing may be associated with their tendency to routinely explore body parts as a consequence of their obsessive worries about body appearance. Copyright © 2012 Wiley Periodicals, Inc.

  8. Role of semantic paradigms for optimization of language mapping in clinical FMRI studies.

    PubMed

    Zacà, D; Jarso, S; Pillai, J J

    2013-10-01

    The optimal paradigm choice for language mapping in clinical fMRI studies is challenging due to the variability in activation among different paradigms, the contribution to activation of cognitive processes other than language, and the difficulties in monitoring patient performance. In this study, we compared language localization and lateralization between 2 commonly used clinical language paradigms and 3 newly designed dual-choice semantic paradigms to define a streamlined and adequate language-mapping protocol. Twelve healthy volunteers performed 5 language paradigms: Silent Word Generation, Sentence Completion, Visual Antonym Pair, Auditory Antonym Pair, and Noun-Verb Association. Group analysis was performed to assess statistically significant differences in fMRI percentage signal change and lateralization index among these paradigms in 5 ROIs: inferior frontal gyrus, superior frontal gyrus, and middle frontal gyrus for expressive language activation, and middle temporal gyrus and superior temporal gyrus for receptive language activation. In the expressive ROIs, Silent Word Generation was the most robust and best lateralizing paradigm (greater percentage signal change and lateralization index than the semantic paradigms at the P < .01 and P < .05 levels, respectively). In the receptive ROIs, Sentence Completion and Noun-Verb Association were the most robust activators (greater percentage signal change than the other paradigms, P < .01). All except Auditory Antonym Pair were good lateralizing tasks (the lateralization index for Auditory Antonym Pair was significantly lower than that of the other paradigms, P < .05). The combination of Silent Word Generation and ≥1 visual semantic paradigm, such as Sentence Completion and Noun-Verb Association, is adequate to determine language localization and lateralization; Noun-Verb Association has the additional advantage of objective monitoring of patient performance.
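The lateralization index reported above is conventionally computed as (L - R)/(L + R) over activated voxels in homologous left and right ROIs; the study's exact thresholding is not given here, so the voxel counts below are hypothetical:

```python
def lateralization_index(left_voxels, right_voxels):
    """LI = (L - R) / (L + R): +1 is fully left-lateralized, -1 fully right.
    Values above about 0.2 are often taken as left-lateralized, but the
    threshold convention varies between studies."""
    return (left_voxels - right_voxels) / (left_voxels + right_voxels)

# Hypothetical counts of suprathreshold voxels in inferior frontal gyrus:
li = lateralization_index(420, 180)  # (420 - 180) / (420 + 180) = 0.4
assert abs(li - 0.4) < 1e-12
```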

  9. Project DyAdd: Visual Attention in Adult Dyslexia and ADHD

    ERIC Educational Resources Information Center

    Laasonen, Marja; Salomaa, Jonna; Cousineau, Denis; Leppamaki, Sami; Tani, Pekka; Hokkanen, Laura; Dye, Matthew

    2012-01-01

    In this study of the project DyAdd, three aspects of visual attention were investigated in adults (18-55 years) with dyslexia (n = 35) or attention deficit/hyperactivity disorder (ADHD, n = 22), and in healthy controls (n = 35). Temporal characteristics of visual attention were assessed with Attentional Blink (AB), capacity of visual attention…

  10. Psychoacoustical Measures in Individuals with Congenital Visual Impairment.

    PubMed

    Kumar, Kaushlendra; Thomas, Teenu; Bhat, Jayashree S; Ranjan, Rajesh

    2017-12-01

    In individuals with congenital visual impairment, one modality (vision) is impaired, and this impairment is compensated for by the other sensory modalities. There is evidence that visually impaired individuals perform better than normally sighted individuals on various auditory tasks such as localization, auditory memory, verbal memory, auditory attention, and other behavioural tasks. The current study aimed to compare temporal resolution, frequency resolution, and speech perception in noise between individuals with congenital visual impairment and normally sighted individuals. Temporal resolution, frequency resolution, and speech perception in noise were measured using MDT, GDT, DDT, SRDT, and SNR50, respectively. Twelve participants with congenital visual impairment, aged 18 to 40 years, were recruited, along with an equal number of normally sighted participants. All participants had normal hearing sensitivity and normal middle-ear function. Individuals with visual impairment showed superior thresholds on MDT, SRDT, and SNR50 compared with normally sighted individuals. This may be due to the complexity of the tasks: MDT, SRDT, and SNR50 are more complex than GDT and DDT. Individuals with visual impairment thus showed superior auditory processing and speech perception on complex auditory perceptual tasks.

  11. Perceptual learning modifies the functional specializations of visual cortical areas.

    PubMed

    Chen, Nihong; Cai, Peng; Zhou, Tiangang; Thompson, Benjamin; Fang, Fang

    2016-05-17

    Training can improve performance of perceptual tasks. This phenomenon, known as perceptual learning, is strongest for the trained task and stimulus, leading to a widely accepted assumption that the associated neuronal plasticity is restricted to brain circuits that mediate performance of the trained task. Nevertheless, learning does transfer to other tasks and stimuli, implying the presence of more widespread plasticity. Here, we trained human subjects to discriminate the direction of coherent motion stimuli. The behavioral learning effect substantially transferred to noisy motion stimuli. We used transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms underlying the transfer of learning. The TMS experiment revealed dissociable, causal contributions of V3A (one of the visual areas in the extrastriate visual cortex) and MT+ (middle temporal/medial superior temporal cortex) to coherent and noisy motion processing. Surprisingly, the contribution of MT+ to noisy motion processing was replaced by V3A after perceptual training. The fMRI experiment complemented and corroborated the TMS finding. Multivariate pattern analysis showed that, before training, among visual cortical areas, coherent and noisy motion was decoded most accurately in V3A and MT+, respectively. After training, both kinds of motion were decoded most accurately in V3A. Our findings demonstrate that the effects of perceptual learning extend far beyond the retuning of specific neural populations for the trained stimuli. Learning could dramatically modify the inherent functional specializations of visual cortical areas and dynamically reweight their contributions to perceptual decisions based on their representational qualities. These neural changes might serve as the neural substrate for the transfer of perceptual learning.
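The multivariate pattern analysis above decodes stimulus identity from distributed voxel responses. As a simplified stand-in for the study's (unspecified) classifier, a leave-one-out nearest-centroid decoder on toy "voxel patterns" illustrates the idea:

```python
import numpy as np

def loo_nearest_centroid_accuracy(patterns, labels):
    """Leave-one-out decoding: classify each held-out pattern by the nearest
    class mean of the remaining patterns (Euclidean distance)."""
    patterns, labels = np.asarray(patterns, float), np.asarray(labels)
    correct = 0
    for i in range(len(patterns)):
        mask = np.arange(len(patterns)) != i  # hold out pattern i
        classes = np.unique(labels[mask])
        cents = [patterns[mask & (labels == c)].mean(axis=0) for c in classes]
        d = [np.linalg.norm(patterns[i] - ctr) for ctr in cents]
        pred = classes[int(np.argmin(d))]
        correct += pred == labels[i]
    return correct / len(patterns)

# Toy "voxel patterns": two motion directions with distinct mean responses.
X = np.array([[1.0, 0.1], [0.9, 0.2], [1.1, 0.0],
              [0.1, 1.0], [0.2, 0.9], [0.0, 1.1]])
y = np.array([0, 0, 0, 1, 1, 1])
acc = loo_nearest_centroid_accuracy(X, y)  # separable, so decoding is perfect
```

Real MVPA pipelines typically use cross-validated linear classifiers over many voxels; the nearest-centroid rule is the simplest member of that family.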

  12. RNA sequencing from neural ensembles activated during fear conditioning in the mouse temporal association cortex

    PubMed Central

    Cho, Jin-Hyung; Huang, Ben S.; Gray, Jesse M.

    2016-01-01

    The stable formation of remote fear memories is thought to require neuronal gene induction in cortical ensembles that are activated during learning. However, the set of genes expressed specifically in these activated ensembles is not known; knowledge of such transcriptional profiles may offer insights into the molecular program underlying stable memory formation. Here we use RNA-Seq to identify genes whose expression is enriched in activated cortical ensembles labeled during associative fear learning. We first establish that mouse temporal association cortex (TeA) is required for remote recall of auditory fear memories. We then perform RNA-Seq in TeA neurons that are labeled by the activity reporter Arc-dVenus during learning. We identify 944 genes with enriched expression in Arc-dVenus+ neurons. These genes include markers of L2/3, L5b, and L6 excitatory neurons but not glial or inhibitory markers, confirming Arc-dVenus to be an excitatory neuron-specific but non-layer-specific activity reporter. Cross comparisons to other transcriptional profiles show that 125 of the enriched genes are also activity-regulated in vitro or induced by visual stimulus in the visual cortex, suggesting that they may be induced generally in the cortex in an experience-dependent fashion. Prominent among the enriched genes are those encoding potassium channels that down-regulate neuronal activity, suggesting the possibility that part of the molecular program induced by fear conditioning may initiate homeostatic plasticity. PMID:27557751

  13. Neural Correlates of Individual Differences in Infant Visual Attention and Recognition Memory

    PubMed Central

    Reynolds, Greg D.; Guy, Maggie W.; Zhang, Dantong

    2010-01-01

    Past studies have identified individual differences in infant visual attention based upon peak look duration during initial exposure to a stimulus. Colombo and colleagues (e.g., Colombo & Mitchell, 1990) found that infants that demonstrate brief visual fixations (i.e., short lookers) during familiarization are more likely to demonstrate evidence of recognition memory during subsequent stimulus exposure than infants that demonstrate long visual fixations (i.e., long lookers). The current study utilized event-related potentials to examine possible neural mechanisms associated with individual differences in visual attention and recognition memory for 6- and 7.5-month-old infants. Short- and long-looking infants viewed images of familiar and novel objects during ERP testing. There was a stimulus type by looker type interaction at temporal and frontal electrodes on the late slow wave (LSW). Short lookers demonstrated a LSW that was significantly greater in amplitude in response to novel stimulus presentations. No significant differences in LSW amplitude were found based on stimulus type for long lookers. These results indicate deeper processing and recognition memory of the familiar stimulus for short lookers. PMID:21666833

  14. [Atypical optic neuritis in systemic lupus erythematosus (SLE)].

    PubMed

    Eckstein, A; Kötter, I; Wilhelm, H

    1995-11-01

    A 67-year-old woman experienced acute unilateral visual loss accompanied by pain on eye movement. There was a marked relative afferent pupillary defect and a nerve fiber bundle defect in the upper half of the visual field. Optic discs were normal. After 4 days, vision worsened to motion detection and only a temporal island was left in the visual field. The optic disc margin was blurred. She had been suffering from renal insufficiency for thirty years. Immunoserologic examination revealed elevated ANA and DS-DNA antibody titers. Optic neuritis in systemic lupus erythematosus was diagnosed; it is called atypical because of its association with a systemic disease and the advanced age of the patient. The patient was treated with 100 mg prednisolone/day, slowly tapered. Within 6 weeks visual acuity improved to 0.6 and the visual field normalized except for a small nerve fiber bundle defect. Autoimmune optic neuritis often responds to treatment with corticosteroids. Early onset of treatment is important. Immunopathologic examinations are an important diagnostic tool in atypical optic neuritis. Their results may even have consequences for the treatment of the underlying disease.

  15. Clinical characteristics of children with severe visual impairment but favorable retinal structural outcomes from the Early Treatment for Retinopathy of Prematurity (ETROP) study.

    PubMed

    Siatkowski, R Michael; Good, William V; Summers, C Gail; Quinn, Graham E; Tung, Betty

    2013-04-01

    To describe visual function and associated characteristics at the 6-year examination in children enrolled in the Early Treatment for Retinopathy of Prematurity Study who had unfavorable visual outcomes despite favorable structural outcomes in one or both eyes. The clinical examination records of children completing the 6-year follow-up examination were retrospectively reviewed. Eligible subjects were those with visual acuity of ≤20/200 in each eye (where recordable) and a normal fundus or straightening of the temporal retinal vessels with or without macular ectopia in at least one eye. Data regarding visual function, retinal structure, presence of nystagmus, optic atrophy, optic disk cupping, seizures/shunts, and Functional Independence Measure for Children (ie, WeeFIM: pediatric functional independence measure) developmental test scores were reviewed. Of 342 participants who completed the 6-year examination, 39 (11%) met inclusion criteria. Of these, 29 (74%) had normal retinal structure, 18 (46%) had optic atrophy, and 3 (8%) had increased cupping of the optic disk in at least one eye. Latent and/or manifest nystagmus occurred in 30 children (77%). The presence of nystagmus was not related to the presence of optic atrophy. Of the 39 children, 28 (72%) had a below-normal WeeFIM score. In 25 participants (7%) completing the 6-year examination, cortical visual impairment was considered the primary cause of visual loss. The remainder likely had components of both anterior and posterior visual pathway disease. Clinical synthesis of ocular anatomy and visual and neurologic function is required to determine the etiology of poor vision in these children. Copyright © 2013 American Association for Pediatric Ophthalmology and Strabismus. Published by Mosby, Inc. All rights reserved.

  16. Neuroanatomical substrates of action perception and understanding: an anatomic likelihood estimation meta-analysis of lesion-symptom mapping studies in brain injured patients

    PubMed Central

    Urgesi, Cosimo; Candidi, Matteo; Avenanti, Alessio

    2014-01-01

    Several neurophysiologic and neuroimaging studies have suggested that motor and perceptual systems are tightly linked along a continuum rather than providing segregated mechanisms supporting different functions. Using correlational approaches, these studies demonstrated that action observation activates not only visual but also motor brain regions. On the other hand, brain stimulation and brain lesion evidence allow tackling the critical question of whether our action representations are necessary to perceive and understand others’ actions. In particular, recent neuropsychological studies have shown that patients with temporal, parietal, and frontal lesions exhibit a number of possible deficits in the visual perception and the understanding of others’ actions. The specific anatomical substrates of such neuropsychological deficits, however, are still a matter of debate. Here we review the existing literature on this issue and perform an anatomic likelihood estimation meta-analysis of studies using lesion-symptom mapping methods on the causal relation between brain lesions and non-linguistic action perception and understanding deficits. The meta-analysis encompassed data from 361 patients tested in 11 studies and identified regions in the inferior frontal cortex, the inferior parietal cortex and the middle/superior temporal cortex, whose damage is consistently associated with poor performance in action perception and understanding tasks across studies. Interestingly, these areas correspond to the three nodes of the action observation network that are strongly activated in response to visual action perception in neuroimaging research and that have been targeted in previous brain stimulation studies. Thus, brain lesion mapping research provides converging causal evidence that premotor, parietal and temporal regions play a crucial role in action recognition and understanding. PMID:24910603

  17. Visualizing Terrestrial and Aquatic Systems in 3-D

    EPA Science Inventory

    The environmental modeling community has a long-standing need for affordable, easy-to-use tools that support 3-D visualization of complex spatial and temporal model output. The Visualization of Terrestrial and Aquatic Systems project (VISTAS) aims to help scientists produce effe...

  18. Near Real Time Integration of Satellite and Radar Data for Probabilistic Nearcasting of Severe Weather

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Quinn, P.; Mitchell, A. E.; Baynes, K.; Shum, D.

    2014-12-01

    This talk introduces the audience to some of the very real challenges associated with visualizing data from disparate data sources as encountered during the development of real world applications. In addition to the fundamental challenges of dealing with the data and imagery, this talk discusses usability problems encountered while trying to provide interactive and user-friendly visualization tools. At the end of this talk the audience will be aware of some of the pitfalls of data visualization along with tools and techniques to help mitigate them. There are many sources of variable resolution visualizations of science data available to application developers including NASA's Global Imagery Browse Services (GIBS), however integrating and leveraging visualizations in modern applications faces a number of challenges, including: - Varying visualized Earth "tile sizes" resulting in challenges merging disparate sources - Multiple visualization frameworks and toolkits with varying strengths and weaknesses - Global composite imagery vs. imagery matching EOSDIS granule distribution - Challenges visualizing geographically overlapping data with different temporal bounds - User interaction with overlapping or collocated data - Complex data boundaries and shapes combined with multi-orbit data and polar projections - Discovering the availability of visualizations and the specific parameters, color palettes, and configurations used to produce them In addition to discussing the challenges and approaches involved in visualizing disparate data, we will discuss solutions and components we'll be making available as open source to encourage reuse and accelerate application development.

  19. Emotion-induced trade-offs in spatiotemporal vision.

    PubMed

    Bocanegra, Bruno R; Zeelenberg, René

    2011-05-01

    It is generally assumed that emotion facilitates human vision in order to promote adaptive responses to a potential threat in the environment. Surprisingly, we recently found that emotion in some cases impairs the perception of elementary visual features (Bocanegra & Zeelenberg, 2009b). Here, we demonstrate that emotion improves fast temporal vision at the expense of fine-grained spatial vision. We tested participants' threshold resolution with Landolt circles containing a small spatial or brief temporal discontinuity. The prior presentation of a fearful face cue, compared with a neutral face cue, impaired spatial resolution but improved temporal resolution. In addition, we show that these benefits and deficits were triggered selectively by the global configural properties of the faces, which were transmitted only through low spatial frequencies. Critically, the common locus of these opposite effects suggests a trade-off between magno- and parvocellular-type visual channels, which contradicts the common assumption that emotion invariably improves vision. We show that, rather than being a general "boost" for all visual features, affective neural circuits sacrifice the slower processing of small details for a coarser but faster visual signal.

  20. Application of Information Visualization Techniques in Representing Patients' Temporal Personal History Data

    NASA Astrophysics Data System (ADS)

    Noah, Shahrul Azman; Yaakob, Suraya; Shahar, Suzana

    The anthropometric and nutrient records of patients are usually vast in quantity, complex, and temporal in nature. If such data are not displayed using effective techniques, they become difficult to absorb and impose a cognitive burden on users. The aim of this study is to apply and evaluate Information Visualization (IV) techniques for displaying the Personal History Data (PHD) of patients to dietitians during counseling sessions. Since PHD values change from one counseling session to the next, our implementation focuses mainly on quantitative temporal data such as Body Mass Index (BMI), blood pressure, and blood glucose readings. These data are mapped onto an orientation-circle type of visual representation, whereas data about medicinal and supplement intake are mapped onto timeline segments encoded by line thickness and color. Usability testing was conducted among dietitians at the Faculty of Allied Health Sciences, UKM. The results show that the visual representations are capable of summarizing complex data, which eases the dietitians' task of checking the PHD.
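The quantitative-to-visual mapping described above can be sketched in miniature: compute each session's BMI, bucket it into a category, and let each category drive a display attribute such as color. The WHO adult cut-offs below are standard, but the patient data are purely illustrative and the paper's orientation-circle encoding is more elaborate than this:

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(b):
    """WHO adult BMI categories; each would map to a colour in the timeline."""
    if b < 18.5:
        return "underweight"
    if b < 25.0:
        return "normal"
    if b < 30.0:
        return "overweight"
    return "obese"

# BMI across successive counseling sessions for one (hypothetical) patient:
sessions = [(92.0, 1.70), (88.5, 1.70), (84.0, 1.70)]  # (weight, height)
timeline = [bmi_category(bmi(w, h)) for w, h in sessions]
# 92/1.7^2 is about 31.8 (obese); 88.5 about 30.6 (obese); 84 about 29.1
# (overweight), so the timeline shows the patient's progress across sessions.
```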

  1. Temporal ventriloquism: crossmodal interaction on the time dimension. 1. Evidence from auditory-visual temporal order judgment.

    PubMed

    Bertelson, Paul; Aschersleben, Gisa

    2003-10-01

    In the well-known visual bias of auditory location (alias the ventriloquist effect), auditory and visual events presented in separate locations appear closer together, provided the presentations are synchronized. Here, we consider the possibility of the converse phenomenon: crossmodal attraction on the time dimension conditional on spatial proximity. Participants judged the order of occurrence of sound bursts and light flashes, respectively, separated in time by varying stimulus onset asynchronies (SOAs) and delivered either in the same or in different locations. Presentation was organized using randomly mixed psychophysical staircases, by which the SOA was reduced progressively until a point of uncertainty was reached. This point was reached at longer SOAs with the sounds in the same frontal location as the flashes than in different places, showing that apparent temporal separation is effectively longer in the first condition. Together with a similar one obtained recently in a case of tactile-visual discrepancy, this result supports a view in which timing and spatial layout of the inputs play to some extent inter-changeable roles in the pairing operation at the base of crossmodal interaction.
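The staircase procedure described above (SOA reduced after correct order judgments, increased after errors, threshold estimated from reversal points) can be sketched with a deterministic simulated observer; the start value, step size, and reversal count are illustrative, not the study's parameters:

```python
def run_staircase(respond, start=200.0, step=10.0, n_reversals=6):
    """Simple 1-down/1-up staircase: decrease the SOA after a correct
    temporal-order judgment, increase it after an error, and estimate the
    threshold as the mean of the SOAs at which the direction reversed."""
    soa, direction = start, -1  # start by making the task harder
    reversals = []
    while len(reversals) < n_reversals:
        new_dir = -1 if respond(soa) else +1
        if new_dir != direction:
            reversals.append(soa)
            direction = new_dir
        soa += direction * step
    return sum(reversals) / len(reversals)

# Deterministic observer: correct whenever the SOA exceeds a 75 ms threshold.
threshold = run_staircase(lambda soa: soa > 75.0)
```

Against an observer who is correct exactly when the SOA exceeds 75 ms, the reversals alternate between 70 and 80 ms, so their mean recovers the 75 ms threshold.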

  2. The neural circuitry of visual artistic production and appreciation: A proposition.

    PubMed

    Chakravarty, Ambar

    2012-04-01

    The nondominant inferior parietal lobule is probably a major "store house" of artistic creativity. The ventromedial prefrontal lobe (VMPFL) is supposed to be involved in creative cognition and the dorsolateral prefrontal lobe (DLPFL) in creative output. The conceptual ventral and dorsal visual system pathways likely represent the inferior and superior longitudinal fasciculi. During artistic production, conceptualization is conceived in the VMPFL and the executive part is operated through the DLPFL. The latter transfers the concept to the visual brain through the superior longitudinal fasciculus (SLF), relaying on its path to the parietal cortex. The conceptualization at VMPFL is influenced by activity from the anterior temporal lobe through the uncinate fasciculus and limbic system pathways. The final visual image formed in the visual brain is subsequently transferred back to the DLPFL through the SLF and then handed over to the motor cortex for execution. During art appreciation, the image at the visual brain is transferred to the frontal lobe through the SLF and there it is matched with emotional and memory inputs from the anterior temporal lobe transmitted through the uncinate fasciculus. Beauty is perceived at the VMPFL and transferred through the uncinate fasciculus to the hippocampo-amygdaloid complex in the anterior temporal lobe. The limbic system (Papez circuit) is activated and emotion of appreciation is evoked. It is postulated that in practice the entire circuitry is activated simultaneously.

  4. The neuropsychology of the Klüver-Bucy syndrome in children.

    PubMed

    Lippe, S; Gonin-Flambois, C; Jambaqué, I

    2013-01-01

    The Klüver-Bucy syndrome (KBS) is characterized by a number of peculiar behavioral symptoms. The syndrome was first observed in 1939 by Heinrich Klüver and Paul Bucy in the rhesus monkey following removal of the greater portion of the monkey's temporal lobes and rhinencephalon. The animal showed (a) visual agnosia (inability to recognize objects without general loss of visual discrimination), (b) excessive oral tendency (oral exploration of objects), (c) hypermetamorphosis (excessive visual attentiveness), (d) placidity with loss of normal fear and anger responses, (e) altered sexual behavior manifesting mainly as marked and indiscriminate hypersexuality, and (f) changes in eating behavior. In humans, KBS can be complete or incomplete. It occurs as a consequence of neurological disorders that essentially cause destruction or dysfunction of bilateral mesial temporal lobe structures (i.e., Pick disease, Alzheimer disease, cerebral trauma, cerebrovascular accidents, temporal lobe epilepsy, herpetic encephalopathy, heat stroke). As for epilepsy, complete and incomplete KBS are well documented in temporal lobe epilepsy, temporal lobectomy, and partial status epilepticus. KBS can occur at any age. Children seem to show similar symptoms to adults, although some differences in the manifestations of symptoms may be related to the fact that children have not yet learned certain behaviors. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Functional differentiation of macaque visual temporal cortical neurons using a parametric action space.

    PubMed

    Vangeneugden, Joris; Pollick, Frank; Vogels, Rufin

    2009-03-01

    Neurons in the rostral superior temporal sulcus (STS) are responsive to displays of body movements. We employed a parametric action space to determine how similarities among actions are represented by visual temporal neurons and how form and motion information contributes to their responses. The stimulus space consisted of a stick-plus-point-light figure performing arm actions and their blends. Multidimensional scaling showed that the responses of temporal neurons represented the ordinal similarity between these actions. Further tests distinguished neurons responding equally strongly to static presentations and to actions ("snapshot" neurons), from those responding much less strongly to static presentations, but responding well when motion was present ("motion" neurons). The "motion" neurons were predominantly found in the upper bank/fundus of the STS, and "snapshot" neurons in the lower bank of the STS and inferior temporal convexity. Most "motion" neurons showed strong response modulation during the course of an action, thus responding to action kinematics. "Motion" neurons displayed a greater average selectivity for these simple arm actions than did "snapshot" neurons. We suggest that the "motion" neurons code for visual kinematics, whereas the "snapshot" neurons code for form/posture, and that both can contribute to action recognition, in agreement with computational models of action recognition.
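    The multidimensional scaling step (recovering a spatial layout whose inter-point distances mirror pairwise response dissimilarities) can be sketched with a minimal classical-MDS implementation; the three-point distance matrix is a made-up example, not the neuronal data:

```python
import numpy as np

def classical_mds(dist, n_dims=2):
    """Classical (Torgerson) multidimensional scaling.

    Embeds items in `n_dims` dimensions so that pairwise Euclidean distances
    approximate the entries of the symmetric distance matrix `dist`.
    """
    d2 = np.asarray(dist, dtype=float) ** 2
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ d2 @ j                    # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_dims]  # largest eigenvalues first
    vals = np.clip(vals[order], 0.0, None)
    return vecs[:, order] * np.sqrt(vals)

# three hypothetical "actions" lying on a line: MDS recovers the layout
d = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
coords = classical_mds(d, n_dims=1)
```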

  6. Learning of Temporal and Spatial Movement Aspects: A Comparison of Four Types of Haptic Control and Concurrent Visual Feedback.

    PubMed

    Rauter, Georg; Sigrist, Roland; Riener, Robert; Wolf, Peter

    2015-01-01

    In the literature, the effectiveness of haptics for motor learning is a matter of controversy. Haptics is believed to be effective for motor learning in general; however, different types of haptic control enhance different movement aspects. Thus, depending on the movement aspects of interest, one type of haptic control may be effective whereas another one is not. Therefore, in the current work, it was investigated if and how different types of haptic controllers affect learning of spatial and temporal movement aspects. In particular, haptic controllers that enforce active participation of the participants were expected to improve spatial aspects. Only haptic controllers that provide feedback about the task's velocity profile were expected to improve temporal aspects. In a study on learning a complex trunk-arm rowing task, the effect of training with four different types of haptic control was investigated: position control, path control, adaptive path control, and reactive path control. A fifth group (control) trained with visual concurrent augmented feedback. As hypothesized, the position controller was most effective for learning of temporal movement aspects, while the path controller was most effective in teaching spatial movement aspects of the rowing task. Visual feedback was also effective for learning temporal and spatial movement aspects.

  7. Visual Field Map Clusters in High-Order Visual Processing: Organization of V3A/V3B and a New Cloverleaf Cluster in the Posterior Superior Temporal Sulcus

    PubMed Central

    Barton, Brian; Brewer, Alyssa A.

    2017-01-01

    The cortical hierarchy of the human visual system has been shown to be organized around retinal spatial coordinates throughout much of low- and mid-level visual processing. These regions contain visual field maps (VFMs) that each follow the organization of the retina, with neighboring aspects of the visual field processed in neighboring cortical locations. On a larger, macrostructural scale, groups of such sensory cortical field maps (CFMs) in both the visual and auditory systems are organized into roughly circular cloverleaf clusters. CFMs within clusters tend to share properties such as receptive field distribution, cortical magnification, and processing specialization. Here we use fMRI and population receptive field (pRF) modeling to investigate the extent of VFM and cluster organization with an examination of higher-level visual processing in temporal cortex and compare these measurements to mid-level visual processing in dorsal occipital cortex. In human temporal cortex, the posterior superior temporal sulcus (pSTS) has been implicated in various neuroimaging studies as subserving higher-order vision, including face processing, biological motion perception, and multimodal audiovisual integration. In human dorsal occipital cortex, the transverse occipital sulcus (TOS) contains the V3A/B cluster, which comprises two VFMs subserving mid-level motion perception and visuospatial attention. For the first time, we present the organization of VFMs in pSTS in a cloverleaf cluster. This pSTS cluster contains four VFMs bilaterally: pSTS-1:4. We characterize these pSTS VFMs as relatively small at ∼125 mm2 with relatively large pRF sizes of ∼2–8° of visual angle across the central 10° of the visual field. V3A and V3B are ∼230 mm2 in surface area, with pRF sizes here similarly ∼1–8° of visual angle across the same region. In addition, cortical magnification measurements show that a larger extent of the pSTS VFM surface areas are devoted to the peripheral visual field than those in the V3A/B cluster. Reliability measurements of VFMs in pSTS and V3A/B reveal that these cloverleaf clusters are remarkably consistent and functionally differentiable. Our findings add to the growing number of measurements of widespread sensory CFMs organized into cloverleaf clusters, indicating that CFMs and cloverleaf clusters may both be fundamental organizing principles in cortical sensory processing. PMID:28293182
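    pRF modeling of the kind referred to above treats each voxel's receptive field as a 2-D Gaussian over the visual field and predicts the response to a stimulus aperture from their overlap; a minimal sketch of that prediction step follows, with a hypothetical grid, aperture, and parameters:

```python
import numpy as np

def prf_response(stim, xs, ys, x0, y0, sigma):
    """Predicted response of a population receptive field (pRF).

    The pRF is an isotropic 2-D Gaussian centered at (x0, y0) with size
    `sigma` (degrees of visual angle); the response to a binary stimulus
    aperture is the stimulus-weighted sum of the normalized Gaussian.
    """
    g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    return float((stim * g).sum())

# toy 20-degree field on a grid; all numbers are illustrative
x = np.linspace(-10, 10, 101)
xs, ys = np.meshgrid(x, x)
left_bar = (xs < 0).astype(float)   # aperture covering the left hemifield
r_left = prf_response(left_bar, xs, ys, x0=-5.0, y0=0.0, sigma=2.0)
r_right = prf_response(left_bar, xs, ys, x0=5.0, y0=0.0, sigma=2.0)
```

    Fitting (x0, y0, sigma) per voxel by maximizing agreement between such predictions and the measured time series is what yields the VFM and pRF-size estimates quoted in the abstract.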

  8. Encoding and immediate retrieval tasks in patients with epilepsy: A functional MRI study of verbal and visual memory.

    PubMed

    Saddiki, Najat; Hennion, Sophie; Viard, Romain; Ramdane, Nassima; Lopes, Renaud; Baroncini, Marc; Szurhaj, William; Reyns, Nicolas; Pruvo, Jean Pierre; Delmaire, Christine

    2018-05-01

    Medial temporal lobe structures, and more specifically the hippocampus, play a decisive role in episodic memory. Most memory functional magnetic resonance imaging (fMRI) studies evaluate the encoding phase, the retrieval phase being performed outside the MRI. We aimed to determine the ability to reveal greater hippocampal fMRI activations during the retrieval phase. Thirty-five epileptic patients underwent a two-step memory fMRI. During the encoding phase, subjects were requested to identify the feminine or masculine gender of the faces and words presented, in order to encourage stimulus encoding. One hour later, during the retrieval phase, subjects had to recognize the words and faces. We used an event-related design to identify hippocampal activations. There was no significant difference between patients with left temporal lobe epilepsy, patients with right temporal lobe epilepsy and patients with extratemporal lobe epilepsy on the verbal and visual learning tasks. For words, patients demonstrated significantly more bilateral hippocampal activation for the retrieval task than the encoding task, and when the tasks were associated than during encoding alone. A significant difference was seen between face encoding alone and face retrieval alone. This study demonstrates the essential contribution of the retrieval task during an fMRI memory task, but the number of patients with hippocampal activations was greater when the two tasks were taken into account. Copyright © 2018. Published by Elsevier Masson SAS.

  9. Ear-EEG detects ictal and interictal abnormalities in focal and generalized epilepsy - A comparison with scalp EEG monitoring.

    PubMed

    Zibrandtsen, I C; Kidmose, P; Christensen, C B; Kjaer, T W

    2017-12-01

    Ear-EEG is the recording of electroencephalography (EEG) from a small device in the ear. This is the first study to compare ictal and interictal abnormalities recorded with ear-EEG and simultaneous scalp-EEG in an epilepsy monitoring unit. We recorded and compared simultaneous ear-EEG and scalp-EEG from 15 patients with suspected temporal lobe epilepsy. EEGs were compared visually by independent neurophysiologists. Correlation and time-frequency analyses were used to quantify the similarity between ear and scalp electrodes. Spike-averages were used to assess similarity of interictal spikes. There were no differences in sensitivity or specificity for seizure detection. Mean correlation coefficient between ear-EEG and nearest scalp electrode was above 0.6 with a statistically significant decreasing trend with increasing distance away from the ear. Ictal morphology and frequency dynamics can be observed from visual inspection and time-frequency analysis. Spike averages derived from ear-EEG electrodes yield a recognizable spike appearance. Our results suggest that ear-EEG can reliably detect electroencephalographic patterns associated with focal temporal lobe seizures. Interictal spike morphology from sufficiently large temporal spike sources can be sampled using ear-EEG. Ear-EEG is likely to become an important tool in clinical epilepsy monitoring and diagnosis. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
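    The electrode-similarity measure used here is an ordinary correlation coefficient between channel time series; a toy simulation with a hypothetical distance-attenuation model (not the study's forward model) illustrates why correlation should fall with distance from the ear:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
source = rng.standard_normal(n)      # shared EEG signal near the ear

def channel(dist, noise=0.5):
    """Toy channel: the shared source attenuated with distance plus
    independent sensor noise (hypothetical gain law, for illustration)."""
    gain = 1.0 / (1.0 + dist)
    return gain * source + noise * rng.standard_normal(n)

ear = channel(0.0)
# correlation between the ear channel and scalp channels at growing distances
corrs = [np.corrcoef(ear, channel(d))[0, 1] for d in (0.0, 2.0, 6.0)]
```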

  10. The Temporal Pole Top-Down Modulates the Ventral Visual Stream During Social Cognition.

    PubMed

    Pehrs, Corinna; Zaki, Jamil; Schlochtermeier, Lorna H; Jacobs, Arthur M; Kuchinke, Lars; Koelsch, Stefan

    2017-01-01

    The temporal pole (TP) has been associated with diverse functions of social cognition and emotion processing. Although the underlying mechanism remains elusive, one possibility is that TP acts as domain-general hub integrating socioemotional information. To test this, 26 participants were presented with 60 empathy-evoking film clips during fMRI scanning. The film clips were preceded by a linguistic sad or neutral context and half of the clips were accompanied by sad music. In line with its hypothesized role, TP was involved in the processing of sad context and furthermore tracked participants' empathic concern. To examine the neuromodulatory impact of TP, we applied nonlinear dynamic causal modeling to a multisensory integration network from previous work consisting of superior temporal gyrus (STG), fusiform gyrus (FG), and amygdala, which was extended by an additional node in the TP. Bayesian model comparison revealed a gating of STG and TP on fusiform-amygdalar coupling and an increase of TP to FG connectivity during the integration of contextual information. Moreover, these backward projections were strengthened by emotional music. The findings indicate that during social cognition, TP integrates information from different modalities and top-down modulates lower-level perceptual areas in the ventral visual stream as a function of integration demands. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  11. Natural asynchronies in audiovisual communication signals regulate neuronal multisensory interactions in voice-sensitive cortex.

    PubMed

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K; Petkov, Christopher I

    2015-01-06

    When social animals communicate, the onset of informative content in one modality varies considerably relative to the other, such as when visual orofacial movements precede a vocalization. These naturally occurring asynchronies do not disrupt intelligibility or perceptual coherence. However, they occur on time scales where they likely affect integrative neuronal activity in ways that have remained unclear, especially for hierarchically downstream regions in which neurons exhibit temporally imprecise but highly selective responses to communication signals. To address this, we exploited naturally occurring face- and voice-onset asynchronies in primate vocalizations. Using these as stimuli we recorded cortical oscillations and neuronal spiking responses from functional MRI (fMRI)-localized voice-sensitive cortex in the anterior temporal lobe of macaques. We show that the onset of the visual face stimulus resets the phase of low-frequency oscillations, and that the face-voice asynchrony affects the prominence of two key types of neuronal multisensory responses: enhancement or suppression. Our findings show a three-way association between temporal delays in audiovisual communication signals, phase-resetting of ongoing oscillations, and the sign of multisensory responses. The results reveal how natural onset asynchronies in cross-sensory inputs regulate network oscillations and neuronal excitability in the voice-sensitive cortex of macaques, a suggested animal model for human voice areas. These findings also advance predictions on the impact of multisensory input on neuronal processes in face areas and other brain regions.
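    Phase resetting of ongoing oscillations, as described above, is commonly quantified by inter-trial phase coherence (the mean resultant length of per-trial phases at a fixed latency); a minimal sketch with simulated phases follows:

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase coherence: length of the mean resultant vector of
    per-trial phases (1 = identical phase on every trial, i.e. a perfect
    phase reset; values near 0 = phases uniformly scattered)."""
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))

rng = np.random.default_rng(0)
reset = itpc(rng.normal(0.0, 0.2, 1000))        # phases clustered after a reset
scattered = itpc(rng.uniform(0.0, 2 * np.pi, 1000))  # no stimulus-locked reset
```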

  12. Impact of hippocampal subfield histopathology in episodic memory impairment in mesial temporal lobe epilepsy and hippocampal sclerosis.

    PubMed

    Comper, Sandra Mara; Jardim, Anaclara Prada; Corso, Jeana Torres; Gaça, Larissa Botelho; Noffs, Maria Helena Silva; Lancellotti, Carmen Lúcia Penteado; Cavalheiro, Esper Abrão; Centeno, Ricardo Silva; Yacubian, Elza Márcia Targas

    2017-10-01

    The objective of the study was to analyze preoperative visual and verbal episodic memories in a homogeneous series of patients with mesial temporal lobe epilepsy (MTLE) and unilateral hippocampal sclerosis (HS) submitted to corticoamygdalohippocampectomy and its association with neuronal cell density of each hippocampal subfield. The hippocampi of 72 right-handed patients were collected and prepared for histopathological examination. Hippocampal sclerosis patterns were determined, and neuronal cell density was calculated. Preoperatively, two verbal and two visual memory tests (immediate and delayed recalls) were applied, and patients were divided into two groups, left and right MTLE (36/36). There were no statistical differences between groups regarding demographic and clinical data. Cornu Ammonis 4 (CA4) neuronal density was significantly lower in the right hippocampus compared with the left (p=0.048). The groups with HS presented different memory performance - the right HS were worse in visual memory test [Complex Rey Figure, immediate (p=0.001) and delayed (p=0.009)], but better in one verbal task [RAVLT delayed (p=0.005)]. Multiple regression analysis suggested that the verbal memory performance of the group with left HS was explained by CA1 neuronal density since both tasks were significantly influenced by CA1 [Logical Memory immediate recall (p=0.050) and Logical Memory and RAVLT delayed recalls (p=0.004 and p=0.001, respectively)]. For patients with right HS, both CA1 subfield integrity (p=0.006) and epilepsy duration (p=0.012) explained Complex Rey Figure immediate recall performance. Ultimately, epilepsy duration also explained the performance in the Complex Rey Figure delayed recall (p<0.001). Cornu Ammonis 1 (CA1) hippocampal subfield was related to immediate and delayed recalls of verbal memory tests in left HS, while CA1 and epilepsy duration were associated with visual memory performance in patients with right HS. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. High-speed imaging of submerged jet: visualization analysis using proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Liu, Yingzheng; He, Chuangxin

    2016-11-01

    In the present study, the submerged jet at low Reynolds numbers was visualized using laser-induced fluorescence and high-speed imaging in a water tank. A well-controlled calibration was made to determine the region in which the fluorescence intensity depends linearly on dye concentration. Subsequently, the jet fluid issuing from a circular pipe was visualized using a high-speed camera. The animation sequence of the visualized jet flow field was supplied for snapshot proper orthogonal decomposition (POD) analysis. Spatio-temporally varying structures superimposed in the unsteady fluid flow were identified, e.g., the axisymmetric mode and the helical mode, which were reflected in the dominant POD modes. The coefficients of the POD modes give a strong indication of the temporal and spectral features of the corresponding unsteady events. A reconstruction using the time-mean visualization and selected POD modes was conducted to reveal the convective motion of the buried vortical structures. This work was supported by the National Natural Science Foundation of China.
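    Snapshot POD of an image sequence reduces, in practice, to an SVD of the mean-subtracted snapshot matrix; a minimal sketch on a synthetic rank-one "flow" follows (the test field is illustrative, not the jet data):

```python
import numpy as np

def snapshot_pod(snapshots):
    """Snapshot proper orthogonal decomposition via the SVD.

    `snapshots` is an (n_pixels, n_times) matrix of visualization frames.
    The time mean is removed, and the SVD of the fluctuation matrix yields
    spatial POD modes, their energy fractions, and temporal coefficients.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean
    modes, sing, vt = np.linalg.svd(fluct, full_matrices=False)
    energy = sing ** 2 / np.sum(sing ** 2)   # fraction of fluctuation energy
    coeffs = np.diag(sing) @ vt              # temporal mode coefficients
    return mean, modes, energy, coeffs

# synthetic field: one oscillating spatial structure on top of a mean
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 1, 50)
field = np.outer(np.sin(x), np.cos(10 * np.pi * t)) + 2.0
mean, modes, energy, coeffs = snapshot_pod(field)
```

    Reconstruction with the time mean plus a few leading modes (here one suffices) is exactly the low-order reconstruction described in the abstract.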

  14. Temporal Processing, Attention, and Learning Disorders

    ERIC Educational Resources Information Center

    Landerl, Karin; Willburger, Edith

    2010-01-01

    In a large sample (N = 439) of literacy impaired and unimpaired elementary school children, the predictions of the temporal processing theory of dyslexia were tested while controlling for (sub)clinical attentional deficits. Visual and Auditory Temporal Order Judgement tasks were administered, as well as three subtests of a standardized attention test. The…

  15. Eye shape and retinal topography in owls (Aves: Strigiformes).

    PubMed

    Lisney, Thomas J; Iwaniuk, Andrew N; Bandet, Mischa V; Wylie, Douglas R

    2012-01-01

    The eyes of vertebrates show adaptations to the visual environments in which they evolve. For example, eye shape is associated with activity pattern, while retinal topography is related to the symmetry or 'openness' of the habitat of a species. Although these relationships are well documented in many vertebrates including birds, the extent to which they hold true for species within the same avian order is not well understood. Owls (Strigiformes) represent an ideal group for the study of interspecific variation in the avian visual system because they are one of very few avian orders to contain species that vary in both activity pattern and habitat preference. Here, we examined interspecific variation in eye shape and retinal topography in nine species of owl. Eye shape (the ratio of corneal diameter to eye axial length) differed among species, with nocturnal species having relatively larger corneal diameters than diurnal species. All the owl species have an area of high retinal ganglion cell (RGC) density in the temporal retina and a visual streak of increased cell density extending across the central retina from temporal to nasal. However, the organization and degree of elongation of the visual streak varied considerably among species and this variation was quantified using H:V ratios. Species that live in open habitats and/or that are more diurnally active have well-defined, elongated visual streaks and high H:V ratios (3.88-2.33). In contrast, most nocturnal and/or forest-dwelling owls have a poorly defined visual streak, a more radially symmetrical arrangement of RGCs and lower H:V ratios (1.77-1.27). The results of a hierarchical cluster analysis indicate that the apparent interspecific variation is associated with activity pattern and habitat as opposed to the phylogenetic relationships among species. In seven species, the presence of a fovea was confirmed and it is suggested that all strigid owls may possess a fovea, whereas the tytonid barn owl (Tyto alba) does not. A size-frequency analysis of cell soma area indicates that a number of different RGC classes are represented in owls, including a population of large RGCs (cell soma area >150 µm(2)) that resemble the giant RGCs reported in other vertebrates. In conclusion, eye shape and retinal topography in owls vary among species and this variation is associated with different activity patterns and habitat preferences, thereby supporting similar observations in other vertebrates. Copyright © 2012 S. Karger AG, Basel.

  16. Dissociation of working memory processing associated with native and second languages: PET investigation.

    PubMed

    Kim, Jae-Jin; Kim, Myung Sun; Lee, Jae Sung; Lee, Dong Soo; Lee, Myung Chul; Kwon, Jun Soo

    2002-04-01

    Verbal working memory plays a significant role in language comprehension and problem-solving. The prefrontal cortex has been suggested as a critical area in working memory. Given that domain-specific dissociations of working memory may exist within the prefrontal cortex, it is possible that there may also be further functional divisions within the verbal working memory processing. While differences in the areas of the brain engaged in native and second languages have been demonstrated, little is known about the dissociation of verbal working memory associated with native and second languages. We have used H2(15)O positron emission tomography in 14 normal subjects in order to identify the neural correlates selectively involved in working memory of native (Korean) and second (English) languages. All subjects were highly proficient in the native language but poorly proficient in the second language. Cognitive tasks were a two-back task for three kinds of visually presented objects: simple pictures, English words, and Korean words. The anterior portion of the right dorsolateral prefrontal cortex and the left superior temporal gyrus were activated in working memory for the native language, whereas the posterior portion of the right dorsolateral prefrontal cortex and the left inferior temporal gyrus were activated in working memory for the second language. The results suggest that the right dorsolateral prefrontal cortex and left temporal lobe may be organized into two discrete, language-related functional systems. Internal phonological processing seems to play a predominant role in working memory processing for the native language with a high proficiency, whereas visual higher order control does so for the second language with a low proficiency. (C)2002 Elsevier Science (USA).
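    The two-back task used here requires a response whenever the current stimulus matches the one shown two positions earlier; a minimal scorer for target positions, with a made-up stimulus sequence:

```python
def two_back_targets(items):
    """Indices in a 2-back task where the current item matches the one
    presented two positions earlier (the trials requiring a 'match'
    response); all other trials require a 'non-match' response."""
    return [i for i in range(2, len(items)) if items[i] == items[i - 2]]

# hypothetical trial sequence for illustration
seq = ["cat", "dog", "cat", "sun", "sun", "dog", "sun"]
hits = two_back_targets(seq)
```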

  17. Functional Connectivity of Resting Hemodynamic Signals in Submillimeter Orientation Columns of the Visual Cortex.

    PubMed

    Vasireddi, Anil K; Vazquez, Alberto L; Whitney, David E; Fukuda, Mitsuhiro; Kim, Seong-Gi

    2016-09-07

    Resting-state functional magnetic resonance imaging has been increasingly used for examining connectivity across brain regions. The spatial scale by which hemodynamic imaging can resolve functional connections at rest remains unknown. To examine this issue, deoxyhemoglobin-weighted intrinsic optical imaging data were acquired from the visual cortex of lightly anesthetized ferrets. The neural activity of orientation domains, which span a distance of 0.7-0.8 mm, has been shown to be correlated during evoked activity and at rest. We performed separate analyses to assess the degree to which the spatial and temporal characteristics of spontaneous hemodynamic signals depend on the known functional organization of orientation columns. As a control, artificial orientation column maps were generated. Spatially, resting hemodynamic patterns showed a higher spatial resemblance to iso-orientation maps than artificially generated maps. Temporally, a correlation analysis was used to establish whether iso-orientation domains are more correlated than orthogonal orientation domains. After accounting for a significant decrease in correlation as a function of distance, a small but significant temporal correlation between iso-orientation domains was found, which decreased with increasing difference in orientation preference. This dependence was abolished when using artificially synthesized orientation maps. Finally, the temporal correlation coefficient as a function of orientation difference at rest showed a correspondence with that calculated during visual stimulation suggesting that the strength of resting connectivity is related to the strength of the visual stimulation response. Our results suggest that temporal coherence of hemodynamic signals measured by optical imaging of intrinsic signals exists at a submillimeter columnar scale in resting state.
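    The correlation analysis described above compares iso-orientation with orthogonal-orientation domain pairs; a toy simulation with a hypothetical Gaussian coupling falloff (not the study's model) illustrates the expected pattern:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
common = rng.standard_normal(n)   # spontaneous signal shared within iso-orientation domains

def domain(pref_deg, noise=1.0):
    """Toy domain time course whose coupling to a 0-degree reference domain
    falls off with orientation-preference difference (hypothetical Gaussian
    falloff, for illustration only)."""
    w = np.exp(-(pref_deg / 30.0) ** 2)
    return w * common + noise * rng.standard_normal(n)

ref = domain(0.0)
r_iso = np.corrcoef(ref, domain(0.0))[0, 1]    # iso-orientation pair
r_orth = np.corrcoef(ref, domain(90.0))[0, 1]  # orthogonal-orientation pair
```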

  18. Electrostimulation mapping of comprehension of auditory and visual words.

    PubMed

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine graded, sub-centimetre, cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Three-way ROC validation of rs-fMRI visual information propagation transfer functions used to differentiate between RRMS and CIS optic neuritis patients.

    PubMed

    Farahani, Ehsan Shahrabi; Choudhury, Samiul H; Cortese, Filomeno; Costello, Fiona; Goodyear, Bradley; Smith, Michael R

    2017-07-01

    Resting-state fMRI (rs-fMRI) measures the temporal synchrony between different brain regions while the subject is at rest. We present an investigation using visual information propagation transfer functions as potential optic neuritis (ON) markers for the pathways between the lateral geniculate nuclei, the primary visual cortex, the lateral occipital cortex and the superior parietal cortex. We investigate marker reliability in differentiating between healthy controls and ON patients with clinically isolated syndrome (CIS), and relapsing-remitting multiple sclerosis (RRMS) using a three-way receiver operating characteristic (ROC) analysis. We identify useful and reliable three-way ON-related metrics in the rs-fMRI low-frequency band 0.0 Hz to 0.1 Hz, with potential markers associated with the higher-frequency harmonics of these signals in the 0.1 Hz to 0.2 Hz and 0.2 Hz to 0.3 Hz bands.
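    A three-way ROC analysis generalizes the familiar AUC to the volume under the ROC surface (VUS): the probability that one randomly drawn score per class falls in the expected class order. A minimal empirical estimator follows, with made-up marker values (not the study's data):

```python
import itertools

def vus(healthy, cis, rrms):
    """Empirical volume under the three-way ROC surface (VUS).

    VUS is the fraction of triples -- one score per class -- ranked in the
    expected order (here healthy < CIS < RRMS). Chance level for three
    classes is 1/6; a perfect marker gives 1.0.
    """
    triples = list(itertools.product(healthy, cis, rrms))
    ok = sum(1 for h, c, r in triples if h < c < r)
    return ok / len(triples)

# hypothetical marker values for illustration only
h = [0.1, 0.2, 0.3]
c = [0.25, 0.4, 0.5]
r = [0.45, 0.6, 0.7]
v = vus(h, c, r)
```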

  20. Relative suppression of magical thinking: a transcranial magnetic stimulation study.

    PubMed

    Bell, Vaughan; Reddy, Venu; Halligan, Peter; Kirov, George; Ellis, Hadyn

    2007-05-01

    The tendency to perceive meaning in noise (apophenia) has been linked to "magical thinking" (MT), a distinctive form of thinking associated with a range of normal cognitive styles, anomalous perceptual experiences and frank psychosis. Important aspects of MT include the propensity to imbue meaning or causality to events that might otherwise be considered coincidental. Structures in the lateral temporal lobes have been hypothesised to be involved in both the clinical and nonclinical aspects of MT. Accordingly, in this study we used single-pulse transcranial magnetic stimulation (TMS) to stimulate either the left or right lateral temporal areas, or the vertex, of 12 healthy participants (balanced for similar levels of MT, delusional ideation and temporal lobe disturbance) while they were required to indicate if they had "detected" pictures, claimed to be present by the experimenters, in visual noise. Relative to the vertex, TMS inhibition of the left lateral temporal area produced a significantly reduced tendency to report meaningful information, suggesting that left lateral temporal activation may be more important in MT and therefore in producing and supporting anomalous beliefs and experiences. The effect cannot simply be explained by TMS-induced cognitive slowing, as reaction times were not affected.
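    A reduced tendency to report patterns in noise can be separated from overall response conservatism using signal detection theory; a minimal d' helper follows (the example rates are hypothetical, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).

    In a detect-pictures-in-noise task like the one above, a lower
    false-alarm rate at an unchanged hit rate (fewer illusory detections)
    shows up as an increase in d'.
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)
```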
