Sample records for visual input enhancement

  1. Effect of Power Point Enhanced Teaching (Visual Input) on Iranian Intermediate EFL Learners' Listening Comprehension Ability

    ERIC Educational Resources Information Center

    Sehati, Samira; Khodabandehlou, Morteza

    2017-01-01

    The present investigation was an attempt to study the effect of PowerPoint-enhanced teaching (visual input) on Iranian intermediate EFL learners' listening comprehension ability. To that end, a null hypothesis was formulated: PowerPoint-enhanced teaching (visual input) has no effect on Iranian intermediate EFL learners' listening…

  2. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    PubMed

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. 
Visual working memory allows for maintaining such visual information in the mind's eye after termination of its retinal input. It is hypothesized that information maintained in visual working memory relies on the same neural populations that process visual input. Accordingly, the content of visual working memory is known to affect our conscious perception of concurrent visual input. Here, we demonstrate for the first time that visual input elicits an enhanced neural response when it matches the content of visual working memory, both in terms of signal strength and information content.

  3. Learning Complex Grammar in the Virtual Classroom: A Comparison of Processing Instruction, Structured Input, Computerized Visual Input Enhancement, and Traditional Instruction

    ERIC Educational Resources Information Center

    Russell, Victoria

    2012-01-01

    This study investigated the effects of processing instruction (PI) and structured input (SI) on the acquisition of the subjunctive in adjectival clauses by 92 second-semester distance learners of Spanish. Computerized visual input enhancement (VIE) was combined with PI and SI in an attempt to increase the salience of the targeted grammatical form…

  4. Impact of enhanced sensory input on treadmill step frequency: infants born with myelomeningocele.

    PubMed

    Pantall, Annette; Teulier, Caroline; Smith, Beth A; Moerchen, Victoria; Ulrich, Beverly D

    2011-01-01

    To determine the effect of enhanced sensory input on the step frequency of infants with myelomeningocele (MMC) when supported on a motorized treadmill. Twenty-seven infants aged 2 to 10 months with MMC lesions at, or caudal to, L1 participated. We supported infants upright on the treadmill for 2 sets of 6 trials, each 30 seconds long. Enhanced sensory inputs within each set were presented in random order and included baseline, visual flow, unloading, weights, Velcro, and friction. Overall, friction and visual flow significantly increased step rate, particularly for the older subjects. Friction and Velcro increased stance-phase duration. Enhanced sensory input had minimal effect on leg activity when infants were not stepping. Increased friction via Dycem and enhancing visual flow via a checkerboard pattern on the treadmill belt appear to be more effective than the traditional smooth black belt surface for eliciting stepping patterns in infants with MMC.

  5. Impact of Enhanced Sensory Input on Treadmill Step Frequency: Infants Born With Myelomeningocele

    PubMed Central

    Pantall, Annette; Teulier, Caroline; Smith, Beth A; Moerchen, Victoria; Ulrich, Beverly D.

    2012-01-01

    Purpose To determine the effect of enhanced sensory input on the step frequency of infants with myelomeningocele (MMC) when supported on a motorized treadmill. Methods Twenty-seven infants aged 2 to 10 months with MMC lesions at or caudal to L1 participated. We supported infants upright on the treadmill for 2 sets of 6 trials, each 30 seconds long. Enhanced sensory inputs within each set were presented in random order and included: baseline, visual flow, unloading, weights, Velcro and friction. Results Overall, friction and visual flow significantly increased step rate, particularly for the older group. Friction and Velcro increased stance-phase duration. Enhanced sensory input had minimal effect on leg activity when infants were not stepping. Conclusions Increased friction via Dycem and enhancing visual flow via a checkerboard pattern on the treadmill belt appear more effective than the traditional smooth black belt surface for eliciting stepping patterns in infants with MMC. PMID:21266940

  6. Determining the Effectiveness of Visual Input Enhancement across Multiple Linguistic Cues

    ERIC Educational Resources Information Center

    Comeaux, Ian; McDonald, Janet L.

    2018-01-01

    Visual input enhancement (VIE) increases the salience of grammatical forms, potentially facilitating acquisition through attention mechanisms. Native English speakers were exposed to an artificial language containing four linguistic cues (verb agreement, case marking, animacy, word order), with morphological cues either unmarked, marked in the…

  7. Visual input enhances selective speech envelope tracking in auditory cortex at a "cocktail party".

    PubMed

    Zion Golumbic, Elana; Cogan, Gregory B; Schroeder, Charles E; Poeppel, David

    2013-01-23

    Our ability to selectively attend to one auditory signal amid competing input streams, epitomized by the "Cocktail Party" problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared with responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker's face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a Cocktail Party setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive.

  8. Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’

    PubMed Central

    Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David

    2013-01-01

    Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218

  9. Density of Visual Input Enhancement and Grammar Learning: A Research Proposal

    ERIC Educational Resources Information Center

    Tran, Thu Hoang

    2009-01-01

    Research in the field of second language acquisition (SLA) has been done to ascertain the effectiveness of visual input enhancement (VIE) on grammar learning. However, one issue remains unexplored: the effects of VIE density on grammar learning. This paper presents a research proposal to investigate the effects of the density of VIE on English…

  10. The rapid distraction of attentional resources toward the source of incongruent stimulus input during multisensory conflict.

    PubMed

    Donohue, Sarah E; Todisco, Alexandra E; Woldorff, Marty G

    2013-04-01

    Neuroimaging work on multisensory conflict suggests that the relevant modality receives enhanced processing in the face of incongruency. However, the degree of stimulus processing in the irrelevant modality and the temporal cascade of the attentional modulations in either the relevant or irrelevant modalities are unknown. Here, we employed an audiovisual conflict paradigm with a sensory probe in the task-irrelevant modality (vision) to gauge the attentional allocation to that modality. ERPs were recorded as participants attended to and discriminated spoken auditory letters while ignoring simultaneous bilateral visual letter stimuli that were either fully congruent, fully incongruent, or partially incongruent (one side incongruent, one congruent) with the auditory stimulation. Half of the audiovisual letter stimuli were followed 500-700 msec later by a bilateral visual probe stimulus. As expected, ERPs to the audiovisual stimuli showed an incongruency ERP effect (fully incongruent versus fully congruent) of an enhanced, centrally distributed, negative-polarity wave starting ∼250 msec. More critically here, the sensory ERP components to the visual probes were larger when they followed fully incongruent versus fully congruent multisensory stimuli, with these enhancements greatest on fully incongruent trials with the slowest RTs. In addition, on the slowest-response partially incongruent trials, the P2 sensory component to the visual probes was larger contralateral to the preceding incongruent visual stimulus. These data suggest that, in response to conflicting multisensory stimulus input, the initial cognitive effect is a capture of attention by the incongruent irrelevant-modality input, pulling neural processing resources toward that modality, resulting in rapid enhancement, rather than rapid suppression, of that input.

  11. Enhanced learning of natural visual sequences in newborn chicks.

    PubMed

    Wood, Justin N; Prasad, Aditya; Goldman, Jason G; Wood, Samantha M W

    2016-07-01

    To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks' object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

  12. Frequency-band signatures of visual responses to naturalistic input in ferret primary visual cortex during free viewing.

    PubMed

    Sellers, Kristin K; Bennett, Davis V; Fröhlich, Flavio

    2015-02-19

    Neuronal firing responses in visual cortex reflect the statistics of visual input and emerge from the interaction with endogenous network dynamics. Artificial visual stimuli presented to animals in which the network dynamics were constrained by anesthetic agents or trained behavioral tasks have provided fundamental understanding of how individual neurons in primary visual cortex respond to input. In contrast, very little is known about the mesoscale network dynamics and their relationship to microscopic spiking activity in the awake animal during free viewing of naturalistic visual input. To address this gap in knowledge, we recorded local field potential (LFP) and multiunit activity (MUA) simultaneously in all layers of primary visual cortex (V1) of awake, freely viewing ferrets presented with naturalistic visual input (nature movie clips). We found that naturalistic visual stimuli modulated the entire oscillation spectrum; low-frequency oscillations were mostly suppressed whereas higher-frequency oscillations were enhanced. On average across all cortical layers, stimulus-induced change in delta and alpha power negatively correlated with the MUA responses, whereas sensory-evoked increases in gamma power positively correlated with MUA responses. The time course of the band-limited power in these frequency bands provided evidence for a model in which naturalistic visual input switched V1 between two distinct, endogenously present activity states defined by the power of low (delta, alpha) and high (gamma) frequency oscillatory activity. Therefore, the two mesoscale activity states delineated in this study may define the degree of engagement of the circuit with the processing of sensory input.
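    The band-limited power measure central to the record above (delta/alpha vs. gamma power of an LFP trace) can be sketched in a few lines. This is a minimal, illustrative computation on a synthetic signal, not the paper's analysis pipeline (which would use a windowed estimator such as Welch's method); the sampling rate and band edges below are hypothetical.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Summed squared spectral amplitude of `signal` between f_lo and f_hi (Hz),
    via a plain DFT. Illustrative only; real LFP analyses use Welch's method."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):          # skip DC, stop before Nyquist
        freq = k * fs / n               # frequency of DFT bin k
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

fs = 200  # Hz, hypothetical LFP sampling rate
# 1 s of a pure 10 Hz oscillation: power should land in the alpha band.
x = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(x, fs, 8, 12)    # dominant for this signal
gamma = band_power(x, fs, 30, 80)   # near zero for this signal
```

    A suppressed-alpha / enhanced-gamma state, as reported in the record, would correspond to these two numbers moving in opposite directions between stimulus and baseline epochs.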

  13. The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study.

    PubMed

    Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence

    2017-09-25

    At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem.

  14. Vocabulary Learning through Viewing Video: The Effect of Two Enhancement Techniques

    ERIC Educational Resources Information Center

    Montero Perez, Maribel; Peters, Elke; Desmet, Piet

    2018-01-01

    While most studies on L2 vocabulary learning through input have addressed learners' vocabulary uptake from written text, this study focuses on audio-visual input. In particular, we investigate the effects of enhancing video by (1) adding different types of L2 subtitling (i.e. no captioning, full captioning, keyword captioning, and glossed keyword…

  15. Visual and proprioceptive interaction in patients with bilateral vestibular loss

    PubMed Central

    Cutfield, Nicholas J.; Scott, Gregory; Waldman, Adam D.; Sharp, David J.; Bronstein, Adolfo M.

    2014-01-01

    Following bilateral vestibular loss (BVL) patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high level (100 Hz) and low level (30 Hz) control stimulus was applied over the left splenius capitis; only the high frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. 
The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients. PMID:25061564

  16. Visual and proprioceptive interaction in patients with bilateral vestibular loss.

    PubMed

    Cutfield, Nicholas J; Scott, Gregory; Waldman, Adam D; Sharp, David J; Bronstein, Adolfo M

    2014-01-01

    Following bilateral vestibular loss (BVL) patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high level (100 Hz) and low level (30 Hz) control stimulus was applied over the left splenius capitis; only the high frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. 
The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients.

  17. Attention Enhances Synaptic Efficacy and Signal-to-Noise in Neural Circuits

    PubMed Central

    Briggs, Farran; Mangun, George R.; Usrey, W. Martin

    2013-01-01

    Summary Attention is a critical component of perception. However, the mechanisms by which attention modulates neuronal communication to guide behavior are poorly understood. To elucidate the synaptic mechanisms of attention, we developed a sensitive assay of attentional modulation of neuronal communication. In alert monkeys performing a visual spatial attention task, we probed thalamocortical communication by electrically stimulating neurons in the lateral geniculate nucleus of the thalamus while simultaneously recording shock-evoked responses from monosynaptically connected neurons in primary visual cortex. We found that attention enhances neuronal communication by (1) increasing the efficacy of presynaptic input in driving postsynaptic responses, (2) increasing synchronous responses among ensembles of postsynaptic neurons receiving independent input, and (3) decreasing redundant signals between postsynaptic neurons receiving common input. These results demonstrate that attention finely tunes neuronal communication at the synaptic level by selectively altering synaptic weights, enabling enhanced detection of salient events in the noisy sensory milieu. PMID:23803766

  18. Semantic-based crossmodal processing during visual suppression.

    PubMed

    Cox, Dustin; Hong, Sang Wook

    2015-01-01

    To reveal the mechanisms underpinning the influence of auditory input on visual awareness, we examine (1) whether purely semantic-based multisensory integration facilitates the access to visual awareness for familiar visual events, and (2) whether crossmodal semantic priming is the mechanism responsible for the semantic auditory influence on visual awareness. Using continuous flash suppression, we rendered dynamic and familiar visual events (e.g., a video clip of an approaching train) inaccessible to visual awareness. We manipulated the semantic auditory context of the videos by concurrently pairing them with a semantically matching soundtrack (congruent audiovisual condition), a semantically non-matching soundtrack (incongruent audiovisual condition), or with no soundtrack (neutral video-only condition). We found that participants identified the suppressed visual events significantly faster (an earlier breakup of suppression) in the congruent audiovisual condition compared to the incongruent audiovisual condition and video-only condition. However, this facilitatory influence of semantic auditory input was only observed when audiovisual stimulation co-occurred. Our results suggest that the enhanced visual processing with a semantically congruent auditory input occurs due to audiovisual crossmodal processing rather than semantic priming, which may occur even when visual information is not available to visual awareness.

  19. Suppressive and enhancing effects in early visual cortex during illusory shape perception: A comment on.

    PubMed

    Moors, Pieter

    2015-01-01

    In a recent functional magnetic resonance imaging study, Kok and de Lange (2014) observed that BOLD activity for a Kanizsa illusory shape stimulus, in which pacmen-like inducers elicit an illusory shape percept, was either enhanced or suppressed relative to a nonillusory control configuration depending on whether the spatial profile of BOLD activity in early visual cortex was related to the illusory shape or the inducers, respectively. The authors argued that these findings fit well with the predictive coding framework, because top-down predictions related to the illusory shape are not met with bottom-up sensory input and hence the feedforward error signal is enhanced. Conversely, for the inducing elements, there is a match between top-down predictions and input, leading to a decrease in error. Rather than invoking predictive coding as the explanatory framework, the suppressive effect related to the inducers might be caused by neural adaptation to perceptually stable input due to the trial sequence used in the experiment.

  20. Recent Visual Experience Shapes Visual Processing in Rats through Stimulus-Specific Adaptation and Response Enhancement.

    PubMed

    Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans

    2017-03-20

    From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events.
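    The oddball paradigm described in this record — a train of repeated standard stimuli interrupted by rare, unpredictable deviants — can be sketched as a trial-sequence generator. This is a generic illustration of the design, not the authors' stimulus code; the function name, labels, and deviant probability are hypothetical.

```python
import random

def oddball_sequence(n_trials, p_deviant, standard="A", deviant="B", seed=0):
    """Generate an oddball trial sequence: mostly repeated standards with
    rare deviants inserted at random positions (probability p_deviant)."""
    rng = random.Random(seed)  # seeded for a reproducible sequence
    return [deviant if rng.random() < p_deviant else standard
            for _ in range(n_trials)]

# A hypothetical 1000-trial block with 10% deviants.
seq = oddball_sequence(1000, 0.1, seed=1)
```

    Responses to the rare deviant versus the repeated standard are then compared to separate stimulus-specific adaptation (weaker standard responses) from surprise-based enhancement (stronger deviant responses).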

  1. Audiovisual Perception of Noise Vocoded Speech in Dyslexic and Non-Dyslexic Adults: The Role of Low-Frequency Visual Modulations

    ERIC Educational Resources Information Center

    Megnin-Viggars, Odette; Goswami, Usha

    2013-01-01

    Visual speech inputs can enhance auditory speech information, particularly in noisy or degraded conditions. The natural statistics of audiovisual speech highlight the temporal correspondence between visual and auditory prosody, with lip, jaw, cheek and head movements conveying information about the speech envelope. Low-frequency spatial and…

  2. Texture Segregation Causes Early Figure Enhancement and Later Ground Suppression in Areas V1 and V4 of Visual Cortex

    PubMed Central

    Poort, Jasper; Self, Matthew W.; van Vugt, Bram; Malkki, Hemi; Roelfsema, Pieter R.

    2016-01-01

    Segregation of images into figures and background is fundamental for visual perception. Cortical neurons respond more strongly to figural image elements than to background elements, but the mechanisms of figure–ground modulation (FGM) are only partially understood. It is unclear whether FGM in early and mid-level visual cortex is caused by an enhanced response to the figure, a suppressed response to the background, or both. We studied neuronal activity in areas V1 and V4 in monkeys performing a texture segregation task. We compared texture-defined figures with homogeneous textures and found an early enhancement of the figure representation, and a later suppression of the background. Across neurons, the strength of figure enhancement was independent of the strength of background suppression. We also examined activity in the different V1 layers. Both figure enhancement and ground suppression were strongest in superficial and deep layers and weaker in layer 4. The current–source density profiles suggested that figure enhancement was caused by stronger synaptic inputs in feedback-recipient layers 1, 2, and 5 and ground suppression by weaker inputs in these layers, suggesting an important role for feedback connections from higher level areas. These results provide new insights into the mechanisms for figure–ground organization. PMID:27522074

  3. Nonvisual influences on visual-information processing in the superior colliculus.

    PubMed

    Stein, B E; Jiang, W; Wallace, M T; Stanford, T R

    2001-01-01

    Although visually responsive neurons predominate in the deep layers of the superior colliculus (SC), the majority of them also receive sensory inputs from nonvisual sources (i.e. auditory and/or somatosensory). Most of these 'multisensory' neurons are able to synthesize their cross-modal inputs and, as a consequence, their responses to visual stimuli can be profoundly enhanced or depressed in the presence of a nonvisual cue. Whether response enhancement or response depression is produced by this multisensory interaction is predictable based on several factors. These include: the organization of a neuron's visual and nonvisual receptive fields; the relative spatial relationships of the different stimuli (to their respective receptive fields and to one another); and whether or not the neuron is innervated by a select population of cortical neurons. The response enhancement or depression of SC neurons via multisensory integration has significant survival value via its profound impact on overt attentive/orientation behaviors. Nevertheless, these multisensory processes are not present at birth, and require an extensive period of postnatal maturation. It seems likely that the sensory experiences obtained during this period play an important role in crafting the processes underlying these multisensory interactions.

  4. The Inversion of Sensory Processing by Feedback Pathways: A Model of Visual Cognitive Functions.

    ERIC Educational Resources Information Center

    Harth, E.; And Others

    1987-01-01

    Explains the hierarchic structure of the mammalian visual system. Proposes a model in which feedback pathways serve to modify sensory stimuli in ways that enhance and complete sensory input patterns. Investigates the functioning of the system through computer simulations. (ML)

  5. Perceptual Training Strongly Improves Visual Motion Perception in Schizophrenia

    ERIC Educational Resources Information Center

    Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue

    2011-01-01

    Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…

  6. Visual cortex responses reflect temporal structure of continuous quasi-rhythmic sensory stimulation.

    PubMed

    Keitel, Christian; Thut, Gregor; Gross, Joachim

    2017-02-01

Neural processing of dynamic continuous visual input, and cognitive influences thereon, are frequently studied in paradigms employing strictly rhythmic stimulation. However, the temporal structure of natural stimuli is hardly ever fully rhythmic but possesses certain spectral bandwidths (e.g. lip movements in speech, gestures). Examining periodic brain responses elicited by strictly rhythmic stimulation might thus represent ideal, yet isolated cases. Here, we tested how the visual system reflects quasi-rhythmic stimulation with frequencies continuously varying within ranges of classical theta (4–7 Hz), alpha (8–13 Hz) and beta (14–20 Hz) bands using EEG. Our findings substantiate a systematic and sustained neural phase-locking to stimulation in all three frequency ranges. Further, we found that allocation of spatial attention enhances EEG-stimulus locking to theta- and alpha-band stimulation. Our results bridge recent findings regarding phase locking ("entrainment") to quasi-rhythmic visual input and "frequency-tagging" experiments employing strictly rhythmic stimulation. We propose that sustained EEG-stimulus locking can be considered as a continuous neural signature of processing dynamic sensory input in early visual cortices. Accordingly, EEG-stimulus locking serves to trace the temporal evolution of rhythmic as well as quasi-rhythmic visual input and is subject to attentional bias. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
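The sustained EEG–stimulus phase-locking described in this record is commonly quantified with a phase-locking value (PLV). Below is a minimal numpy sketch of that measure; the function name and the assumption of precomputed instantaneous phases are illustrative, not the authors' actual pipeline.

```python
import numpy as np

def plv(phase_a, phase_b):
    """Phase-locking value between two instantaneous phase series.
    Averages the unit phasors of the phase difference:
    1 = perfectly locked, values near 0 = no consistent relation."""
    return np.abs(np.mean(np.exp(1j * (np.asarray(phase_a) - np.asarray(phase_b)))))
```

A constant phase lag between stimulus and response yields a PLV of 1, while random phases yield values near 0, which is what a sustained-locking result would look like under this measure.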

  7. Rapid Simultaneous Enhancement of Visual Sensitivity and Perceived Contrast during Saccade Preparation

    PubMed Central

    Rolfs, Martin; Carrasco, Marisa

    2012-01-01

    Humans and other animals with foveate vision make saccadic eye movements to prioritize the visual analysis of behaviorally relevant information. Even before movement onset, visual processing is selectively enhanced at the target of a saccade, presumably gated by brain areas controlling eye movements. Here we assess concurrent changes in visual performance and perceived contrast before saccades, and show that saccade preparation enhances perception rapidly, altering early visual processing in a manner akin to increasing the physical contrast of the visual input. Observers compared orientation and contrast of a test stimulus, appearing briefly before a saccade, to a standard stimulus, presented previously during a fixation period. We found simultaneous progressive enhancement in both orientation discrimination performance and perceived contrast as time approached saccade onset. These effects were robust as early as 60 ms after the eye movement was cued, much faster than the voluntary deployment of covert attention (without eye movements), which takes ~300 ms. Our results link the dynamics of saccade preparation, visual performance, and subjective experience and show that upcoming eye movements alter visual processing by increasing the signal strength. PMID:23035086

  8. Hand placement near the visual stimulus improves orientation selectivity in V2 neurons

    PubMed Central

    Sergio, Lauren E.; Crawford, J. Douglas; Fallah, Mazyar

    2015-01-01

    Often, the brain receives more sensory input than it can process simultaneously. Spatial attention helps overcome this limitation by preferentially processing input from a behaviorally-relevant location. Recent neuropsychological and psychophysical studies suggest that attention is deployed to near-hand space much like how the oculomotor system can deploy attention to an upcoming gaze position. Here we provide the first neuronal evidence that the presence of a nearby hand enhances orientation selectivity in early visual processing area V2. When the hand was placed outside the receptive field, responses to the preferred orientation were significantly enhanced without a corresponding significant increase at the orthogonal orientation. Consequently, there was also a significant sharpening of orientation tuning. In addition, the presence of the hand reduced neuronal response variability. These results indicate that attention is automatically deployed to the space around a hand, improving orientation selectivity. Importantly, this appears to be optimal for motor control of the hand, as opposed to oculomotor mechanisms which enhance responses without sharpening orientation selectivity. Effector-based mechanisms for visual enhancement thus support not only the spatiotemporal dissociation of gaze and reach, but also the optimization of vision for their separate requirements for guiding movements. PMID:25717165

  9. High visual resolution matters in audiovisual speech perception, but only for some.

    PubMed

    Alsius, Agnès; Wayne, Rachel V; Paré, Martin; Munhall, Kevin G

    2016-07-01

    The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information, compared to participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with participants with the highest visual gain fixating longer on the mouth region. Our results indicate that the individual variance in audiovisual speech in noise performance can be accounted for, in part, by better use of fine facial detail information extracted from the visual signal and increased fixation on mouth regions for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.

  10. Neocortical Rebound Depolarization Enhances Visual Perception

    PubMed Central

    Funayama, Kenta; Ban, Hiroshi; Chan, Allen W.; Matsuki, Norio; Murphy, Timothy H.; Ikegaya, Yuji

    2015-01-01

    Animals are constantly exposed to the time-varying visual world. Because visual perception is modulated by immediately prior visual experience, visual cortical neurons may register recent visual history into a specific form of offline activity and link it to later visual input. To examine how preceding visual inputs interact with upcoming information at the single neuron level, we designed a simple stimulation protocol in which a brief, orientated flashing stimulus was subsequently coupled to visual stimuli with identical or different features. Using in vivo whole-cell patch-clamp recording and functional two-photon calcium imaging from the primary visual cortex (V1) of awake mice, we discovered that a flash of sinusoidal grating per se induces an early, transient activation as well as a long-delayed reactivation in V1 neurons. This late response, which started hundreds of milliseconds after the flash and persisted for approximately 2 s, was also observed in human V1 electroencephalogram. When another drifting grating stimulus arrived during the late response, the V1 neurons exhibited a sublinear, but apparently increased response, especially to the same grating orientation. In behavioral tests of mice and humans, the flashing stimulation enhanced the detection power of the identically orientated visual stimulation only when the second stimulation was presented during the time window of the late response. Therefore, V1 late responses likely provide a neural basis for admixing temporally separated stimuli and extracting identical features in time-varying visual environments. PMID:26274866

  11. Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices.

    PubMed

    Krebber, Martin; Harwood, James; Spitzer, Bernhard; Keil, Julian; Senkowski, Daniel

    2015-08-15

When touching and viewing a moving surface, our visual and somatosensory systems receive congruent spatiotemporal input. Behavioral studies have shown that motion congruence facilitates interplay between visual and tactile stimuli, but the neural mechanisms underlying this interplay are not well understood. Neural oscillations play a role in motion processing and multisensory integration. They may also be crucial for visuotactile motion processing. In this electroencephalography study, we applied linear beamforming to examine the impact of visuotactile motion congruence on beta and gamma band activity (GBA) in visual and somatosensory cortices. Visual and tactile inputs comprised gratings that moved either in the same or in different directions. Participants performed a target detection task that was unrelated to motion congruence. While there were no effects in the beta band (13–21 Hz), the power of GBA (50–80 Hz) in visual and somatosensory cortices was larger for congruent compared with incongruent motion stimuli. This suggests enhanced bottom-up multisensory processing when visual and tactile gratings moved in the same direction. Supporting its behavioral relevance, GBA was correlated with shorter reaction times in the target detection task. We conclude that motion congruence plays an important role for the integrative processing of visuotactile stimuli in sensory cortices, as reflected by oscillatory responses in the gamma band. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Texture Segregation Causes Early Figure Enhancement and Later Ground Suppression in Areas V1 and V4 of Visual Cortex.

    PubMed

    Poort, Jasper; Self, Matthew W; van Vugt, Bram; Malkki, Hemi; Roelfsema, Pieter R

    2016-10-01

Segregation of images into figures and background is fundamental for visual perception. Cortical neurons respond more strongly to figural image elements than to background elements, but the mechanisms of figure-ground modulation (FGM) are only partially understood. It is unclear whether FGM in early and mid-level visual cortex is caused by an enhanced response to the figure, a suppressed response to the background, or both. We studied neuronal activity in areas V1 and V4 in monkeys performing a texture segregation task. We compared texture-defined figures with homogeneous textures and found an early enhancement of the figure representation, and a later suppression of the background. Across neurons, the strength of figure enhancement was independent of the strength of background suppression. We also examined activity in the different V1 layers. Both figure enhancement and ground suppression were strongest in superficial and deep layers and weaker in layer 4. The current-source density profiles suggested that figure enhancement was caused by stronger synaptic inputs in feedback-recipient layers 1, 2, and 5 and ground suppression by weaker inputs in these layers, suggesting an important role for feedback connections from higher level areas. These results provide new insights into the mechanisms for figure-ground organization. © The Author 2016. Published by Oxford University Press.

  13. The answer is blowing in the wind: free-flying honeybees can integrate visual and mechano-sensory inputs for making complex foraging decisions.

    PubMed

    Ravi, Sridhar; Garcia, Jair E; Wang, Chun; Dyer, Adrian G

    2016-11-01

Bees navigate in complex environments using visual, olfactory and mechano-sensorial cues. In the lowest region of the atmosphere, the wind environment can be highly unsteady and bees employ fine motor-skills to enhance flight control. Recent work reveals sophisticated multi-modal processing of visual and olfactory channels by the bee brain to enhance foraging efficiency, but it currently remains unclear whether wind-induced mechano-sensory inputs are also integrated with visual information to facilitate decision making. Individual honeybees were trained in a linear flight arena with appetitive-aversive differential conditioning to use a context-setting cue of 3 m s−1 cross-wind direction to enable decisions about either a 'blue' or 'yellow' star stimulus being the correct alternative. Colour stimuli properties were mapped in bee-specific opponent-colour spaces to validate saliency, and to thus enable rapid reverse learning. Bees were able to integrate mechano-sensory and visual information to facilitate decisions that were significantly different to chance expectation after 35 learning trials. An independent group of bees were trained to find a single rewarding colour that was unrelated to the wind direction. In these trials, wind was not used as a context-setting cue and served only as a potential distracter in identifying the relevant rewarding visual stimuli. Comparison between respective groups shows that bees can learn to integrate visual and mechano-sensory information in a non-elemental fashion, revealing an unsuspected level of sensory processing in honeybees, and adding to the growing body of knowledge on the capacity of insect brains to use multi-modal sensory inputs in mediating foraging behaviour. © 2016. Published by The Company of Biologists Ltd.

  14. A comparison of ordinary fuzzy and intuitionistic fuzzy approaches in visualizing the image of flat electroencephalography

    NASA Astrophysics Data System (ADS)

    Zenian, Suzelawati; Ahmad, Tahir; Idris, Amidora

    2017-09-01

Medical imaging is a subfield of image processing that deals with medical images. It is crucial for visualizing body parts in a non-invasive way using appropriate image processing techniques. Generally, image processing is used to enhance the visual appearance of images for further interpretation. However, the pixel values of an image may not be precise, as uncertainty arises within the gray values of an image due to several factors. In this paper, the input and output images of Flat Electroencephalography (fEEG) of an epileptic patient at varying times are presented. Furthermore, ordinary fuzzy and intuitionistic fuzzy approaches are applied to the input images and the results of the two approaches are compared.
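The ordinary-fuzzy branch of this kind of contrast enhancement can be sketched with the classical membership-plus-intensification scheme (a Pal–King-style sketch; the linear membership function and the function name are illustrative assumptions, and the intuitionistic variant, which additionally models a hesitation degree, is not shown here).

```python
import numpy as np

def fuzzy_enhance(img, max_gray=255, passes=1):
    """Ordinary fuzzy image enhancement sketch: fuzzify gray levels
    into a membership plane, apply the intensification (INT)
    operator, then defuzzify back to gray levels."""
    # Fuzzification: linear membership in [0, 1]
    mu = img.astype(float) / max_gray
    for _ in range(passes):
        # INT operator: push memberships away from the crossover 0.5,
        # darkening dark pixels and brightening bright ones
        mu = np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2)
    # Defuzzification back to the gray-level range
    return (mu * max_gray).round().astype(np.uint8)
```

Repeated passes steepen the contrast curve; the intuitionistic approach replaces the single membership value per pixel with membership and non-membership degrees whose gap encodes the uncertainty the abstract refers to.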

  15. Influence of callosal transfer on visual cortical evoked response and the implication in the development of a visual prosthesis.

    PubMed

    Siu, Timothy L; Morley, John W

    2007-12-01

    The development of a visual prosthesis has been limited by an incomplete understanding of functional changes of the visual cortex accompanying deafferentation. In particular, the role of the corpus callosum in modulating these changes has not been fully evaluated. Recent experimental evidence suggests that through synaptic modulation, short-term (4-5 days) visual deafferentation can induce plastic changes in the visual cortex, leading to adaptive enhancement of residual visual input. We therefore investigated whether a compensatory rerouting of visual information can occur via the indirect transcallosal linkage after deafferentation and the influence of this interhemispheric communication on the visual evoked response of each hemisphere. In albino rabbits, misrouting of uncrossed optic fibres reduces ipsilateral input to a negligible degree. We thus took advantage of this congenital anomaly to model unilateral cortical and ocular deafferentation by eliminating visual input from one eye and recorded the visual evoked potential (VEP) from the intact eye. In keeping with the chiasmal anomaly, no VEP was elicited from the hemisphere ipsilateral to the intact eye. This remained unchanged following unilateral visual deafferentation. The amplitude and latency of the VEP in the fellow hemisphere, however, were significantly decreased in the deafferented animals. Our data suggest that callosal linkage does not contribute to visual evoked responses and this is not changed after short-term deafferentation. The decrease in amplitude and latency of evoked responses in the hemisphere ipsilateral to the treated eye, however, confirms the facilitatory role of callosal transfer. This observation highlights the importance of bicortical stimulation in the future design of a cortical visual prosthesis.

  16. Learning enhances the relative impact of top-down processing in the visual cortex

    PubMed Central

    Makino, Hiroshi; Komiyama, Takaki

    2015-01-01

    Theories have proposed that in sensory cortices learning can enhance top-down modulation by higher brain areas while reducing bottom-up sensory inputs. To address circuit mechanisms underlying this process, we examined the activity of layer 2/3 (L2/3) excitatory neurons in the mouse primary visual cortex (V1) as well as L4 neurons, the main bottom-up source, and long-range top-down projections from the retrosplenial cortex (RSC) during associative learning over days using chronic two-photon calcium imaging. During learning, L4 responses gradually weakened, while RSC inputs became stronger. Furthermore, L2/3 acquired a ramp-up response temporal profile with learning, coinciding with a similar change in RSC inputs. Learning also reduced the activity of somatostatin-expressing inhibitory neurons (SOM-INs) in V1 that could potentially gate top-down inputs. Finally, RSC inactivation or SOM-IN activation was sufficient to partially reverse the learning-induced changes in L2/3. Together, these results reveal a learning-dependent dynamic shift in the balance between bottom-up and top-down information streams and uncover a role of SOM-INs in controlling this process. PMID:26167904

  17. Being First Matters: Topographical Representational Similarity Analysis of ERP Signals Reveals Separate Networks for Audiovisual Temporal Binding Depending on the Leading Sense.

    PubMed

    Cecere, Roberto; Gross, Joachim; Willis, Ashleigh; Thut, Gregor

    2017-05-24

In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory-visual (AV) or visual-auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV-VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV-VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV-VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AVmaps = VAmaps versus AVmaps ≠ VAmaps. The tRSA results favored the AVmaps ≠ VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems.
SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA). Copyright © 2017 Cecere et al.
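The core comparison in the tRSA described above can be sketched as a per-timepoint spatial correlation between the AV and VA topographies. This is a simplified sketch (function and variable names are illustrative): the full analysis builds cross-correlation matrices and correlates them with the two model matrices, which is omitted here.

```python
import numpy as np

def spatial_similarity(av_maps, va_maps):
    """Per-timepoint spatial Pearson correlation between two sets of
    scalp topographies, each of shape (n_times, n_channels).
    Returns r(t); values near 1 support 'AV = VA' at that time point,
    low or negative values support 'AV != VA'."""
    # Remove the spatial mean of each map before correlating
    av = av_maps - av_maps.mean(axis=1, keepdims=True)
    va = va_maps - va_maps.mean(axis=1, keepdims=True)
    num = (av * va).sum(axis=1)
    den = np.linalg.norm(av, axis=1) * np.linalg.norm(va, axis=1)
    return num / den
```

A consistently low r(t) across the post-stimulus window is the pattern that would favor the AVmaps ≠ VAmaps model reported in the record.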

  18. Attention enhances contrast appearance via increased input baseline of neural responses

    PubMed Central

    Cutrone, Elizabeth K.; Heeger, David J.; Carrasco, Marisa

    2014-01-01

    Covert spatial attention increases the perceived contrast of stimuli at attended locations, presumably via enhancement of visual neural responses. However, the relation between perceived contrast and the underlying neural responses has not been characterized. In this study, we systematically varied stimulus contrast, using a two-alternative, forced-choice comparison task to probe the effect of attention on appearance across the contrast range. We modeled performance in the task as a function of underlying neural contrast-response functions. Fitting this model to the observed data revealed that an increased input baseline in the neural responses accounted for the enhancement of apparent contrast with spatial attention. PMID:25549920
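The "increased input baseline" account in this record can be made concrete with a standard Naka–Rushton contrast-response function in which attention adds a constant to the effective input before the saturating nonlinearity. This is a sketch with illustrative parameter values, not the fitted model from the study.

```python
import numpy as np

def contrast_response(c, r_max=1.0, c50=0.3, n=2.0, input_baseline=0.0):
    """Naka-Rushton contrast-response function. Attention is modeled
    as an additive input baseline: it raises the effective drive
    before the nonlinearity, lifting the curve most visibly at low
    contrasts while responses still saturate near r_max."""
    drive = (np.asarray(c, dtype=float) + input_baseline) ** n
    return r_max * drive / (drive + c50 ** n)

contrasts = np.linspace(0.0, 1.0, 6)
unattended = contrast_response(contrasts)
attended = contrast_response(contrasts, input_baseline=0.05)
```

Under this parameterization the attended curve lies above the unattended one across the contrast range, which is the qualitative signature the appearance data were modeled with.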

  19. A Brief Period of Postnatal Visual Deprivation Alters the Balance between Auditory and Visual Attention.

    PubMed

    de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier

    2016-11-21

    Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli. Copyright © 2016 Elsevier Ltd. All rights reserved.
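The race-model test referenced above (Miller's inequality) bounds the redundant-target CDF by the sum of the unisensory CDFs; time points where the observed audiovisual CDF exceeds that bound imply genuine multisensory integration. A minimal sketch (the function name and the empirical-CDF implementation are illustrative):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Miller's race-model inequality: under a race, the audiovisual
    CDF must satisfy G_AV(t) <= F_A(t) + F_V(t). Returns a boolean
    array marking time points where the observed AV CDF exceeds the
    race-model bound."""
    def ecdf(rts, t):
        rts = np.sort(np.asarray(rts, dtype=float))
        return np.searchsorted(rts, t, side="right") / len(rts)
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) > bound
```

Violations of the bound at fast reaction times are what the record's redundancy gains "surpassing the race model" refer to.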

  20. The effect of transcutaneous electrical nerve stimulation on postural sway on fatigued dorsi-plantar flexor.

    PubMed

    Yu, JaeHo; Lee, SoYeon; Kim, HyongJo; Seo, DongKwon; Hong, JiHeon; Lee, DongYeop

    2014-01-01

The application of transcutaneous electrical nerve stimulation (TENS) can counteract the muscle weakness and impaired static balance caused by muscle fatigue, and TENS has been reported to reduce postural sway. However, the application of TENS to the dorsi and plantar flexors separately, and the comparison of its effects with and without visual input, have not been studied. The aim of this study was therefore to compare the effects of TENS on the fatigued dorsi and plantar flexors with and without visual input. Thirteen healthy adult males and 12 females (mean age 20.5 ± 1.4 years; 25 in total) were recruited and agreed to participate after a preliminary study. The experiment used a single-group repeated-measures design over three days. On the first day, after exercise-induced fatigue, subjects maintained a standing position for 30 minutes and postural sway was then measured with eyes open (EO) and eyes closed (EC). On the second day, TENS was applied to the dorsiflexor in standing for 30 minutes after the same exercise-induced fatigue protocol. On the last day, TENS was applied to the plantar flexor and postural sway was measured on EO and EC after the same fatigue protocol. There was no statistically significant difference between the visual-input conditions. However, when the dorsi and plantar flexors were compared after TENS without visual input, postural sway was lower for the plantar flexor than for the dorsiflexor (p < 0.05). These results suggest that applying TENS to the gastrocnemius (GCM) reduces postural sway, and that together with visual input it helps to stabilize postural control and prevent falls.

  1. The effect of visual context on manual localization of remembered targets

    NASA Technical Reports Server (NTRS)

    Barry, S. R.; Bloomberg, J. J.; Huebner, W. P.

    1997-01-01

    This paper examines the contribution of egocentric cues and visual context to manual localization of remembered targets. Subjects pointed in the dark to the remembered position of a target previously viewed without or within a structured visual scene. Without a remembered visual context, subjects pointed to within 2 degrees of the target. The presence of a visual context with cues of straight ahead enhanced pointing performance to the remembered location of central but not off-center targets. Thus, visual context provides strong visual cues of target position and the relationship of body position to target location. Without a visual context, egocentric cues provide sufficient input for accurate pointing to remembered targets.

  2. Anodal transcranial direct current stimulation transiently improves contrast sensitivity and normalizes visual cortex activation in individuals with amblyopia.

    PubMed

    Spiegel, Daniel P; Byblow, Winston D; Hess, Robert F; Thompson, Benjamin

    2013-10-01

    Amblyopia is a neurodevelopmental disorder of vision that is associated with abnormal patterns of neural inhibition within the visual cortex. This disorder is often considered to be untreatable in adulthood because of insufficient visual cortex plasticity. There is increasing evidence that interventions that target inhibitory interactions within the visual cortex, including certain types of noninvasive brain stimulation, can improve visual function in adults with amblyopia. We tested the hypothesis that anodal transcranial direct current stimulation (a-tDCS) would improve visual function in adults with amblyopia by enhancing the neural response to inputs from the amblyopic eye. Thirteen adults with amblyopia participated and contrast sensitivity in the amblyopic and fellow fixing eye was assessed before, during and after a-tDCS or cathodal tDCS (c-tDCS). Five participants also completed a functional magnetic resonance imaging (fMRI) study designed to investigate the effect of a-tDCS on the blood oxygen level-dependent response within the visual cortex to inputs from the amblyopic versus the fellow fixing eye. A subgroup of 8/13 participants showed a transient improvement in amblyopic eye contrast sensitivity for at least 30 minutes after a-tDCS. fMRI measurements indicated that the characteristic cortical response asymmetry in amblyopes, which favors the fellow eye, was reduced by a-tDCS. These preliminary results suggest that a-tDCS deserves further investigation as a potential tool to enhance amblyopia treatment outcomes in adults.

  3. Being First Matters: Topographical Representational Similarity Analysis of ERP Signals Reveals Separate Networks for Audiovisual Temporal Binding Depending on the Leading Sense

    PubMed Central

    2017-01-01

In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory–visual (AV) or visual–auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV–VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV–VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV–VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AVmaps = VAmaps versus AVmaps ≠ VAmaps. The tRSA results favored the AVmaps ≠ VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems.
SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA). PMID:28450537
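
    The core map-comparison step of the tRSA described above can be sketched in a few lines; this is a minimal illustration with simulated data (array names, dimensions, and the use of a simple per-time-point Pearson correlation are assumptions, not the authors' analysis pipeline):

```python
import numpy as np

def map_similarity_timecourse(av_maps, va_maps):
    """Pearson correlation between AV and VA scalp topographies at each
    time point. Inputs are (n_times, n_channels) arrays of ERP maps;
    returns an (n_times,) similarity time course."""
    sims = np.empty(av_maps.shape[0])
    for t in range(av_maps.shape[0]):
        sims[t] = np.corrcoef(av_maps[t], va_maps[t])[0, 1]
    return sims

# Toy example: unrelated random topographies yield similarity near zero,
# which would favor an "AVmaps != VAmaps" model matrix.
rng = np.random.default_rng(0)
av = rng.standard_normal((250, 64))   # 250 time points, 64 channels
va = rng.standard_normal((250, 64))
sims = map_similarity_timecourse(av, va)
```

    In the full analysis, a time course like `sims` would be compared against the two model matrices (AVmaps = VAmaps versus AVmaps ≠ VAmaps) rather than inspected directly.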

  4. Nonlinear circuits for naturalistic visual motion estimation

    PubMed Central

    Fitzgerald, James E; Clark, Damon A

    2015-01-01

    Many animals use visual signals to estimate motion. Canonical models suppose that animals estimate motion by cross-correlating pairs of spatiotemporally separated visual signals, but recent experiments indicate that humans and flies perceive motion from higher-order correlations that signify motion in natural environments. Here we show how biologically plausible processing motifs in neural circuits could be tuned to extract this information. We emphasize how known aspects of Drosophila's visual circuitry could embody this tuning and predict fly behavior. We find that segregating motion signals into ON/OFF channels can enhance estimation accuracy by accounting for natural light/dark asymmetries. Furthermore, a diversity of inputs to motion detecting neurons can provide access to more complex higher-order correlations. Collectively, these results illustrate how non-canonical computations improve motion estimation with naturalistic inputs. This argues that the complexity of the fly's motion computations, implemented in its elaborate circuits, represents a valuable feature of its visual motion estimator. DOI: http://dx.doi.org/10.7554/eLife.09123.001 PMID:26499494
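
    The canonical model the abstract refers to, in which motion is estimated by cross-correlating pairs of spatiotemporally separated visual signals, is the classic Hassenstein–Reichardt correlator. A minimal sketch under simplified assumptions (a discrete delay and wrap-around signals; not the circuit model from this paper):

```python
import numpy as np

def hrc_response(left, right, delay=5):
    """Opponent correlator output: positive for left-to-right motion.

    Each arm multiplies a delayed copy of one input with the undelayed
    neighboring input; subtracting the mirror arm gives direction opponency.
    """
    return np.mean(np.roll(left, delay) * right - np.roll(right, delay) * left)

# A drifting sinusoid: the right detector sees the left signal 5 samples later
t = np.arange(200)
left = np.sin(2 * np.pi * t / 20)
right = np.roll(left, 5)

print(hrc_response(left, right) > 0)   # rightward motion -> positive output
print(hrc_response(right, left) > 0)   # reversed motion -> negative output
```

    Because the output is a product of pairwise signals, a detector like this is blind to the higher-order correlations discussed in the abstract; the paper's point is that additional nonlinear motifs (e.g., ON/OFF segregation) are needed to exploit them.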

  5. Combined contributions of feedforward and feedback inputs to bottom-up attention

    PubMed Central

    Khorsand, Peyman; Moore, Tirin; Soltani, Alireza

    2015-01-01

    In order to deal with the large amount of information carried by visual inputs entering the brain at any given point in time, the brain swiftly uses the same inputs to enhance processing in one part of the visual field at the expense of the others. These processes, collectively called bottom-up attentional selection, are assumed to rely solely on feedforward processing of the external inputs, as implied by the nomenclature. Nevertheless, evidence from recent experimental and modeling studies points to the role of feedback in bottom-up attention. Here, we review behavioral and neural evidence that feedback inputs are important for the formation of signals that could guide attentional selection based on exogenous inputs. Moreover, we review results from a modeling study elucidating mechanisms underlying the emergence of these signals in successive layers of neural populations and how they depend on feedback from higher visual areas. We use these results to interpret and discuss more recent findings that can further unravel the feedforward and feedback neural mechanisms underlying bottom-up attention. We argue that while it is descriptively useful to separate feedforward and feedback processes underlying bottom-up attention, these processes cannot be mechanistically separated into two successive stages, as they occur at almost the same time and affect neural activity within the same brain areas using similar neural mechanisms. Therefore, understanding the interaction and integration of feedforward and feedback inputs is crucial for a better understanding of bottom-up attention. PMID:25784883

  6. Overview of sports vision

    NASA Astrophysics Data System (ADS)

    Moore, Linda A.; Ferreira, Jannie T.

    2003-03-01

    Sports vision encompasses the visual assessment and provision of sports-specific visual performance enhancement and ocular protection for athletes of all ages, genders and levels of participation. In recent years, sports vision has been identified as one of the key performance indicators in sport. It is built on four main cornerstones: corrective eyewear, protective eyewear, visual skills enhancement and performance enhancement. Although clinically well established in the US, it is still a relatively new area of optometric specialisation elsewhere in the world and is gaining increasing popularity with eyecare practitioners and researchers. This research is often multi-disciplinary and involves input from a variety of subject disciplines, mainly those of optometry, medicine, physiology, psychology, physics, chemistry, computer science and engineering. Collaborative research projects are currently underway between staff of the Schools of Physics and Computing (DIT) and the Academy of Sports Vision (RAU).

  7. Visual BOLD Response in Late Blind Subjects with Argus II Retinal Prosthesis

    PubMed Central

    Castaldi, E.; Cicchini, G. M.; Cinelli, L.; Rizzo, S.; Morrone, M. C.

    2016-01-01

    Retinal prosthesis technologies require that the visual system downstream of the retinal circuitry be capable of transmitting and elaborating visual signals. We studied the capability of plastic remodeling in late blind subjects implanted with the Argus II Retinal Prosthesis with psychophysics and functional MRI (fMRI). After surgery, six out of seven retinitis pigmentosa (RP) blind subjects were able to detect high-contrast stimuli using the prosthetic implant. However, direction discrimination to contrast modulated stimuli remained at chance level in all of them. No subject showed any improvement of contrast sensitivity in either eye when not using the Argus II. Before the implant, the Blood Oxygenation Level Dependent (BOLD) activity in V1 and the lateral geniculate nucleus (LGN) was very weak or absent. Surprisingly, after prolonged use of Argus II, BOLD responses to visual input were enhanced. This is, to our knowledge, the first study tracking the neural changes of visual areas in patients after retinal implant, revealing a capacity to respond to restored visual input even after years of deprivation. PMID:27780207

  8. Hypothalamic Projections to the Optic Tectum in Larval Zebrafish

    PubMed Central

    Heap, Lucy A.; Vanwalleghem, Gilles C.; Thompson, Andrew W.; Favre-Bulle, Itia; Rubinsztein-Dunlop, Halina; Scott, Ethan K.

    2018-01-01

    The optic tectum of larval zebrafish is an important model for understanding visual processing in vertebrates. The tectum has been traditionally viewed as dominantly visual, with a majority of studies focusing on the processes by which tectal circuits receive and process retinally-derived visual information. Recently, a handful of studies have shown a much more complex role for the optic tectum in larval zebrafish, and anatomical and functional data from these studies suggest that this role extends beyond the visual system, and beyond the processing of exclusively retinal inputs. Consistent with this evolving view of the tectum, we have used a Gal4 enhancer trap line to identify direct projections from rostral hypothalamus (RH) to the tectal neuropil of larval zebrafish. These projections ramify within the deepest laminae of the tectal neuropil, the stratum album centrale (SAC)/stratum griseum periventriculare (SPV), and also innervate strata distinct from those innervated by retinal projections. Using optogenetic stimulation of the hypothalamic projection neurons paired with calcium imaging in the tectum, we find rebound firing in tectal neurons consistent with hypothalamic inhibitory input. Our results suggest that tectal processing in larval zebrafish is modulated by hypothalamic inhibitory inputs to the deep tectal neuropil. PMID:29403362

  9. Hypothalamic Projections to the Optic Tectum in Larval Zebrafish.

    PubMed

    Heap, Lucy A; Vanwalleghem, Gilles C; Thompson, Andrew W; Favre-Bulle, Itia; Rubinsztein-Dunlop, Halina; Scott, Ethan K

    2017-01-01

    The optic tectum of larval zebrafish is an important model for understanding visual processing in vertebrates. The tectum has been traditionally viewed as dominantly visual, with a majority of studies focusing on the processes by which tectal circuits receive and process retinally-derived visual information. Recently, a handful of studies have shown a much more complex role for the optic tectum in larval zebrafish, and anatomical and functional data from these studies suggest that this role extends beyond the visual system, and beyond the processing of exclusively retinal inputs. Consistent with this evolving view of the tectum, we have used a Gal4 enhancer trap line to identify direct projections from rostral hypothalamus (RH) to the tectal neuropil of larval zebrafish. These projections ramify within the deepest laminae of the tectal neuropil, the stratum album centrale (SAC)/stratum griseum periventriculare (SPV), and also innervate strata distinct from those innervated by retinal projections. Using optogenetic stimulation of the hypothalamic projection neurons paired with calcium imaging in the tectum, we find rebound firing in tectal neurons consistent with hypothalamic inhibitory input. Our results suggest that tectal processing in larval zebrafish is modulated by hypothalamic inhibitory inputs to the deep tectal neuropil.

  10. The strength of attentional biases reduces as visual short-term memory load increases

    PubMed Central

    Shimi, A.

    2013-01-01

    Despite our visual system receiving irrelevant input that competes with task-relevant signals, we are able to pursue our perceptual goals. Attention enhances our visual processing by biasing the processing of the input that is relevant to the task at hand. The top-down signals enabling these biases are therefore important for regulating lower level sensory mechanisms. In three experiments, we examined whether we apply similar biases to successfully maintain information in visual short-term memory (VSTM). We presented participants with targets alongside distracters and we graded their perceptual similarity to vary the extent to which they competed. Experiments 1 and 2 showed that the more items held in VSTM before the onset of the distracters, the more perceptually distinct the distracters needed to be for participants to retain the target accurately. Experiment 3 extended these behavioral findings by demonstrating that the perceptual similarity between target and distracters exerted a significantly greater effect on occipital alpha amplitudes, depending on the number of items already held in VSTM. The trade-off between VSTM load and target-distracter competition suggests that VSTM and perceptual competition share a partially overlapping mechanism, namely top-down inputs into sensory areas. PMID:23576694

  11. Chronic amphetamine enhances visual input to and suppresses visual output from the superior colliculus in withdrawal.

    PubMed

    Turner, Amy C; Kraev, Igor; Stewart, Michael G; Stramek, Agata; Overton, Paul G; Dommett, Eleanor J

    2018-06-04

    Heightened distractibility is a core symptom of Attention Deficit Hyperactivity Disorder (ADHD). Effective treatment is normally with chronic orally administered psychostimulants including amphetamine. Treatment prevents worsening of symptoms, but the site of the therapeutic processes, and their nature, is unknown. Mounting evidence suggests that the superior colliculus (SC) is a key substrate in distractibility and a therapeutic target, so we assessed whether therapeutically relevant changes are induced in this structure by chronic oral amphetamine. We hypothesized that amphetamine would alter visual responses and morphological measures. Six-week-old healthy male rats were treated with oral amphetamine (2, 5 or 10 mg/kg) or a vehicle for one month, after which local field potential and multiunit recordings were made, in withdrawal, from the superficial layers of the SC in response to whole-field light flashes. Rapid Golgi staining was also used to assess dendritic spines, and synaptophysin staining was used to assess synaptic integrity. Chronic amphetamine increased local field potential responses at higher doses, and increased synaptophysin expression, suggesting enhanced visual input involving presynaptic remodelling. No comparable increases in multiunit activity were found, suggesting that amphetamine suppresses collicular output activity, counterbalancing the increased input. We also report, for the first time, five different dendritic spine types in the superficial layers and show these to be unaffected by amphetamine, indicating that the suppression does not involve gross postsynaptic structural alterations. In conclusion, we suggest that amphetamine produces changes at the collicular level that potentially stabilise the structure and may prevent the worsening of symptoms in disorders like ADHD. Copyright © 2018. Published by Elsevier Ltd.

  12. Enhancing Soldier Performance: A Nonlinear Model of Performance to Improve Selection Testing and Training

    DTIC Science & Technology

    1994-07-01

    psychological refractory period 15. Two-flash threshold 16. Critical flicker fusion (CFF) 17. Steady state visually evoked response 18. Auditory brain stem...States of awareness I: Subliminal perception relationships to situational awareness (AL-TR-1992-0085). Brooks Air Force Base, TX: Armstrong...the signals required different inputs (e.g., visual versus auditory) (Colley & Beech, 1989). Despite support of this theory from such experiments

  13. The Development of Visual Interface Enhancements for Player Input to the JTLS (Joint Theater-Level Simulation) Wargame.

    DTIC Science & Technology

    1987-03-01

    7. STEP 4 - CURRENT VERSION ... 8. STEP 4 - PROTOTYPE...1-4 respectively. Tables 2, 4, 6, and 8 are the respective prototype versions of source code. There are several noticeable differences between the...prompt in the scroll area (to make an input). This is distracting and time consuming. TABLE 8 STEP 4 - PROTOTYPE GetNextEvent MouseClick

  14. Not All Attention Orienting is Created Equal: Recognition Memory is Enhanced When Attention Orienting Involves Distractor Suppression

    PubMed Central

    Markant, Julie; Worden, Michael S.; Amso, Dima

    2015-01-01

    Learning through visual exploration often requires orienting of attention to meaningful information in a cluttered world. Previous work has shown that attention modulates visual cortex activity, with enhanced activity for attended targets and suppressed activity for competing inputs, thus enhancing the visual experience. Here we examined the idea that learning may be engaged differentially with variations in the attention orienting mechanisms that drive eye movements during visual search and exploration. We hypothesized that attention orienting mechanisms that engage suppression of a previously attended location would boost memory encoding of the currently attended target objects to a greater extent than those that involve target enhancement alone. To test this hypothesis we capitalized on the classic spatial cueing task and the inhibition of return (IOR) mechanism (Posner, 1980; Posner, Rafal, & Choate, 1985) to demonstrate that object images encoded in the context of concurrent suppression at a previously attended location were encoded more effectively and remembered better than those encoded without concurrent suppression. Furthermore, fMRI analyses revealed that this memory benefit was driven by attention modulation of visual cortex activity, as increased suppression of the previously attended location in visual cortex during target object encoding predicted better subsequent recognition memory performance. These results suggest that not all attention orienting impacts learning and memory equally. PMID:25701278

  15. Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.

    PubMed

    Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K

    2017-08-30

    A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence, and examined the effects on the perceptual echo, finding that echo amplitude linearly increased with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning.
The perceptual echo is a long-lasting reverberation between a rapidly changing visual input and evoked neural activity, apparent in cross-correlations between occipital EEG and stimulus sequences, peaking in the alpha (∼10 Hz) range. We indeed found that perceptual echo is enhanced by repeatedly presenting the same visual sequence, indicating that the human visual system can rapidly and automatically learn regularities embedded within fast-changing dynamic sequences. These results point to a previously undiscovered regularity learning mechanism, operating at a rate defined by the alpha frequency. Copyright © 2017 the authors 0270-6474/17/378486-12$15.00/0.
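
    The stimulus-EEG cross-correlation at the heart of the perceptual echo can be illustrated with simulated data; the sampling rate, impulse response, and noise level below are hypothetical choices for the sketch, not the study's parameters:

```python
import numpy as np

fs = 160                        # sampling rate in Hz (hypothetical)
n = fs * 6                      # 6 s random luminance sequence
rng = np.random.default_rng(1)
lum = rng.standard_normal(n)    # stimulus: random luminance values

# Simulated occipital EEG: the stimulus reverberates at ~10 Hz (alpha),
# decaying over a few hundred milliseconds, plus measurement noise.
lags = np.arange(fs)            # 1 s impulse response
irf = np.cos(2 * np.pi * 10 * lags / fs) * np.exp(-lags / (0.4 * fs))
eeg = np.convolve(lum, irf)[:n] + rng.standard_normal(n)

# Cross-correlating stimulus and EEG at positive lags recovers the echo
xcorr = np.array([np.mean(lum[:n - k] * eeg[k:]) for k in range(fs)])
echo_fidelity = np.corrcoef(xcorr, irf)[0, 1]   # high: echo tracks the IRF
```

    In the study, an amplitude measure of a cross-correlation function like `xcorr` is what grows with repeated presentations of the same luminance sequence.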

  16. The Synaptic and Morphological Basis of Orientation Selectivity in a Polyaxonal Amacrine Cell of the Rabbit Retina.

    PubMed

    Murphy-Baum, Benjamin L; Taylor, W Rowland

    2015-09-30

    Much of the computational power of the retina derives from the activity of amacrine cells, a large and diverse group of GABAergic and glycinergic inhibitory interneurons. Here, we identify an ON-type orientation-selective, wide-field, polyaxonal amacrine cell (PAC) in the rabbit retina and demonstrate how its orientation selectivity arises from the structure of the dendritic arbor and the pattern of excitatory and inhibitory inputs. Excitation from ON bipolar cells and inhibition arising from the OFF pathway converge to generate a quasi-linear integration of visual signals in the receptive field center. This serves to suppress responses to high spatial frequencies, thereby improving sensitivity to larger objects and enhancing orientation selectivity. Inhibition also regulates the magnitude and time course of excitatory inputs to this PAC through serial inhibitory connections onto the presynaptic terminals of ON bipolar cells. This presynaptic inhibition is driven by graded potentials within local microcircuits, similar in extent to the size of single bipolar cell receptive fields. Additional presynaptic inhibition is generated by spiking amacrine cells on a larger spatial scale covering several hundred microns. The orientation selectivity of this PAC may be a substrate for the inhibition that mediates orientation selectivity in some types of ganglion cells. Significance statement: The retina comprises numerous excitatory and inhibitory circuits that encode specific features in the visual scene, such as orientation, contrast, or motion. Here, we identify a wide-field inhibitory neuron that responds to visual stimuli of a particular orientation, a feature selectivity that is primarily due to the elongated shape of the dendritic arbor. Integration of convergent excitatory and inhibitory inputs from the ON and OFF visual pathways suppress responses to small objects and fine textures, thus enhancing selectivity for larger objects. 
Feedback inhibition regulates the strength and speed of excitation on both local and wide-field spatial scales. This study demonstrates how different synaptic inputs are regulated to tune a neuron to respond to specific features in the visual scene. Copyright © 2015 the authors 0270-6474/15/3513336-15$15.00/0.

  17. Enhancing calibrated peer review for improved engineering communication education.

    DOT National Transportation Integrated Search

    2008-01-01

    The objectives of this study are to extend Calibrated Peer Review (CPR) to allow for the input and review of visual and verbal components to the process, develop assignments in a set of core engineering courses that use these facilities, assess the i...

  18. Visual cortex activation in late-onset, Braille naive blind individuals: an fMRI study during semantic and phonological tasks with heard words.

    PubMed

    Burton, Harold; McLaren, Donald G

    2006-01-09

    Visual cortex activity in the blind has been shown in Braille literate people, which raises the question of whether Braille literacy influences cross-modal reorganization. We used fMRI to examine visual cortex activation during semantic and phonological tasks with auditory presentation of words in two late-onset blind individuals who lacked Braille literacy. Multiple visual cortical regions were activated in the Braille naive individuals. Positive BOLD responses were noted in lower tier visuotopic (e.g., V1, V2, VP, and V3) and several higher tier visual areas (e.g., V4v, V8, and BA 37). Activity was more extensive and cross-correlation magnitudes were greater during the semantic compared to the phonological task. These results with Braille naive individuals plausibly suggest that visual deprivation alone induces visual cortex reorganization. Cross-modal reorganization of lower tier visual areas may be recruited by developing skills in attending to selected non-visual inputs (e.g., Braille literacy, enhanced auditory skills). Such learning might strengthen remote connections with multisensory cortical areas. Of necessity, the Braille naive participants must attend to auditory stimulation for language. We hypothesize that learning to attend to non-visual inputs probably strengthens the remaining active synapses following visual deprivation, and thereby, increases cross-modal activation of lower tier visual areas when performing highly demanding non-visual tasks of which reading Braille is just one example.

  19. Visual cortex activation in late-onset, Braille naive blind individuals: An fMRI study during semantic and phonological tasks with heard words

    PubMed Central

    Burton, Harold; McLaren, Donald G.

    2013-01-01

    Visual cortex activity in the blind has been shown in Braille literate people, which raises the question of whether Braille literacy influences cross-modal reorganization. We used fMRI to examine visual cortex activation during semantic and phonological tasks with auditory presentation of words in two late-onset blind individuals who lacked Braille literacy. Multiple visual cortical regions were activated in the Braille naive individuals. Positive BOLD responses were noted in lower tier visuotopic (e.g., V1, V2, VP, and V3) and several higher tier visual areas (e.g., V4v, V8, and BA 37). Activity was more extensive and cross-correlation magnitudes were greater during the semantic compared to the phonological task. These results with Braille naive individuals plausibly suggest that visual deprivation alone induces visual cortex reorganization. Cross-modal reorganization of lower tier visual areas may be recruited by developing skills in attending to selected non-visual inputs (e.g., Braille literacy, enhanced auditory skills). Such learning might strengthen remote connections with multisensory cortical areas. Of necessity, the Braille naive participants must attend to auditory stimulation for language. We hypothesize that learning to attend to non-visual inputs probably strengthens the remaining active synapses following visual deprivation, and thereby, increases cross-modal activation of lower tier visual areas when performing highly demanding non-visual tasks of which reading Braille is just one example. PMID:16198053

  20. Not all attention orienting is created equal: recognition memory is enhanced when attention orienting involves distractor suppression.

    PubMed

    Markant, Julie; Worden, Michael S; Amso, Dima

    2015-04-01

    Learning through visual exploration often requires orienting of attention to meaningful information in a cluttered world. Previous work has shown that attention modulates visual cortex activity, with enhanced activity for attended targets and suppressed activity for competing inputs, thus enhancing the visual experience. Here we examined the idea that learning may be engaged differentially with variations in attention orienting mechanisms that drive eye movements during visual search and exploration. We hypothesized that attention orienting mechanisms that engaged suppression of a previously attended location would boost memory encoding of the currently attended target objects to a greater extent than those that involve target enhancement alone. To test this hypothesis we capitalized on the classic spatial cueing task and the inhibition of return (IOR) mechanism (Posner, 1980; Posner, Rafal, & Choate, 1985) to demonstrate that object images encoded in the context of concurrent suppression at a previously attended location were encoded more effectively and remembered better than those encoded without concurrent suppression. Furthermore, fMRI analyses revealed that this memory benefit was driven by attention modulation of visual cortex activity, as increased suppression of the previously attended location in visual cortex during target object encoding predicted better subsequent recognition memory performance. These results suggest that not all attention orienting impacts learning and memory equally. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Contralateral Bias of High Spatial Frequency Tuning and Cardinal Direction Selectivity in Mouse Visual Cortex

    PubMed Central

    Zeitoun, Jack H.; Kim, Hyungtae

    2017-01-01

    Binocular mechanisms for visual processing are thought to enhance spatial acuity by combining matched input from the two eyes. Studies in the primary visual cortex of carnivores and primates have confirmed that eye-specific neuronal response properties are largely matched. In recent years, the mouse has emerged as a prominent model for binocular visual processing, yet little is known about the spatial frequency tuning of binocular responses in mouse visual cortex. Using calcium imaging in awake mice of both sexes, we show that the spatial frequency preference of cortical responses to the contralateral eye is ∼35% higher than responses to the ipsilateral eye. Furthermore, we find that neurons in binocular visual cortex that respond only to the contralateral eye are tuned to higher spatial frequencies. Binocular neurons that are well matched in spatial frequency preference are also matched in orientation preference. In contrast, we observe that binocularly mismatched cells are more mismatched in orientation tuning. Furthermore, we find that contralateral responses are more direction-selective than ipsilateral responses and are strongly biased to the cardinal directions. The contralateral bias of high spatial frequency tuning was found in both awake and anesthetized recordings. The distinct properties of contralateral cortical responses may reflect the functional segregation of direction-selective, high spatial frequency-preferring neurons in earlier stages of the central visual pathway. Moreover, these results suggest that the development of binocularity and visual acuity may engage distinct circuits in the mouse visual system. SIGNIFICANCE STATEMENT Seeing through two eyes is thought to improve visual acuity by enhancing sensitivity to fine edges. Using calcium imaging of cellular responses in awake mice, we find surprising asymmetries in the spatial processing of eye-specific visual input in binocular primary visual cortex. 
The contralateral visual pathway is tuned to higher spatial frequencies than the ipsilateral pathway. At the highest spatial frequencies, the contralateral pathway strongly prefers to respond to visual stimuli along the cardinal (horizontal and vertical) axes. These results suggest that monocular, and not binocular, mechanisms set the limit of spatial acuity in mice. Furthermore, they suggest that the development of visual acuity and binocularity in mice involves different circuits. PMID:28924011

  2. An Extended Normalization Model of Attention Accounts for Feature-Based Attentional Enhancement of Both Response and Coherence Gain

    PubMed Central

    Krishna, B. Suresh; Treue, Stefan

    2016-01-01

    Paying attention to a sensory feature improves its perception and impairs that of others. Recent work has shown that a Normalization Model of Attention (NMoA) can account for a wide range of physiological findings and the influence of different attentional manipulations on visual performance. A key prediction of the NMoA is that attention to a visual feature like an orientation or a motion direction will increase the response of neurons preferring the attended feature (response gain) rather than increase the sensory input strength of the attended stimulus (input gain). This effect of feature-based attention on neuronal responses should translate to similar patterns of improvement in behavioral performance, with psychometric functions showing response gain rather than input gain when attention is directed to the task-relevant feature. In contrast, we report here that when human subjects are cued to attend to one of two motion directions in a transparent motion display, attentional effects manifest as a combination of input and response gain. Further, the impact on input gain is greater when attention is directed towards a narrow range of motion directions than when it is directed towards a broad range. These results are captured by an extended NMoA, which either includes a stimulus-independent attentional contribution to normalization or utilizes direction-tuned normalization. The proposed extensions are consistent with the feature-similarity gain model of attention and the attentional modulation in extrastriate area MT, where neuronal responses are enhanced and suppressed by attention to preferred and non-preferred motion directions respectively. PMID:27977679
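
    The response-gain versus input-gain distinction can be illustrated with the standard Naka-Rushton contrast-response function, a common textbook formulation (the parameter values below are hypothetical and this is not the paper's fitted model): response gain scales the curve's maximum, while input gain effectively rescales the contrast axis.

```python
import numpy as np

def naka_rushton(c, rmax=1.0, c50=0.2, n=2.0):
    """Standard contrast-response function R(c) = rmax * c^n / (c^n + c50^n)."""
    return rmax * c ** n / (c ** n + c50 ** n)

c = np.logspace(-2, 0, 50)                       # contrasts from 1% to 100%
baseline = naka_rushton(c)
with_response_gain = naka_rushton(c, rmax=1.5)   # whole curve scaled up
with_input_gain = naka_rushton(2.0 * c)          # contrast axis rescaled
```

    Note that doubling the effective input contrast is equivalent to halving the semisaturation constant (here c50 from 0.2 to 0.1), which shifts the function leftward on a log-contrast axis, whereas response gain multiplies the response at every contrast.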

  3. The Euler’s Graphical User Interface Spreadsheet Calculator for Solving Ordinary Differential Equations by Visual Basic for Application Programming

    NASA Astrophysics Data System (ADS)

    Gaik Tay, Kim; Cheong, Tau Han; Foong Lee, Ming; Kek, Sie Long; Abdul-Kahar, Rosmila

    2017-08-01

In previous work on an Euler's spreadsheet calculator for solving an ordinary differential equation, Visual Basic for Applications (VBA) programming was used; however, no graphical user interface was developed to capture user input. This weakness may confuse users about the input and output, since both are displayed in the same worksheet. Besides, the existing Euler's spreadsheet calculator is not interactive, as there is no prompt message when a parameter is entered incorrectly. On top of that, there are no instructions to guide users in entering the derivative function. Hence, in this paper, we address these limitations by developing a user-friendly, interactive graphical user interface. This improvement aims to capture users' input, with instructions and interactive error prompts, using VBA programming. This Euler's graphical user interface spreadsheet calculator does not act as a black box: users can click on any cell in the worksheet to see the formula used to implement the numerical scheme. In this way, it can enhance self-learning and lifelong learning in implementing the numerical scheme in a spreadsheet and, later, in any programming language.
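The numerical scheme such a calculator implements is the forward Euler method. A minimal sketch outside the spreadsheet follows; the test equation y' = y and the step size are illustrative choices, and the VBA/GUI layer described in the abstract is omitted.

```python
# Forward Euler method for y' = f(x, y): y_{k+1} = y_k + h * f(x_k, y_k).
# The ODE and step parameters below are illustrative choices.

def euler(f, x0, y0, h, n):
    """Advance y' = f(x, y) from (x0, y0) by n steps of size h.
    Returns the lists of x and y values at every step."""
    x, y = x0, y0
    xs, ys = [x0], [y0]
    for _ in range(n):
        y = y + h * f(x, y)   # Euler update
        x = x + h
        xs.append(x)
        ys.append(y)
    return xs, ys

# Example: y' = y with y(0) = 1, stepping to x = 1 (true solution: e).
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)
```

For this equation each step multiplies y by (1 + h), so ten steps of size 0.1 give 1.1**10 ≈ 2.594, an underestimate of e that shrinks as h decreases.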

  4. Cholinergic, But Not Dopaminergic or Noradrenergic, Enhancement Sharpens Visual Spatial Perception in Humans

    PubMed Central

    Wallace, Deanna L.

    2017-01-01

    The neuromodulator acetylcholine modulates spatial integration in visual cortex by altering the balance of inputs that generate neuronal receptive fields. These cholinergic effects may provide a neurobiological mechanism underlying the modulation of visual representations by visual spatial attention. However, the consequences of cholinergic enhancement on visuospatial perception in humans are unknown. We conducted two experiments to test whether enhancing cholinergic signaling selectively alters perceptual measures of visuospatial interactions in human subjects. In Experiment 1, a double-blind placebo-controlled pharmacology study, we measured how flanking distractors influenced detection of a small contrast decrement of a peripheral target, as a function of target-flanker distance. We found that cholinergic enhancement with the cholinesterase inhibitor donepezil improved target detection, and modeling suggested that this was mainly due to a narrowing of the extent of facilitatory perceptual spatial interactions. In Experiment 2, we tested whether these effects were selective to the cholinergic system or would also be observed following enhancements of related neuromodulators dopamine or norepinephrine. Unlike cholinergic enhancement, dopamine (bromocriptine) and norepinephrine (guanfacine) manipulations did not improve performance or systematically alter the spatial profile of perceptual interactions between targets and distractors. These findings reveal mechanisms by which cholinergic signaling influences visual spatial interactions in perception and improves processing of a visual target among distractors, effects that are notably similar to those of spatial selective attention. SIGNIFICANCE STATEMENT Acetylcholine influences how visual cortical neurons integrate signals across space, perhaps providing a neurobiological mechanism for the effects of visual selective attention. 
However, the influence of cholinergic enhancement on visuospatial perception remains unknown. Here we demonstrate that cholinergic enhancement improves detection of a target flanked by distractors, consistent with sharpened visuospatial perceptual representations. Furthermore, whereas most pharmacological studies focus on a single neurotransmitter, many neuromodulators can have related effects on cognition and perception. Thus, we also demonstrate that enhancing noradrenergic and dopaminergic systems does not systematically improve visuospatial perception or alter its tuning. Our results link visuospatial tuning effects of acetylcholine at the neuronal and perceptual levels and provide insights into the connection between cholinergic signaling and visual attention. PMID:28336568

  5. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    PubMed

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. Parametric Study of Diffusion-Enhancement Networks for Spatiotemporal Grouping in Real-Time Artificial Vision

    DTIC Science & Technology

    1993-04-01

suggesting it occurs in later visual motion processing (long-range or second-order system). [Figure 2. Gamma motion. (a) A light of fixed spatial extent is illuminated then extinguished. (b) The percept is of a light expanding and then... ] ...while smaller, type-B cells provide input to its parvocellular subdivision. From here the magnocellular pathway progresses up through visual cortex area V

  7. Application of Data Mining and Knowledge Discovery Techniques to Enhance Binary Target Detection and Decision-Making for Compromised Visual Images

    DTIC Science & Technology

    2004-11-01

affords exciting opportunities in target detection. The input signal may be a sum of sine waves, it could be an auditory signal, or possibly a visual... rendering of a scene. Since image processing is an area in which the original data are stationary in some sense (auditory signals suffer from... Example 1 of SR: Identification of a Subliminal Signal below a Threshold... Example 2 of SR

  8. The impact of attentional, linguistic, and visual features during object naming

    PubMed Central

    Clarke, Alasdair D. F.; Coco, Moreno I.; Keller, Frank

    2013-01-01

Object detection and identification are fundamental to human vision, and there is mounting evidence that objects guide the allocation of visual attention. However, the role of objects in tasks involving multiple modalities is less clear. To address this question, we investigate object naming, a task in which participants have to verbally identify objects they see in photorealistic scenes. We report an eye-tracking study that investigates which features (attentional, visual, and linguistic) influence object naming. We find that the amount of visual attention directed toward an object, its position and saliency, along with linguistic factors such as word frequency, animacy, and semantic proximity, significantly influence whether the object will be named or not. We then ask how features from different modalities are combined during naming, and find significant interactions between saliency and position, saliency and linguistic features, and attention and position. We conclude that when the cognitive system performs tasks such as object naming, it uses input from one modality to constrain or enhance the processing of other modalities, rather than processing each input modality independently. PMID:24379792

  9. Perceptual Contrast Enhancement with Dynamic Range Adjustment

    PubMed Central

    Zhang, Hong; Li, Yuecheng; Chen, Hao; Yuan, Ding; Sun, Mingui

    2013-01-01

In recent years, although great efforts have been made to improve its performance, few histogram equalization (HE) methods take human visual perception (HVP) into account explicitly. The human visual system (HVS) is more sensitive to edges than to brightness. This paper exploits this property and develops a perceptual contrast enhancement approach with dynamic range adjustment through histogram modification. The use of perceptual contrast connects the image enhancement problem with the HVS. To pre-condition the input image before the HE procedure is applied, a perceptual contrast map (PCM) is constructed based on a modified Difference of Gaussians (DoG) algorithm. As a result, the contrast of the image is sharpened and high-frequency noise is suppressed. A modified Clipped Histogram Equalization (CHE) is also developed, which improves visual quality by automatically detecting the dynamic range of the image with improved perceptual contrast. Experimental results show that the new HE algorithm outperforms several state-of-the-art algorithms in improving perceptual contrast and enhancing details. In addition, the new algorithm is simple to implement, making it suitable for real-time applications. PMID:24339452
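The edge sensitivity that motivates the perceptual contrast map can be illustrated with a minimal one-dimensional Difference-of-Gaussians sketch: subtracting a wide Gaussian blur from a narrow one responds strongly at edges and stays near zero in flat regions. The kernel widths, border handling, and toy signal here are assumptions for demonstration; the paper's two-dimensional PCM construction differs in detail.

```python
# 1-D Difference-of-Gaussians (DoG) as an edge/contrast detector.
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of half-width `radius`."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """Convolution with clamp-to-edge border handling."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def dog_response(signal, sigma_narrow=1.0, sigma_wide=3.0, radius=9):
    """|narrow blur - wide blur|: large near edges, ~0 in flat regions."""
    narrow = convolve(signal, gaussian_kernel(sigma_narrow, radius))
    wide = convolve(signal, gaussian_kernel(sigma_wide, radius))
    return [abs(a - b) for a, b in zip(narrow, wide)]

# A step edge: flat, then a jump, then flat again.
signal = [0.0] * 20 + [1.0] * 20
pcm = dog_response(signal)  # peaks near the edge, near zero elsewhere
```

Weighting a histogram by such a map biases the subsequent equalization toward edge regions, the property the abstract attributes to the HVS.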

  10. Cortical systems mediating visual attention to both objects and spatial locations

    PubMed Central

    Shomstein, Sarah; Behrmann, Marlene

    2006-01-01

    Natural visual scenes consist of many objects occupying a variety of spatial locations. Given that the plethora of information cannot be processed simultaneously, the multiplicity of inputs compete for representation. Using event-related functional MRI, we show that attention, the mechanism by which a subset of the input is selected, is mediated by the posterior parietal cortex (PPC). Of particular interest is that PPC activity is differentially sensitive to the object-based properties of the input, with enhanced activation for those locations bound by an attended object. Of great interest too is the ensuing modulation of activation in early cortical regions, reflected as differences in the temporal profile of the blood oxygenation level-dependent (BOLD) response for within-object versus between-object locations. These findings indicate that object-based selection results from an object-sensitive reorienting signal issued by the PPC. The dynamic circuit between the PPC and earlier sensory regions then enables observers to attend preferentially to objects of interest in complex scenes. PMID:16840559

  11. There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task

    PubMed Central

    Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel

    2016-01-01

When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
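The role of the divisive normalization step can be shown with a toy one-dimensional sketch: dividing each local response by pooled neighborhood activity compresses the advantage of a highly salient item over its neighbors. The pool size, semisaturation constant, and response values below are illustrative assumptions, not the model's fitted parameters.

```python
# Toy divisive normalization over a 1-D array of feature responses.

def normalize(responses, pool_radius=1, sigma=0.1):
    """Divide each response by sigma plus the summed activity in a
    local pool, so no single item can monopolize the priority map."""
    out = []
    for i, r in enumerate(responses):
        lo = max(0, i - pool_radius)
        hi = min(len(responses), i + pool_radius + 1)
        pool = sum(responses[lo:hi])
        out.append(r / (sigma + pool))
    return out

# A highly salient item (5.0) next to a target-matching response (1.0):
raw = [0.2, 5.0, 1.0, 0.2]
priority = normalize(raw)
# The salient item still wins, but its advantage over the weakest
# response is smaller after normalization than in the raw responses.
```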

  12. Cognitive/emotional models for human behavior representation in 3D avatar simulations

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-08-01

Simplified models of human cognition and emotional response are presented which are based on models of auditory/visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex which is based on new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher level abstract (meta level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python, etc.) and efficient use of legacy code.

  13. The effects of visual stimulation and selective visual attention on rhythmic neuronal synchronization in macaque area V4.

    PubMed

    Fries, Pascal; Womelsdorf, Thilo; Oostenveld, Robert; Desimone, Robert

    2008-04-30

    Selective attention lends relevant sensory input priority access to higher-level brain areas and ultimately to behavior. Recent studies have suggested that those neurons in visual areas that are activated by an attended stimulus engage in enhanced gamma-band (30-70 Hz) synchronization compared with neurons activated by a distracter. Such precise synchronization could enhance the postsynaptic impact of cells carrying behaviorally relevant information. Previous studies have used the local field potential (LFP) power spectrum or spike-LFP coherence (SFC) to indirectly estimate spike synchronization. Here, we directly demonstrate zero-phase gamma-band coherence among spike trains of V4 neurons. This synchronization was particularly evident during visual stimulation and enhanced by selective attention, thus confirming the pattern inferred from LFP power and SFC. We therefore investigated the time course of LFP gamma-band power and found rapid dynamics consistent with interactions of top-down spatial and feature attention with bottom-up saliency. In addition to the modulation of synchronization during visual stimulation, selective attention significantly changed the prestimulus pattern of synchronization. Attention inside the receptive field of the recorded neuronal population enhanced gamma-band synchronization and strongly reduced alpha-band (9-11 Hz) synchronization in the prestimulus period. These results lend further support for a functional role of rhythmic neuronal synchronization in attentional stimulus selection.

  14. Transcranial direct current stimulation enhances recovery of stereopsis in adults with amblyopia.

    PubMed

    Spiegel, Daniel P; Li, Jinrong; Hess, Robert F; Byblow, Winston D; Deng, Daming; Yu, Minbin; Thompson, Benjamin

    2013-10-01

    Amblyopia is a neurodevelopmental disorder of vision caused by abnormal visual experience during early childhood that is often considered to be untreatable in adulthood. Recently, it has been shown that a novel dichoptic videogame-based treatment for amblyopia can improve visual function in adult patients, at least in part, by reducing inhibition of inputs from the amblyopic eye to the visual cortex. Non-invasive anodal transcranial direct current stimulation has been shown to reduce the activity of inhibitory cortical interneurons when applied to the primary motor or visual cortex. In this double-blind, sham-controlled cross-over study we tested the hypothesis that anodal transcranial direct current stimulation of the visual cortex would enhance the therapeutic effects of dichoptic videogame-based treatment. A homogeneous group of 16 young adults (mean age 22.1 ± 1.1 years) with amblyopia were studied to compare the effect of dichoptic treatment alone and dichoptic treatment combined with visual cortex direct current stimulation on measures of binocular (stereopsis) and monocular (visual acuity) visual function. The combined treatment led to greater improvements in stereoacuity than dichoptic treatment alone, indicating that direct current stimulation of the visual cortex boosts the efficacy of dichoptic videogame-based treatment. This intervention warrants further evaluation as a novel therapeutic approach for adults with amblyopia.

  15. Motivation enhances visual working memory capacity through the modulation of central cognitive processes.

    PubMed

    Sanada, Motoyuki; Ikeda, Koki; Kimura, Kenta; Hasegawa, Toshikazu

    2013-09-01

    Motivation is well known to enhance working memory (WM) capacity, but the mechanism underlying this effect remains unclear. The WM process can be divided into encoding, maintenance, and retrieval, and in a change detection visual WM paradigm, the encoding and retrieval processes can be subdivided into perceptual and central processing. To clarify which of these segments are most influenced by motivation, we measured ERPs in a change detection task with differential monetary rewards. The results showed that the enhancement of WM capacity under high motivation was accompanied by modulations of late central components but not those reflecting attentional control on perceptual inputs across all stages of WM. We conclude that the "state-dependent" shift of motivation impacted the central, rather than the perceptual functions in order to achieve better behavioral performances. Copyright © 2013 Society for Psychophysiological Research.

  16. Getting more from visual working memory: Retro-cues enhance retrieval and protect from visual interference.

    PubMed

    Souza, Alessandra S; Rerko, Laura; Oberauer, Klaus

    2016-06-01

    Visual working memory (VWM) has a limited capacity. This limitation can be mitigated by the use of focused attention: if attention is drawn to the relevant working memory content before test, performance improves (the so-called retro-cue benefit). This study tests 2 explanations of the retro-cue benefit: (a) Focused attention protects memory representations from interference by visual input at test, and (b) focusing attention enhances retrieval. Across 6 experiments using color recognition and color reproduction tasks, we varied the amount of color interference at test, and the delay between a retrieval cue (i.e., the retro-cue) and the memory test. Retro-cue benefits were larger when the memory test introduced interfering visual stimuli, showing that the retro-cue effect is in part because of protection from visual interference. However, when visual interference was held constant, retro-cue benefits were still obtained whenever the retro-cue enabled retrieval of an object from VWM but delayed response selection. Our results show that accessible information in VWM might be lost in the processes of testing memory because of visual interference and incomplete retrieval. This is not an inevitable state of affairs, though: Focused attention can be used to get the most out of VWM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  17. Sequential pictorial presentation of neural interaction in the retina. 2. The depolarizing and hyperpolarizing bipolar cells at rod terminals.

    PubMed

    Sjöstrand, F S

    2002-01-01

Each rod is connected to one depolarizing and one hyperpolarizing bipolar cell. The synaptic connections of cone processes to each bipolar cell and presynaptically to the two rod-bipolar cell synapses establish conditions for lateral interaction at this level. Thus, the cones raise the threshold for bipolar cell depolarization which is the basis for spatial brightness contrast enhancement and consequently for high visual acuity (Sjöstrand, 2001a). The cones facilitate ganglion cell depolarization by the bipolar cells and cone input prevents horizontal cell blocking of depolarization of the depolarizing bipolar cell, extending rod vision to low illumination. The combination of reduced cone input and transient hyperpolarization of the hyperpolarizing bipolar cell at onset of a light stimulus facilitates ganglion cell depolarization extensively at onset of the stimulus while no corresponding enhancement applies to the ganglion cell response at cessation of the stimulus, possibly establishing conditions for discrimination between on- vs. off-signals in the visual centre. Reduced cone input and hyperpolarization of the hyperpolarizing bipolar cell at onset of a light stimulus accounts for Granit's (1941) 'preexcitatory inhibition'. Presynaptic inhibition maintains transmitter concentration low in the synaptic gap at rod-bipolar cell and bipolar cell-ganglion cell synapses, securing proportional and amplified postsynaptic responses at these synapses. Perfect timing of variations in facilitatory and inhibitory input to the ganglion cell confines the duration of ganglion cell depolarization at onset and at cessation of a light stimulus to that of a single synaptic transmission.

  18. Shape perception simultaneously up- and downregulates neural activity in the primary visual cortex.

    PubMed

    Kok, Peter; de Lange, Floris P

    2014-07-07

    An essential part of visual perception is the grouping of local elements (such as edges and lines) into coherent shapes. Previous studies have shown that this grouping process modulates neural activity in the primary visual cortex (V1) that is signaling the local elements [1-4]. However, the nature of this modulation is controversial. Some studies find that shape perception reduces neural activity in V1 [2, 5, 6], while others report increased V1 activity during shape perception [1, 3, 4, 7-10]. Neurocomputational theories that cast perception as a generative process [11-13] propose that feedback connections carry predictions (i.e., the generative model), while feedforward connections signal the mismatch between top-down predictions and bottom-up inputs. Within this framework, the effect of feedback on early visual cortex may be either enhancing or suppressive, depending on whether the feedback signal is met by congruent bottom-up input. Here, we tested this hypothesis by quantifying the spatial profile of neural activity in V1 during the perception of illusory shapes using population receptive field mapping. We find that shape perception concurrently increases neural activity in regions of V1 that have a receptive field on the shape but do not receive bottom-up input and suppresses activity in regions of V1 that receive bottom-up input that is predicted by the shape. These effects were not modulated by task requirements. Together, these findings suggest that shape perception changes lower-order sensory representations in a highly specific and automatic manner, in line with theories that cast perception in terms of hierarchical generative models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    ERIC Educational Resources Information Center

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew; Luck, Steven J.

    2009-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. In this study, the authors tested the hypothesis that differences between the memory of a stimulus array and the perception of a…

  20. Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.

    PubMed

    Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale

    2015-10-01

    Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched NH listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Stroboscopic visual training improves information encoding in short-term memory.

    PubMed

    Appelbaum, L Gregory; Cain, Matthew S; Schroeder, Julia E; Darling, Elise F; Mitroff, Stephen R

    2012-11-01

    The visual system has developed to transform an undifferentiated and continuous flow of information into discrete and manageable representations, and this ability rests primarily on the uninterrupted nature of the input. Here we explore the impact of altering how visual information is accumulated over time by assessing how intermittent vision influences memory retention. Previous work has shown that intermittent, or stroboscopic, visual training (i.e., practicing while only experiencing snapshots of vision) can enhance visual-motor control and visual cognition, yet many questions remain unanswered about the mechanisms that are altered. In the present study, we used a partial-report memory paradigm to assess the possible changes in visual memory following training under stroboscopic conditions. In Experiment 1, the memory task was completed before and immediately after a training phase, wherein participants engaged in physical activities (e.g., playing catch) while wearing either specialized stroboscopic eyewear or transparent control eyewear. In Experiment 2, an additional group of participants underwent the same stroboscopic protocol but were delayed 24 h between training and assessment, so as to measure retention. In comparison to the control group, both stroboscopic groups (immediate and delayed retest) revealed enhanced retention of information in short-term memory, leading to better recall at longer stimulus-to-cue delays (640-2,560 ms). These results demonstrate that training under stroboscopic conditions has the capacity to enhance some aspects of visual memory, that these faculties generalize beyond the specific tasks that were trained, and that trained improvements can be maintained for at least a day.

  2. Neonatal Restriction of Tactile Inputs Leads to Long-Lasting Impairments of Cross-Modal Processing

    PubMed Central

    Röder, Brigitte; Hanganu-Opatz, Ileana L.

    2015-01-01

    Optimal behavior relies on the combination of inputs from multiple senses through complex interactions within neocortical networks. The ontogeny of this multisensory interplay is still unknown. Here, we identify critical factors that control the development of visual-tactile processing by combining in vivo electrophysiology with anatomical/functional assessment of cortico-cortical communication and behavioral investigation of pigmented rats. We demonstrate that the transient reduction of unimodal (tactile) inputs during a short period of neonatal development prior to the first cross-modal experience affects feed-forward subcortico-cortical interactions by attenuating the cross-modal enhancement of evoked responses in the adult primary somatosensory cortex. Moreover, the neonatal manipulation alters cortico-cortical interactions by decreasing the cross-modal synchrony and directionality in line with the sparsification of direct projections between primary somatosensory and visual cortices. At the behavioral level, these functional and structural deficits resulted in lower cross-modal matching abilities. Thus, neonatal unimodal experience during defined developmental stages is necessary for setting up the neuronal networks of multisensory processing. PMID:26600123

  3. VISIDEP™: visual image depth enhancement by parallax induction

    NASA Astrophysics Data System (ADS)

    Jones, Edwin R.; McLaurin, A. P.; Cathey, LeConte

    1984-05-01

The usual descriptions of depth perception have traditionally required the simultaneous presentation of disparate views presented to separate eyes with the concomitant demand that the resulting binocular parallax be horizontally aligned. Our work suggests that the visual input information is compared in a short-term memory buffer which permits the brain to compute depth as it is normally perceived. However, the mechanism utilized is also capable of receiving and processing the stereographic information even when it is received monocularly or when identical inputs are simultaneously fed to both eyes. We have also found that the restriction to horizontally displaced images is not a necessary requirement and that improvement in image acceptability is achieved by the use of vertical parallax. Use of these ideas permits the presentation of three-dimensional scenes on flat screens in full color without the encumbrance of glasses or other viewing aids.

  4. Coupling Visualization, Simulation, and Deep Learning for Ensemble Steering of Complex Energy Models: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potter, Kristin C; Brunhart-Lupo, Nicholas J; Bush, Brian W

We have developed a framework for the exploration, design, and planning of energy systems that combines interactive visualization with machine-learning-based approximations of simulations through a general-purpose dataflow API. Our system provides a visual interface allowing users to explore an ensemble of energy simulations representing a subset of the complex input parameter space, and to spawn new simulations to 'fill in' input regions corresponding to new energy system scenarios. Unfortunately, many energy simulations are far too slow to provide interactive responses. To support interactive feedback, we are developing reduced-form models via machine learning techniques, which provide statistically sound estimates of the full simulations at a fraction of the computational cost and which are used as proxies for the full-form models. Fast computation and an agile dataflow enhance engagement with energy simulations, and allow researchers to better allocate computational resources to capture informative relationships within the system and provide a low-cost method for validating and quality-checking large-scale modeling efforts.
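The reduced-form-model idea can be sketched minimally: sample the expensive simulation at a few input points, fit a cheap proxy, and answer interactive queries from the proxy instead. Here simple polynomial interpolation stands in for the machine-learning techniques the authors describe; the stand-in simulation and all values are illustrative assumptions.

```python
# Toy surrogate-model workflow: sample an "expensive" simulation, fit a
# cheap proxy, and query the proxy interactively. Everything here is a
# hypothetical stand-in for the energy models described above.

def expensive_simulation(x):
    """Stand-in for a slow full-form energy-system model."""
    return 3.0 * x * x + 2.0 * x + 1.0

def fit_quadratic(x, y):
    """Exact quadratic through three sample points (Lagrange form),
    a minimal substitute for a trained reduced-form model."""
    (x0, x1, x2), (y0, y1, y2) = x, y
    def surrogate(t):
        return (y0 * (t - x1) * (t - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (t - x0) * (t - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (t - x0) * (t - x1) / ((x2 - x0) * (x2 - x1)))
    return surrogate

# Run the expensive model only at a few sampled inputs...
xs = [0.0, 1.0, 2.0]
ys = [expensive_simulation(x) for x in xs]
# ...then serve interactive queries from the cheap proxy.
surrogate = fit_quadratic(xs, ys)
```

In the real system the proxy is only a statistical estimate with quantified error, which is why the framework keeps spawning full simulations to refine under-sampled input regions.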

  5. Resilience to the contralateral visual field bias as a window into object representations

    PubMed Central

    Garcea, Frank E.; Kristensen, Stephanie; Almeida, Jorge; Mahon, Bradford Z.

    2016-01-01

    Viewing images of manipulable objects elicits differential blood oxygen level-dependent (BOLD) contrast across parietal and dorsal occipital areas of the human brain that support object-directed reaching, grasping, and complex object manipulation. However, it is unknown which object-selective regions of parietal cortex receive their principal inputs from the ventral object-processing pathway and which receive their inputs from the dorsal object-processing pathway. Parietal areas that receive their inputs from the ventral visual pathway, rather than from the dorsal stream, will have inputs that are already filtered through object categorization and identification processes. This predicts that parietal regions that receive inputs from the ventral visual pathway should exhibit object-selective responses that are resilient to contralateral visual field biases. To test this hypothesis, adult participants viewed images of tools and animals that were presented to the left or right visual fields during functional magnetic resonance imaging (fMRI). We found that the left inferior parietal lobule showed robust tool preferences independently of the visual field in which tool stimuli were presented. In contrast, a region in posterior parietal/dorsal occipital cortex in the right hemisphere exhibited an interaction between visual field and category: tool-preferences were strongest contralateral to the stimulus. These findings suggest that action knowledge accessed in the left inferior parietal lobule operates over inputs that are abstracted from the visual input and contingent on analysis by the ventral visual pathway, consistent with its putative role in supporting object manipulation knowledge. PMID:27160998

  6. Face adaptation improves gender discrimination.

    PubMed

    Yang, Hua; Shen, Jianhong; Chen, Juan; Fang, Fang

    2011-01-01

    Adaptation to a visual pattern can alter the sensitivities of neuronal populations encoding the pattern. However, the functional roles of adaptation, especially in high-level vision, are still equivocal. In the present study, we performed three experiments to investigate whether face gender adaptation could affect gender discrimination. Experiments 1 and 2 revealed that adapting to a male/female face could selectively enhance discrimination for male/female faces. Experiment 3 showed that the discrimination enhancement induced by face adaptation could transfer across a substantial change in three-dimensional face viewpoint. These results provide further evidence that, similar to low-level vision, adaptation in high-level vision can calibrate the visual system to current inputs of complex shapes (i.e., faces) and improve discrimination at the adapted characteristic. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. In vivo Visuotopic Brain Mapping with Manganese-Enhanced MRI and Resting-State Functional Connectivity MRI

    PubMed Central

    Chan, Kevin C.; Fan, Shu-Juan; Chan, Russell W.; Cheng, Joe S.; Zhou, Iris Y.; Wu, Ed X.

    2014-01-01

    Rodents are an increasingly important model for understanding the mechanisms of development, plasticity, functional specialization and disease in the visual system. However, limited tools have been available for assessing the structural and functional connectivity of the visual brain network globally, in vivo and longitudinally. There are also ongoing debates on whether functional brain connectivity directly reflects structural brain connectivity. In this study, we explored the feasibility of manganese-enhanced MRI (MEMRI) via 3 different routes of Mn2+ administration for visuotopic brain mapping and understanding of physiological transport in normal and visually deprived adult rats. In addition, resting-state functional connectivity MRI (RSfcMRI) was performed to evaluate the intrinsic functional network and structural-functional relationships in the corresponding anatomical visual brain connections traced by MEMRI. Upon intravitreal, subcortical, and intracortical Mn2+ injection, different topographic and layer-specific Mn enhancement patterns could be revealed in the visual cortex and subcortical visual nuclei along retinal, callosal, cortico-subcortical, transsynaptic and intracortical horizontal connections. Loss of visual input upon monocular enucleation in adult rats appeared to reduce interhemispheric polysynaptic Mn2+ transfer but not intra- or inter-hemispheric monosynaptic Mn2+ transport after Mn2+ injection into the visual cortex. In normal adults, both structural and functional connectivity by MEMRI and RSfcMRI was stronger interhemispherically between bilateral primary/secondary visual cortex (V1/V2) transition zones (TZ) than between V1/V2 TZ and other cortical nuclei. Intrahemispherically, structural and functional connectivity was stronger between visual cortex and subcortical visual nuclei than between visual cortex and other subcortical nuclei.
The current results demonstrate the sensitivity of MEMRI and RSfcMRI for assessing the neuroarchitecture, neurophysiology and structural-functional relationships of the visual brain in vivo. These methods hold great potential for effective monitoring and understanding of the basic anatomical and functional connections in the visual system during development, plasticity, disease, pharmacological interventions and genetic modifications in future studies. PMID:24394694

  8. Priming and the guidance by visual and categorical templates in visual search.

    PubMed

    Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search based on either visual or categorical information, while controlling for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, once priming is controlled for, categorical- and visual-based templates enhance search guidance to a similar degree.

  9. A novel false color mapping model-based fusion method of visual and infrared images

    NASA Astrophysics Data System (ADS)

    Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu

    2013-12-01

    A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. Firstly, a set of daytime color reference images is analyzed and a false color mapping principle is proposed according to human visual and emotional habits. That is, object colors should remain invariant after color mapping operations, differences between infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. Then a novel nonlinear color mapping model is given by introducing the geometric average of the gray levels of the input visual and infrared images together with a weighted average algorithm. To determine the control parameters in the mapping model, boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new method achieves a near-natural appearance of the fused image, enhancing color contrast and highlighting bright infrared objects when compared with the traditional TNO algorithm. Moreover, it has low complexity and is easy to realize in real-time processing, so it is quite suitable for nighttime imaging apparatus.
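    The abstract names two ingredients of the mapping model — a geometric average of the visual and infrared gray levels, and a weighted average — without giving the full model. The toy sketch below combines those two ingredients into a false color image; the channel assignments and the weight `w` are assumptions for illustration, not the authors' exact mapping.

    ```python
    import numpy as np

    def fuse_false_color(vis, ir, w=0.5):
        """Toy visual/IR fusion: geometric-mean luminance base plus a
        difference-driven color shift (illustrative, not the paper's model)."""
        vis = np.asarray(vis, dtype=float) / 255.0
        ir = np.asarray(ir, dtype=float) / 255.0
        base = np.sqrt(vis * ir)                 # geometric average of gray levels
        diff = ir - vis                          # enhanced IR/visual differences
        r = np.clip(base + w * np.clip(diff, 0, 1), 0, 1)   # hot objects push red
        b = np.clip(base + w * np.clip(-diff, 0, 1), 0, 1)  # visual-only detail push blue
        g = np.clip((vis + ir) / 2.0, 0, 1)      # weighted-average channel
        return np.dstack([r, g, b])
    ```

    With this convention, pixels that are bright in the infrared but dark in the visible band (e.g., warm objects at night) shift toward red, which matches the stated principle of enhancing infrared/visual differences.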

  10. Learning a No-Reference Quality Assessment Model of Enhanced Images With Big Data.

    PubMed

    Gu, Ke; Tao, Dacheng; Qiao, Jun-Fei; Lin, Weisi

    2018-04-01

    In this paper, we investigate the problem of image quality assessment (IQA) and enhancement via machine learning. This issue has long attracted a wide range of attention in the computational intelligence and image processing communities, since, for many practical applications, e.g., object detection and recognition, raw images usually need to be appropriately enhanced to raise the visual quality (e.g., visibility and contrast). In fact, proper enhancement can noticeably improve the quality of input images, even beyond that of the originally captured images, which are generally thought to be of the best quality. In this paper, we present two main contributions. The first contribution is a new no-reference (NR) IQA model. Given an image, our quality measure first extracts 17 features through analysis of contrast, sharpness, brightness and more, and then yields a measure of visual quality using a regression module, which is learned with big-data training samples far larger than the relevant image data sets. The results of experiments on nine data sets validate the superiority and efficiency of our blind metric compared with typical state-of-the-art full-reference, reduced-reference and NR IQA methods. The second contribution is a robust image enhancement framework established on the basis of quality optimization. For an input image, guided by the proposed NR-IQA measure, we conduct histogram modification to successively rectify image brightness and contrast to a proper level. Thorough tests demonstrate that our framework can effectively enhance natural images, low-contrast images, low-light images, and dehazed images. The source code will be released at https://sites.google.com/site/guke198701/publications.
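    The two-stage pipeline described — feature extraction followed by a learned regression onto quality scores — can be illustrated with a heavily reduced sketch: three toy features (brightness, contrast, a gradient-based sharpness proxy) stand in for the paper's 17, and ordinary least squares stands in for its regression module. All function names and feature choices here are hypothetical.

    ```python
    import numpy as np

    def extract_features(img):
        """Three toy quality features (the paper uses 17): brightness,
        contrast, and a gradient-magnitude sharpness proxy."""
        img = np.asarray(img, dtype=float)
        gy, gx = np.gradient(img)
        return np.array([img.mean(), img.std(), np.hypot(gx, gy).mean()])

    def train_quality_model(images, scores):
        """Least-squares regression from features to subjective scores
        (a stand-in for the paper's big-data-trained regression module)."""
        X = np.array([extract_features(im) for im in images])
        X = np.hstack([X, np.ones((len(X), 1))])        # bias column
        w, *_ = np.linalg.lstsq(X, np.asarray(scores, dtype=float), rcond=None)
        return w

    def predict_quality(img, w):
        """Blind (no-reference) quality score for a single image."""
        return float(np.append(extract_features(img), 1.0) @ w)
    ```

    The enhancement framework then becomes an optimization loop: adjust brightness/contrast via histogram modification and keep the adjustment that maximizes `predict_quality`.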

  11. Eyes Matched to the Prize: The State of Matched Filters in Insect Visual Circuits.

    PubMed

    Kohn, Jessica R; Heath, Sarah L; Behnia, Rudy

    2018-01-01

    Confronted with an ever-changing visual landscape, animals must be able to detect relevant stimuli and translate this information into behavioral output. A visual scene contains an abundance of information: to interpret the entirety of it would be uneconomical. To optimally perform this task, neural mechanisms exist to enhance the detection of important features of the sensory environment while simultaneously filtering out irrelevant information. This can be accomplished by using a circuit design that implements specific "matched filters" that are tuned to relevant stimuli. Following this rule, the well-characterized visual systems of insects have evolved to streamline feature extraction on both a structural and functional level. Here, we review examples of specialized visual microcircuits for vital behaviors across insect species, including feature detection, escape, and estimation of self-motion. Additionally, we discuss how these microcircuits are modulated to weigh relevant input with respect to different internal and behavioral states.

  12. Figure-ground modulation in awake primate thalamus.

    PubMed

    Jones, Helen E; Andolina, Ian M; Shipp, Stewart D; Adams, Daniel L; Cudeiro, Javier; Salt, Thomas E; Sillito, Adam M

    2015-06-02

    Figure-ground discrimination refers to the perception of an object, the figure, against a nondescript background. Neural mechanisms of figure-ground detection have been associated with feedback interactions between higher centers and primary visual cortex and have been held to index the effect of global analysis on local feature encoding. Here, in recordings from visual thalamus of alert primates, we demonstrate a robust enhancement of neuronal firing when the figure, as opposed to the ground, component of a motion-defined figure-ground stimulus is located over the receptive field. In this paradigm, visual stimulation of the receptive field and its near environs is identical across both conditions, suggesting the response enhancement reflects higher integrative mechanisms. It thus appears that cortical activity generating the higher-order percept of the figure is simultaneously reentered into the lowest level that is anatomically possible (the thalamus), so that the signature of the evolving representation of the figure is imprinted on the input driving it in an iterative process.

  14. Asymmetric temporal integration of layer 4 and layer 2/3 inputs in visual cortex.

    PubMed

    Hang, Giao B; Dan, Yang

    2011-01-01

    Neocortical neurons in vivo receive concurrent synaptic inputs from multiple sources, including feedforward, horizontal, and feedback pathways. Layer 2/3 of the visual cortex receives feedforward input from layer 4 and horizontal input from layer 2/3. Firing of the pyramidal neurons, which carries the output to higher cortical areas, depends critically on the interaction of these pathways. Here we examined synaptic integration of inputs from layer 4 and layer 2/3 in rat visual cortical slices. We found that the integration is sublinear and temporally asymmetric, with larger responses if layer 2/3 input preceded layer 4 input. The sublinearity depended on inhibition, and the asymmetry was largely attributable to the difference between the two inhibitory inputs. Interestingly, the asymmetric integration was specific to pyramidal neurons, and it strongly affected their spiking output. Thus via cortical inhibition, the temporal order of activation of layer 2/3 and layer 4 pathways can exert powerful control of cortical output during visual processing.

  15. Resolution enhancement of wide-field interferometric microscopy by coupled deep autoencoders.

    PubMed

    Işil, Çağatay; Yorulmaz, Mustafa; Solmaz, Berkan; Turhan, Adil Burak; Yurdakul, Celalettin; Ünlü, Selim; Ozbay, Ekmel; Koç, Aykut

    2018-04-01

    Wide-field interferometric microscopy is a highly sensitive, label-free, and low-cost biosensing imaging technique capable of visualizing individual biological nanoparticles such as viral pathogens and exosomes. However, further resolution enhancement is necessary to increase detection and classification accuracy of subdiffraction-limited nanoparticles. In this study, we propose a deep-learning approach, based on coupled deep autoencoders, to improve resolution of images of L-shaped nanostructures. During training, our method utilizes microscope image patches and their corresponding manual truth image patches in order to learn the transformation between them. Following training, the designed network reconstructs denoised and resolution-enhanced image patches for unseen input.

  16. Comparison of Text-Based and Visual-Based Programming Input Methods for First-Time Learners

    ERIC Educational Resources Information Center

    Saito, Daisuke; Washizaki, Hironori; Fukazawa, Yoshiaki

    2017-01-01

    Aim/Purpose: When learning to program, both text-based and visual-based input methods are common. However, it is unclear which method is more appropriate for first-time learners (first learners). Background: The differences in the learning effect between text-based and visual-based input methods for first learners are compared using a…

  17. Neuronal connectome of a sensory-motor circuit for visual navigation

    PubMed Central

    Randel, Nadine; Asadulina, Albina; Bezares-Calderón, Luis A; Verasztó, Csaba; Williams, Elizabeth A; Conzelmann, Markus; Shahidi, Réza; Jékely, Gáspár

    2014-01-01

    Animals use spatial differences in environmental light levels for visual navigation; however, how light inputs are translated into coordinated motor outputs remains poorly understood. Here we reconstruct the neuronal connectome of a four-eye visual circuit in the larva of the annelid Platynereis using serial-section transmission electron microscopy. In this 71-neuron circuit, photoreceptors connect via three layers of interneurons to motorneurons, which innervate trunk muscles. By combining eye ablations with behavioral experiments, we show that the circuit compares light on either side of the body and stimulates body bending upon left-right light imbalance during visual phototaxis. We also identified an interneuron motif that enhances sensitivity to different light intensity contrasts. The Platynereis eye circuit has the hallmarks of a visual system, including spatial light detection and contrast modulation, illustrating how image-forming eyes may have evolved via intermediate stages contrasting only a light and a dark field during a simple visual task. DOI: http://dx.doi.org/10.7554/eLife.02730.001 PMID:24867217

  18. Development of adaptive sensorimotor control in infant sitting posture.

    PubMed

    Chen, Li-Chiou; Jeka, John; Clark, Jane E

    2016-03-01

    A reliable and adaptive relationship between action and perception is necessary for postural control. Our understanding of how this adaptive sensorimotor control develops during infancy is very limited. This study examines the dynamic visual-postural relationship during early development. Twenty healthy infants were divided into 4 developmental groups (each n=5): sitting onset, standing alone, walking onset, and 1-year post-walking. During the experiment, the infant sat independently in a virtual moving-room in which anterior-posterior oscillations of visual motion were presented using a sum-of-sines technique with five input frequencies (from 0.12 to 1.24 Hz). Infants were tested in five conditions that varied in the amplitude of visual motion (from 0 to 8.64 cm). Gain and phase responses of infants' postural sway were analyzed. Our results showed that infants, from a few months post-sitting to 1 year post-walking, were able to control their sitting posture in response to various frequency and amplitude properties of the visual motion. Infants showed an adult-like inverted-U pattern for the frequency response to visual inputs with the highest gain at 0.52 and 0.76 Hz. As the visual motion amplitude increased, the gain response decreased. For the phase response, an adult-like frequency-dependent pattern was observed in all amplitude conditions for the experienced walkers. Newly sitting infants, however, showed variable postural behavior and did not systematically respond to the visual stimulus. Our results suggest that visual-postural entrainment and sensory re-weighting are fundamental processes that are present within a few months post-sitting. Sensorimotor refinement during early postural development may result from the interactions of improved self-motion control and enhanced perceptual abilities. Copyright © 2016 Elsevier B.V. All rights reserved.
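    The gain and phase analysis used with a sum-of-sines stimulus amounts to projecting the stimulus and sway signals onto a complex exponential at each input frequency. A minimal sketch of that computation follows; the study's exact analysis pipeline is not given in the abstract, so this is an assumed, generic form.

    ```python
    import numpy as np

    def gain_phase(stimulus, response, freq, fs):
        """Gain and phase of `response` relative to the `freq`-Hz component
        of `stimulus`, via projection onto a complex exponential."""
        t = np.arange(len(stimulus)) / fs
        basis = np.exp(-2j * np.pi * freq * t)
        s = (stimulus * basis).mean()       # complex amplitude of the stimulus
        r = (response * basis).mean()       # complex amplitude of the sway
        return abs(r) / abs(s), np.angle(r / s)
    ```

    Sway that is attenuated and lags the visual motion shows a gain below 1 and a negative phase; repeating the projection at each of the five input frequencies yields the frequency-response curves described above.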

  19. Sensory experience modifies feature map relationships in visual cortex

    PubMed Central

    Cloherty, Shaun L; Hughes, Nicholas J; Hietanen, Markus A; Bhagavatula, Partha S

    2016-01-01

    The extent to which brain structure is influenced by sensory input during development is a critical but controversial question. A paradigmatic system for studying this is the mammalian visual cortex. Maps of orientation preference (OP) and ocular dominance (OD) in the primary visual cortex of ferrets, cats and monkeys can be individually changed by altered visual input. However, the spatial relationship between OP and OD maps has appeared immutable. Using a computational model we predicted that biasing the visual input to orthogonal orientation in the two eyes should cause a shift of OP pinwheels towards the border of OD columns. We then confirmed this prediction by rearing cats wearing orthogonally oriented cylindrical lenses over each eye. Thus, the spatial relationship between OP and OD maps can be modified by visual experience, revealing a previously unknown degree of brain plasticity in response to sensory input. DOI: http://dx.doi.org/10.7554/eLife.13911.001 PMID:27310531

  20. Visual Occlusion Decreases Motion Sickness in a Flight Simulator.

    PubMed

    Ishak, Shaziela; Bubka, Andrea; Bonato, Frederick

    2018-05-01

    Sensory conflict theories of motion sickness (MS) assert that symptoms may result when incoming sensory inputs (e.g., visual and vestibular) contradict each other. Logic suggests that attenuating input from one sense may reduce conflict and hence lessen MS symptoms. In the current study, it was hypothesized that attenuating visual input by blocking light entering the eye would reduce MS symptoms in a motion provocative environment. Participants sat inside an aircraft cockpit mounted onto a motion platform that simultaneously pitched, rolled, and heaved in two conditions. In the occluded condition, participants wore "blackout" goggles and closed their eyes to block light. In the control condition, participants opened their eyes and had full view of the cockpit's interior. Participants completed separate Simulator Sickness Questionnaires before and after each condition. The posttreatment total Simulator Sickness Questionnaires and subscores for nausea, oculomotor, and disorientation in the control condition were significantly higher than those in the occluded condition. These results suggest that under some conditions attenuating visual input may delay the onset of MS or weaken the severity of symptoms. Eliminating visual input may reduce visual/nonvisual sensory conflict by weakening the influence of the visual channel, which is consistent with the sensory conflict theory of MS.

  1. Guided filtering for solar image/video processing

    NASA Astrophysics Data System (ADS)

    Xu, Long; Yan, Yihua; Cheng, Jun

    2017-06-01

    A new image enhancement algorithm employing guided filtering is proposed in this work for the enhancement of solar images and videos, so that users can easily identify important fine structures embedded in the recorded images/movies of solar observations. The proposed algorithm can efficiently remove image noise, including Gaussian and impulse noise. Meanwhile, it can further highlight fibrous structures on/beyond the solar disk. These fibrous structures clearly demonstrate the progress of solar flares, prominences, coronal mass ejections, magnetic fields, and so on. The experimental results show that the proposed algorithm significantly enhances the visual quality of solar images relative to both the original input and several classical image enhancement algorithms, thus facilitating easier identification of interesting solar burst activities in recorded images/movies.
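    The abstract does not detail the algorithm, but the guided filter it builds on is a standard edge-preserving smoother (He et al.). A minimal self-guided version is sketched below, with a detail-boosting step added as an assumed illustration of how smoothing plus residual amplification can highlight fine fibrous structures; the `enhance_details` function and its parameters are hypothetical.

    ```python
    import numpy as np

    def box_filter(img, r):
        """Mean over a (2r+1)x(2r+1) window, shrinking at the borders."""
        h, w = img.shape
        out = np.empty((h, w))
        for y in range(h):
            y0, y1 = max(y - r, 0), min(y + r + 1, h)
            for x in range(w):
                x0, x1 = max(x - r, 0), min(x + r + 1, w)
                out[y, x] = img[y0:y1, x0:x1].mean()
        return out

    def guided_filter(I, p, r, eps):
        """Standard guided filter: fit a local linear model p ~ a*I + b."""
        mean_I, mean_p = box_filter(I, r), box_filter(p, r)
        cov_Ip = box_filter(I * p, r) - mean_I * mean_p
        var_I = box_filter(I * I, r) - mean_I ** 2
        a = cov_Ip / (var_I + eps)          # eps controls edge preservation
        b = mean_p - a * mean_I
        return box_filter(a, r) * I + box_filter(b, r)

    def enhance_details(img, r=2, eps=0.5, boost=2.0):
        """Smooth with the self-guided filter, then amplify the residual
        (the fine structures) -- an assumed illustration of the idea."""
        base = guided_filter(img, img, r, eps)
        return base + boost * (img - base)
    ```

    Because the filter's local linear model follows strong edges in the guidance image, amplifying the residual boosts fine texture without ringing at the solar disk's limb, which is the property that makes this family of filters attractive here.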

  2. Stream-related preferences of inputs to the superior colliculus from areas of dorsal and ventral streams of mouse visual cortex.

    PubMed

    Wang, Quanxin; Burkhalter, Andreas

    2013-01-23

    Previous studies of intracortical connections in mouse visual cortex have revealed two subnetworks that resemble the dorsal and ventral streams in primates. Although calcium imaging studies have shown that many areas of the ventral stream have high spatial acuity whereas areas of the dorsal stream are highly sensitive for transient visual stimuli, there are some functional inconsistencies that challenge a simple grouping into "what/perception" and "where/action" streams known in primates. The superior colliculus (SC) is a major center for processing of multimodal sensory information and the motor control of orienting the eyes, head, and body. Visual processing is performed in superficial layers, whereas premotor activity is generated in deep layers of the SC. Because the SC is known to receive input from visual cortex, we asked whether the projections from 10 visual areas of the dorsal and ventral streams terminate in differential depth profiles within the SC. We found that inputs from primary visual cortex are by far the strongest. Projections from the ventral stream were substantially weaker, whereas the sparsest input originated from areas of the dorsal stream. Importantly, we found that ventral stream inputs terminated in superficial layers, whereas dorsal stream inputs tended to be patchy and either projected equally to superficial and deep layers or strongly preferred deep layers. The results suggest that the anatomically defined ventral and dorsal streams contain areas that belong to distinct functional systems, specialized for the processing of visual information and visually guided action, respectively.

  3. The effects of mechanical transparency on adjustment to a complex visuomotor transformation at early and late working age.

    PubMed

    Heuer, Herbert; Hegele, Mathias

    2010-12-01

    Mechanical tools are transparent in the sense that their input-output relations can be derived from their perceptible characteristics. Modern technology creates more and more tools that lack mechanical transparency, such as in the control of the position of a cursor by means of a computer mouse or some other input device. We inquired whether an enhancement of transparency by means of presenting the shaft of a virtual sliding lever, which governed the transformation of hand position into cursor position, supports performance of aimed cursor movement and the acquisition of an internal model of the transformation in both younger and older adults. Enhanced transparency resulted in an improvement of visual closed-loop control in terms of movement time and curvature of cursor paths. The movement-time improvement was more pronounced at older working age than at younger working age, so that the enhancement of transparency can serve as a means to mitigate age-related declines in performance. Benefits for the acquisition of an internal model of the transformation and of explicit knowledge were absent. Thus, open-loop control in this task did not profit from enhanced mechanical transparency. These findings strongly suggest that environmental support of transparency of the effects of input devices on controlled systems might be a powerful tool to support older users. Enhanced transparency may also improve simulator-based training by increasing motivation, even if training benefits do not transfer to situations without enhanced transparency. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  4. The processing of images of biological threats in visual short-term memory.

    PubMed

    Quinlan, Philip T; Yue, Yue; Cohen, Dale J

    2017-08-30

    The idea that there is enhanced memory for negatively, emotionally charged pictures was examined. Performance was measured under rapid, serial visual presentation (RSVP) conditions in which, on every trial, a sequence of six photo-images was presented. Briefly after the offset of the sequence, two alternative images (a target and a foil) were presented and participants attempted to choose which image had occurred in the sequence. Images were of threatening and non-threatening cats and dogs. The target depicted either an animal expressing an emotion distinct from the other images, or the sequences contained only images depicting the same emotional valence. Enhanced memory was found for targets that differed in emotional valence from the other sequence images, compared to targets that expressed the same emotional valence. Further controls in stimulus selection were then introduced and the same emotional distinctiveness effect obtained. In ruling out possible visual and attentional accounts of the data, an informal dual route topic model is discussed. This places emphasis on how visual short-term memory reveals a sensitivity to the emotional content of the input as it unfolds over time. Items that present with a distinctive emotional content stand out in memory. © 2017 The Author(s).

  5. Haptic over visual information in the distribution of visual attention after tool-use in near and far space.

    PubMed

    Park, George D; Reed, Catherine L

    2015-10-01

    Despite attentional prioritization for grasping space near the hands, tool-use appears to transfer attentional bias to the tool's end/functional part. The contributions of haptic and visual inputs to attentional distribution along a tool were investigated as a function of tool-use in near (Experiment 1) and far (Experiment 2) space. Visual attention was assessed with a 50/50, go/no-go, target discrimination task, while a tool was held next to targets appearing near the tool-occupied hand or tool-end. Target response times (RTs) and sensitivity (d-prime) were measured at target locations, before and after functional tool practice for three conditions: (1) open-tool: tool-end visible (visual + haptic inputs), (2) hidden-tool: tool-end visually obscured (haptic input only), and (3) short-tool: stick missing tool's length/end (control condition: hand occupied but no visual/haptic input). In near space, both open- and hidden-tool groups showed a tool-end, attentional bias (faster RTs toward tool-end) before practice; after practice, RTs near the hand improved. In far space, the open-tool group showed no bias before practice; after practice, target RTs near the tool-end improved. However, the hidden-tool group showed a consistent tool-end bias despite practice. Lack of short-tool group results suggested that hidden-tool group results were specific to haptic inputs. In conclusion, (1) allocation of visual attention along a tool due to tool practice differs in near and far space, and (2) visual attention is drawn toward the tool's end even when visually obscured, suggesting haptic input provides sufficient information for directing attention along the tool.

  6. Addition of visual noise boosts evoked potential-based brain-computer interface.

    PubMed

    Xie, Jun; Xu, Guanghua; Wang, Jing; Zhang, Sicong; Zhang, Feng; Li, Yeping; Han, Chengcheng; Li, Lili

    2014-05-14

    Although noise has a proven beneficial role in brain functions, there have been no attempts to exploit the stochastic resonance effect in neural engineering applications, especially in research on brain-computer interfaces (BCIs). In our study, a steady-state motion visual evoked potential (SSMVEP)-based BCI with periodic visual stimulation plus moderate spatiotemporal noise achieved better offline and online performance due to enhancement of periodic components in brain responses, which was accompanied by suppression of high harmonics. Offline results exhibited a bell-shaped, resonance-like profile, and online performance improvements of 7-36% were achieved when identical visual noise was adopted for different stimulation frequencies. Using neural encoding modeling, these phenomena can be explained as noise-induced input-output synchronization in human sensory systems, which commonly possess a low-pass property. Our work demonstrates that noise could boost BCIs in addressing human needs.
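
    The noise-induced input-output synchronization invoked above can be illustrated with a minimal stochastic-resonance toy (not the authors' SSMVEP pipeline; threshold, frequency, and noise levels here are all illustrative): a subthreshold periodic drive crosses a hard threshold only when helped by a moderate amount of noise, so the output tracks the drive best at intermediate noise.

```python
import numpy as np

def output_sync(noise_sd, thresh=1.2, freq=5.0, fs=500.0, dur=20.0, seed=0):
    """Synchronization of a hard-threshold detector's output with a
    subthreshold periodic drive, as a function of added noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, dur, 1.0 / fs)
    drive = np.sin(2 * np.pi * freq * t)  # amplitude 1 < thresh: subthreshold alone
    spikes = (drive + rng.normal(0.0, noise_sd, t.size)) > thresh
    # Power of the binary output at the drive frequency (resonance index)
    return np.abs(np.sum(spikes * np.exp(-2j * np.pi * freq * t))) / t.size

sync = {sd: output_sync(sd) for sd in (0.01, 0.5, 50.0)}
# Moderate noise transmits the subthreshold rhythm; too little or too much does not.
```

    Sweeping `noise_sd` more finely traces out the bell-shaped profile the abstract describes.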

  7. FastDart: a fast, accurate and friendly version of the DART code.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rest, J.; Taboada, H.

    2000-11-08

    A new enhanced, visual version of the DART code is presented. DART is a mechanistic-model-based code developed for the performance calculation and assessment of aluminum dispersion fuel. The major features of this new version are a new, time-saving calculation routine able to run on a PC, a friendly visual input interface, and a plotting facility. This version, available for silicide and U-Mo fuels, adds to the classical accuracy of DART models for fuel performance prediction a faster execution and visual interfaces. It is part of a collaboration agreement between ANL and CNEA in the area of Low Enriched Uranium Advanced Fuels, held by the Implementation Arrangement for Technical Exchange and Cooperation in the Area of Peaceful Uses of Nuclear Energy.

  8. Enhanced visualization of abnormalities in digital-mammographic images

    NASA Astrophysics Data System (ADS)

    Young, Susan S.; Moore, William E.

    2002-05-01

    This paper describes two new presentation methods that are intended to improve the ability of radiologists to visualize abnormalities in mammograms by enhancing the appearance of the breast parenchyma pattern relative to the fatty-tissue surroundings. The first method, referred to as mountain-view, is obtained via multiscale edge decomposition through filter banks. The image is displayed in a multiscale edge domain that causes the image to have a topographic-like appearance. The second method displays the image in the intensity domain and is referred to as contrast-enhancement presentation. The input image is first passed through a decomposition filter bank to produce a filtered output (Id). The image at the lowest resolution is processed using a LUT (look-up table) to produce a tone-scaled image (I'). The LUT is designed to optimally map the code value range corresponding to the parenchyma pattern in the mammographic image into the dynamic range of the output medium. The algorithm uses a contrast weight control mechanism to produce the desired weight factors to enhance the edge information corresponding to the parenchyma pattern. The output image is formed using a reconstruction filter bank through I' and enhanced Id.
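
    A single-level caricature of the contrast-enhancement path may help fix ideas (the 3x3 box filter, square-root LUT, and detail weight are invented stand-ins; the paper uses a full multiscale filter bank with a designed LUT):

```python
import numpy as np

def enhance(img, detail_weight=2.0, lut=np.sqrt):
    """Split the image into a smoothed base band and an edge (detail) band,
    tone-scale the base with a LUT, amplify the detail band, and recombine."""
    pad = np.pad(img, 1, mode="edge")
    base = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0  # 3x3 box blur
    detail = img - base
    return lut(base) + detail_weight * detail

flat = enhance(np.full((4, 4), 0.25))  # no edges: the detail band is zero,
                                       # so the output is just the tone-scaled base
```
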

  9. Sound effects: Multimodal input helps infants find displaced objects.

    PubMed

    Shinskey, Jeanne L

    2017-09-01

    Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion, suggesting auditory input is more salient in the absence of visual input. This article addresses how audiovisual input affects 10-month-olds' search for displaced objects. In AB tasks, infants who previously retrieved an object at A subsequently fail to find it after it is displaced to B, especially following a delay between hiding and retrieval. Experiment 1 manipulated auditory input by keeping the hidden object audible versus silent, and visual input by presenting the delay in the light versus dark. Infants succeeded more at B with audible than silent objects and, unexpectedly, more after delays in the light than dark. Experiment 2 presented both the delay and search phases in darkness. The unexpected light-dark difference disappeared. Across experiments, the presence of auditory input helped infants find displaced objects, whereas the absence of visual input did not. Sound might help by strengthening object representation, reducing memory load, or focusing attention. This work provides new evidence on when bimodal input aids object processing, corroborates claims that audiovisual processing improves over the first year of life, and contributes to multisensory approaches to studying cognition. Statement of contribution What is already known on this subject Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion. This suggests they find auditory input more salient in the absence of visual input in simple search tasks. After 9 months, infants' object processing appears more sensitive to multimodal (e.g., audiovisual) input. What does this study add? This study tested how audiovisual input affects 10-month-olds' search for an object displaced in an AB task. Sound helped infants find displaced objects in both the presence and absence of visual input. 
Object processing becomes more sensitive to bimodal input as multisensory functions develop across the first year. © 2016 The British Psychological Society.

  10. The primary visual cortex in the neural circuit for visual orienting

    NASA Astrophysics Data System (ADS)

    Zhaoping, Li

    The primary visual cortex (V1) is traditionally viewed as remote from influencing the brain's motor outputs. However, V1 provides the most abundant cortical inputs directly to the sensory layers of the superior colliculus (SC), a midbrain structure that commands visual orienting behaviors such as shifting gaze and turning the head. I will show physiological, anatomical, and behavioral data suggesting that V1 transforms visual input into a saliency map to guide a class of visual orienting that is reflexive or involuntary. In particular, V1 receives a retinotopic map of visual features, such as orientation, color, and motion direction of local visual inputs; local interactions between V1 neurons perform a local-to-global computation to arrive at a saliency map that highlights conspicuous visual locations by higher V1 responses. The conspicuous locations are usually, but not always, where visual input statistics change. The population of V1 outputs to SC, which is also retinotopic, enables SC to locate, by lateral inhibition between SC neurons, the most salient location as the saccadic target. Experimental tests of this hypothesis will be shown. Variations of the neural circuit for visual orienting across animal species, with more or less V1 involvement, will be discussed. Supported by the Gatsby Charitable Foundation.
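
    The last step of this circuit, SC selecting the most salient location via lateral inhibition, can be sketched as a toy winner-take-all network (a schematic reading of the abstract, not Zhaoping's model; the inhibition constant and salience map are made up):

```python
import numpy as np

def select_saccade_target(v1_salience, inhibition=0.2, steps=50):
    """Each SC unit is excited by its V1 salience input and inhibited by the
    summed activity of the other units; iterating leaves the most salient
    location dominant (winner-take-all)."""
    drive = np.asarray(v1_salience, dtype=float)
    a = drive.copy()
    for _ in range(steps):
        a = np.maximum(drive + a - inhibition * (a.sum() - a), 0.0)
    return int(np.argmax(a))

# A conspicuous item among uniform distractors wins the competition:
v1_map = [0.20, 0.25, 0.20, 0.90, 0.22, 0.20]
target = select_saccade_target(v1_map)
```
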

  11. Novel Models of Visual Topographic Map Alignment in the Superior Colliculus

    PubMed Central

    El-Ghazawi, Tarek A.; Triplett, Jason W.

    2016-01-01

    The establishment of precise neuronal connectivity during development is critical for sensing the external environment and informing appropriate behavioral responses. In the visual system, many connections are organized topographically, which preserves the spatial order of the visual scene. The superior colliculus (SC) is a midbrain nucleus that integrates visual inputs from the retina and primary visual cortex (V1) to regulate goal-directed eye movements. In the SC, topographically organized inputs from the retina and V1 must be aligned to facilitate integration. Previously, we showed that retinal input instructs the alignment of V1 inputs in the SC in a manner dependent on spontaneous neuronal activity; however, the mechanism of activity-dependent instruction remains unclear. To begin to address this gap, we developed two novel computational models of visual map alignment in the SC that incorporate distinct activity-dependent components. First, a Correlational Model assumes that V1 inputs achieve alignment with established retinal inputs through simple correlative firing mechanisms. A second Integrational Model assumes that V1 inputs contribute to the firing of SC neurons during alignment. Both models accurately replicate in vivo findings in wild type, transgenic and combination mutant mouse models, suggesting either activity-dependent mechanism is plausible. In silico experiments reveal distinct behaviors in response to weakening retinal drive, providing insight into the nature of the system governing map alignment depending on the activity-dependent strategy utilized. Overall, we describe novel computational frameworks of visual map alignment that accurately model many aspects of the in vivo process and propose experiments to test them. PMID:28027309
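
    A generic Hebbian reading of the Correlational Model can be sketched as follows (this is not the authors' implementation; the 1D maps, wave shape, and learning rate are invented): spontaneous waves drive both SC, via the established retinocollicular map, and retinotopic V1, and simple correlation-based plasticity then aligns the V1-to-SC projection with the retinal map.

```python
import numpy as np

def correlational_alignment(n=20, n_waves=3000, sigma=1.5, lr=0.01, seed=1):
    """Toy Correlational Model: a spontaneous retinal wave activates matching
    positions in SC (via retinal input) and in retinotopic V1; Hebbian
    updates then align the V1->SC weight matrix with the retinal map."""
    rng = np.random.default_rng(seed)
    pos = np.arange(n)
    W = rng.uniform(0.0, 0.1, size=(n, n))  # V1->SC weights, initially unordered
    for _ in range(n_waves):
        c = rng.uniform(-3 * sigma, (n - 1) + 3 * sigma)  # wave center
        wave = np.exp(-(pos - c) ** 2 / (2 * sigma ** 2))
        sc, v1 = wave, wave                   # both maps see the same wave
        W += lr * np.outer(sc, v1)            # fire together, wire together
        W /= W.sum(axis=1, keepdims=True)     # normalization bounds the weights
    return W

W = correlational_alignment()
preferred_v1_input = np.argmax(W, axis=1)  # each SC site's strongest V1 partner
```

    After training, each SC site's strongest V1 input sits at the matching map position, i.e. the two topographic maps are aligned.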

  12. Influence of Visual Prism Adaptation on Auditory Space Representation.

    PubMed

    Pochopien, Klaudia; Fahle, Manfred

    2017-01-01

    Prisms shifting the visual input sideways produce a mismatch between the visual versus felt position of one's hand. Prism adaptation eliminates this mismatch, realigning hand proprioception with visual input. Whether this realignment concerns exclusively the visuo-(hand)motor system or generalizes to acoustic inputs is controversial. We here show that there is indeed a slight influence of visual adaptation on the perceived direction of acoustic sources. However, this shift in perceived auditory direction can be fully explained by a subconscious head rotation during prism exposure and by changes in arm proprioception. Hence, prism adaptation generalizes only indirectly to auditory space perception.

  13. Spatial Tuning Shifts Increase the Discriminability and Fidelity of Population Codes in Visual Cortex

    PubMed Central

    2017-01-01

    Selective visual attention enables organisms to enhance the representation of behaviorally relevant stimuli by altering the encoding properties of single receptive fields (RFs). Yet we know little about how the attentional modulations of single RFs contribute to the encoding of an entire visual scene. Addressing this issue requires (1) measuring a group of RFs that tile a continuous portion of visual space, (2) constructing a population-level measurement of spatial representations based on these RFs, and (3) linking how different types of RF attentional modulations change the population-level representation. To accomplish these aims, we used fMRI to characterize the responses of thousands of voxels in retinotopically organized human cortex. First, we found that the response modulations of voxel RFs (vRFs) depend on the spatial relationship between the RF center and the visual location of the attended target. Second, we used two analyses to assess the spatial encoding quality of a population of voxels. We found that attention increased fine spatial discriminability and representational fidelity near the attended target. Third, we linked these findings by manipulating the observed vRF attentional modulations and recomputing our measures of the fidelity of population codes. Surprisingly, we discovered that attentional enhancements of population-level representations largely depend on position shifts of vRFs, rather than changes in size or gain. Our data suggest that position shifts of single RFs are a principal mechanism by which attention enhances population-level representations in visual cortex. SIGNIFICANCE STATEMENT Although changes in the gain and size of RFs have dominated our view of how attention modulates visual information codes, such hypotheses have largely relied on the extrapolation of single-cell responses to population responses. 
Here we use fMRI to relate changes in single voxel receptive fields (vRFs) to changes in population-level representations. We find that vRF position shifts contribute more to population-level enhancements of visual information than changes in vRF size or gain. This finding suggests that position shifts are a principal mechanism by which spatial attention enhances population codes for relevant visual information. This poses challenges for labeled line theories of information processing, suggesting that downstream regions likely rely on distributed inputs rather than single neuron-to-neuron mappings. PMID:28242794
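
    The paper's central manipulation, recomputing population discriminability after applying only the position shifts, can be sketched in miniature (the Gaussian vRFs and the attraction factor below are invented stand-ins for the fitted fMRI receptive fields):

```python
import numpy as np

def discriminability_at(x0, centers, sigma=1.0, delta=0.1):
    """Population discriminability: how differently a bank of Gaussian RFs
    responds to two positions straddling x0."""
    resp = lambda x: np.exp(-(x - centers) ** 2 / (2 * sigma ** 2))
    return np.sqrt(np.sum((resp(x0 + delta) - resp(x0 - delta)) ** 2))

centers = np.arange(-8.0, 8.5, 1.0)  # RF centers tiling space; attended target at 0
shifted = centers * 0.6              # pure position shift toward the target,
                                     # with no change in RF size or gain
baseline = discriminability_at(0.0, centers)
attended = discriminability_at(0.0, shifted)
```

    Shifting centers toward the target alone, with size and gain untouched, already raises discriminability near the attended location, mirroring the paper's conclusion.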

  14. Robotic assisted andrological surgery

    PubMed Central

    Parekattil, Sijo J; Gudeloglu, Ahmet

    2013-01-01

    The introduction of the operative microscope for andrological surgery in the 1970s provided enhanced magnification and accuracy, unparalleled by any previous visual loupe or magnification techniques. This technology revolutionized techniques for microsurgery in andrology. Today, we may be on the verge of a second such revolution through the incorporation of robotic assisted platforms for microsurgery in andrology. Robotic assisted microsurgery is being utilized to a greater degree in andrology and a number of other microsurgical fields, such as ophthalmology, hand surgery, and plastic and reconstructive surgery. The potential advantages of robotic assisted platforms include elimination of tremor, improved stability, surgeon ergonomics, scalability of motion, multi-input visual interfaces with up to three simultaneous visual views, enhanced magnification, and the ability to manipulate three surgical instruments and cameras simultaneously. This review paper begins with the historical development of robotic microsurgery. It then provides an in-depth presentation of the technique and outcomes of common robotic microsurgical andrological procedures, such as vasectomy reversal, subinguinal varicocelectomy, targeted spermatic cord denervation (for chronic orchialgia) and robotic assisted microsurgical testicular sperm extraction (microTESE). PMID:23241637

  15. Attention, working memory, and phenomenal experience of WM content: memory levels determined by different types of top-down modulation.

    PubMed

    Jacob, Jane; Jacobs, Christianne; Silvanto, Juha

    2015-01-01

    What is the role of top-down attentional modulation in consciously accessing working memory (WM) content? In influential WM models, information can exist in different states, determined by allocation of attention; placing the original memory representation in the center of focused attention gives rise to conscious access. Here we discuss various lines of evidence indicating that such attentional modulation is not sufficient for memory content to be phenomenally experienced. We propose that, in addition to attentional modulation of the memory representation, another type of top-down modulation is required: suppression of all incoming visual information, via inhibition of early visual cortex. In this view, there are three distinct memory levels, as a function of the top-down control associated with them: (1) Nonattended, nonconscious associated with no attentional modulation; (2) attended, phenomenally nonconscious memory, associated with attentional enhancement of the actual memory trace; (3) attended, phenomenally conscious memory content, associated with enhancement of the memory trace and top-down suppression of all incoming visual input.

  16. Parameters of semantic multisensory integration depend on timing and modality order among people on the autism spectrum: evidence from event-related potentials.

    PubMed

    Russo, N; Mottron, L; Burack, J A; Jemel, B

    2012-07-01

    Individuals with autism spectrum disorders (ASD) report difficulty integrating simultaneously presented visual and auditory stimuli (Iarocci & McDonald, 2006), albeit showing enhanced perceptual processing of unisensory stimuli, as well as an enhanced role of perception in higher-order cognitive tasks (Enhanced Perceptual Functioning (EPF) model; Mottron, Dawson, Soulières, Hubert, & Burack, 2006). Individuals with an ASD also integrate auditory-visual inputs over longer periods of time than matched typically developing (TD) peers (Kwakye, Foss-Feig, Cascio, Stone & Wallace, 2011). To tease apart this apparent dichotomy between extended multisensory processing and enhanced perceptual processing, we used behavioral and electrophysiological measurements of audio-visual integration among persons with ASD. Thirteen TD participants and 14 autistic participants matched on IQ completed a forced choice multisensory semantic congruence task requiring speeded responses regarding the congruence or incongruence of animal sounds and pictures. Stimuli were presented simultaneously or sequentially at various stimulus onset asynchronies in both auditory first and visual first presentations. No group differences were noted in reaction time (RT) or accuracy. The latency at which congruent and incongruent waveforms diverged was the component of interest. In simultaneous presentations, congruent and incongruent waveforms diverged earlier (circa 150 ms) among persons with ASD than among TD individuals (around 350 ms). In sequential presentations, asymmetries in the timing of neuronal processing were noted in ASD which depended on stimulus order, but these were consistent with the nature of specific perceptual strengths in this group. These findings extend the Enhanced Perceptual Functioning Model to the multisensory domain, and provide a more nuanced context for interpreting ERP findings of impaired semantic processing in ASD. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Learned filters for object detection in multi-object visual tracking

    NASA Astrophysics Data System (ADS)

    Stamatescu, Victor; Wong, Sebastien; McDonnell, Mark D.; Kearney, David

    2016-05-01

    We investigate the application of learned convolutional filters in multi-object visual tracking. The filters were learned in both a supervised and an unsupervised manner from image data using artificial neural networks. This work follows recent results in the field of machine learning that demonstrate the use of learned filters for enhanced object detection and classification. Here we employ a track-before-detect approach to multi-object tracking, where tracking guides the detection process. The object detection provides a probabilistic input image, calculated by selecting from features obtained using banks of generative or discriminative learned filters. We present a systematic evaluation of these convolutional filters using a real-world data set that examines their performance as generic object detectors.
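
    The probabilistic detection image described above can be sketched as filter-bank correlation followed by logistic squashing (the hand-made "corner" filter below stands in for a learned one; the sizes and values are invented):

```python
import numpy as np

def detection_probability_map(image, filters):
    """Slide a bank of filters over the image and squash the best response at
    each location into a pseudo-probability of 'object here'."""
    h, w = filters[0].shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + h, j:j + w]
            out[i, j] = max(float(np.sum(patch * f)) for f in filters)
    return 1.0 / (1.0 + np.exp(-out))  # logistic normalization

corner = np.array([[1.0, 1.0], [1.0, -3.0]])  # hand-made 'corner' filter
img = np.zeros((5, 5)); img[1:3, 1:3] = 1.0   # a bright 2x2 object
pmap = detection_probability_map(img, [corner])
```

    In a track-before-detect pipeline, a map like `pmap` is what the tracker consumes instead of hard detections.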

  18. A special role for binocular visual input during development and as a component of occlusion therapy for treatment of amblyopia.

    PubMed

    Mitchell, Donald E

    2008-01-01

    To review work on animal models of deprivation amblyopia that points to a special role for binocular visual input in the development of spatial vision and as a component of occlusion (patching) therapy for amblyopia. The studies reviewed employ behavioural methods to measure the effects of various early experiential manipulations on the development of the visual acuity of the two eyes. Short periods of concordant binocular input, if continuous, can offset much longer daily periods of monocular deprivation to allow the development of normal visual acuity in both eyes. It appears that the visual system does not weigh all visual input equally in terms of its ability to impact on the development of vision but instead places greater weight on concordant binocular exposure. Experimental models of patching therapy, imposed on animals in which amblyopia had been induced by a prior period of early monocular deprivation, indicate that the benefits of patching therapy may be only temporary and decline rapidly after patching is discontinued. However, when combined with critical amounts of binocular visual input each day, the benefits of patching can be both heightened and made permanent. Taken together with demonstrations of retained binocular connections in the visual cortex of monocularly deprived animals, a strong argument is made for inclusion of specific training of stereoscopic vision for part of the daily periods of binocular exposure that should be incorporated as part of any patching protocol for amblyopia.

  19. Specificity and timescales of cortical adaptation as inferences about natural movie statistics.

    PubMed

    Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia

    2016-10-01

    Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation.
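
    The model's core operation, dividing the present input by the past only insofar as the two are inferred to be dependent, reduces to something like the following one-line caricature (invented constants; not the authors' full Bayesian model):

```python
import numpy as np

def adapted_response(frames, dependence, sigma=0.1):
    """Divisive normalization over time: the present input is divided by the
    recent past, weighted by the inferred statistical dependence (0..1)."""
    present, past = frames[-1], np.mean(frames[:-1])
    return present / (sigma + dependence * past)

repeated    = adapted_response(np.array([1.0, 1.0, 1.0, 1.0]), dependence=1.0)
novel       = adapted_response(np.array([0.0, 0.0, 0.0, 1.0]), dependence=1.0)
independent = adapted_response(np.array([1.0, 1.0, 1.0, 1.0]), dependence=0.0)
# Adaptation suppresses a repeated stimulus, but only when past and present
# are inferred to be statistically dependent.
```
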

  20. Specificity and timescales of cortical adaptation as inferences about natural movie statistics

    PubMed Central

    Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia

    2016-01-01

    Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation. PMID:27699416

  1. Lymphoma diagnosis in histopathology using a multi-stage visual learning approach

    NASA Astrophysics Data System (ADS)

    Codella, Noel; Moradi, Mehdi; Matasar, Matt; Syeda-Mahmood, Tanveer; Smith, John R.

    2016-03-01

    This work evaluates the performance of a multi-stage image enhancement, segmentation, and classification approach for lymphoma recognition in hematoxylin and eosin (H and E) stained histopathology slides of excised human lymph node tissue. In the first stage, the original histology slide undergoes various image enhancement and segmentation operations, creating an additional 5 images for every slide. These new images emphasize unique aspects of the original slide, including dominant staining, staining segmentations, non-cellular groupings, and cellular groupings. For the resulting 6 total images, a collection of visual features is extracted from 3 different spatial configurations. Visual features include the first fully connected layer (4096 dimensions) of the Caffe convolutional neural network trained from ImageNet data. In total, over 200 resultant visual descriptors are extracted for each slide. Non-linear SVMs are trained over each of the over 200 descriptors, which are then input to a forward stepwise ensemble selection that optimizes a late fusion sum of logistically normalized model outputs using local hill climbing. The approach is evaluated on a public NIH dataset containing 374 images representing 3 lymphoma conditions: chronic lymphocytic leukemia (CLL), follicular lymphoma (FL), and mantle cell lymphoma (MCL). Results demonstrate a 38.4% reduction in residual error over the current state of the art on this dataset.
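
    The fusion stage described above, forward stepwise selection over logistically normalized model outputs with local hill climbing, can be sketched generically (toy scores and labels below, not the paper's 200 SVM descriptors):

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_ensemble_selection(scores, labels, max_rounds=10):
    """Greedy late fusion: repeatedly add (with replacement) the model whose
    inclusion in the running average of logistic-normalized scores most
    improves accuracy; stop when no candidate improves it (hill climbing)."""
    probs = logistic(np.asarray(scores, dtype=float))  # (n_models, n_samples)
    chosen, fused, best_acc = [], np.zeros(probs.shape[1]), 0.0
    for _ in range(max_rounds):
        accs = [np.mean(((fused * len(chosen) + p) / (len(chosen) + 1) > 0.5) == labels)
                for p in probs]
        m = int(np.argmax(accs))
        if accs[m] < best_acc:
            break
        best_acc = accs[m]
        fused = (fused * len(chosen) + probs[m]) / (len(chosen) + 1)
        chosen.append(m)
    return chosen, best_acc

labels = np.array([False, True, False, True, True, False])
scores = [np.where(labels, 2.0, -2.0),             # a strong descriptor's SVM margins
          [1.0, -1.0, 1.0, -1.0, -1.0, 1.0],       # an anti-correlated descriptor
          np.zeros(6)]                             # an uninformative descriptor
ensemble, acc = forward_ensemble_selection(scores, labels)
```

    Selection with replacement lets a strong model enter the fused sum multiple times, which is what gives the greedy procedure its implicit weighting.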

  2. Olfactory-visual integration facilitates perception of subthreshold negative emotion.

    PubMed

    Novak, Lucas R; Gitelman, Darren R; Schuyler, Brianna; Li, Wen

    2015-10-01

    A fast-growing literature on multisensory emotion integration notwithstanding, the chemical senses, intimately associated with emotion, have been largely overlooked. Moreover, an ecologically highly relevant principle of "inverse effectiveness", rendering maximal integration efficacy with impoverished sensory input, remains to be assessed in emotion integration. Presenting minute, subthreshold negative (vs. neutral) cues in faces and odors, we demonstrated olfactory-visual emotion integration in improved emotion detection (especially among individuals with weaker perception of unimodal negative cues) and response enhancement in the amygdala. Moreover, while perceptual gain for visual negative emotion involved the posterior superior temporal sulcus/pSTS, perceptual gain for olfactory negative emotion engaged both the associative olfactory (orbitofrontal) cortex and amygdala. Dynamic causal modeling (DCM) analysis of fMRI time series further revealed connectivity strengthening among these areas during crossmodal emotion integration. That multisensory (but not low-level unisensory) areas exhibited both enhanced response and region-to-region coupling favors a top-down (vs. bottom-up) account of olfactory-visual emotion integration. Current findings thus confirm the involvement of multisensory convergence areas, while highlighting unique characteristics of olfaction-related integration. Furthermore, successful crossmodal binding of subthreshold aversive cues not only supports the principle of "inverse effectiveness" in emotion integration but also accentuates the automatic, unconscious quality of crossmodal emotion synthesis. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Visual training paired with electrical stimulation of the basal forebrain improves orientation-selective visual acuity in the rat.

    PubMed

    Kang, Jun Il; Groleau, Marianne; Dotigny, Florence; Giguère, Hugo; Vaucher, Elvire

    2014-07-01

    The cholinergic afferents from the basal forebrain to the primary visual cortex play a key role in visual attention and cortical plasticity. These afferent fibers modulate acute and long-term responses of visual neurons to specific stimuli. The present study evaluates whether this cholinergic modulation of visual neurons results in changes in cortical activity and visual perception. Awake adult rats were exposed repeatedly for 2 weeks to an orientation-specific grating, with or without coupling this visual stimulation to an electrical stimulation of the basal forebrain. The visual acuity, as measured using a visual water maze before and after the exposure to the orientation-specific grating, was increased in the group of trained rats with simultaneous basal forebrain/visual stimulation. The increase in visual acuity was not observed when visual training or basal forebrain stimulation was performed separately, or when cholinergic fibers were selectively lesioned prior to the visual stimulation. The visual evoked potentials showed a long-lasting increase in cortical reactivity of the primary visual cortex after coupled visual/cholinergic stimulation, as well as c-Fos immunoreactivity in both pyramidal cells and GABAergic interneurons. These findings demonstrate that, when coupled with visual training, the cholinergic system improves visual performance for the trained orientation, probably through enhancement of attentional processes and cortical plasticity in V1 related to the ratio of excitatory/inhibitory inputs. This study opens the possibility of establishing efficient rehabilitation strategies for facilitating visual capacity.

  4. Changes in muscle activation patterns in response to enhanced sensory input during treadmill stepping in infants born with myelomeningocele.

    PubMed

    Pantall, Annette; Teulier, Caroline; Ulrich, Beverly D

    2012-12-01

    Infants with myelomeningocele (MMC) increase step frequency in response to modifications to the treadmill surface. The aim was to investigate how these modifications impacted the electromyographic (EMG) patterns. We analyzed EMG from 19 infants aged 2-10 months, with MMC at the lumbosacral level. We supported infants upright on the treadmill for 12 trials, each 30 seconds long. Modifications included visual flow, unloading, weights, Velcro, and friction. Surface electrodes recorded EMG from tibialis anterior, lateral gastrocnemius, rectus femoris and biceps femoris. We determined muscle bursts for each stride cycle and from these calculated various parameters. Results indicated that each of the five sensory conditions generated different motor patterns. Visual flow and friction, which we previously reported increased step frequency, impacted lateral gastrocnemius most. Weights, which significantly decreased step frequency, increased burst duration and co-activity of the proximal muscles. We also observed an age effect, with all conditions increasing muscle activity in younger infants, whereas in older infants visual flow and unloading stimulated most activity. In conclusion, we have demonstrated that infants with myelomeningocele at levels which impact the myotomes of major locomotor muscles find ways to respond and adapt their motor output to changes in sensory input. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Changes in muscle activation patterns in response to enhanced sensory input during treadmill stepping in infants born with myelomeningocele

    PubMed Central

    Pantall, Annette; Teulier, Caroline; Ulrich, Beverly D.

    2013-01-01

    Infants with myelomeningocele (MMC) increase step frequency in response to modifications to the treadmill surface. The aim was to investigate how these modifications impacted the electromyographic (EMG) patterns. We analyzed EMG from 19 infants aged 2–10 months, with MMC at the lumbosacral level. We supported infants upright on the treadmill for 12 trials, each 30 seconds long. Modifications included visual flow, unloading, weights, Velcro, and friction. Surface electrodes recorded EMG from tibialis anterior, lateral gastrocnemius, rectus femoris and biceps femoris. We determined muscle bursts for each stride cycle and from these calculated various parameters. Results indicated that each of the five sensory conditions generated different motor patterns. Visual flow and friction, which we previously reported increased step frequency, impacted lateral gastrocnemius most. Weights, which significantly decreased step frequency, increased burst duration and co-activity of the proximal muscles. We also observed an age effect, with all conditions increasing muscle activity in younger infants, whereas in older infants visual flow and unloading stimulated most activity. In conclusion, we have demonstrated that infants with myelomeningocele at levels which impact the myotomes of major locomotor muscles find ways to respond and adapt their motor output to changes in sensory input. PMID:23158017

  6. Peripheral Processing Facilitates Optic Flow-Based Depth Perception

    PubMed Central

    Li, Jinglin; Lindemann, Jens P.; Egelhaaf, Martin

    2016-01-01

    Flying insects, such as flies or bees, rely on consistent information regarding the depth structure of the environment when performing their flight maneuvers in cluttered natural environments. These behaviors include avoiding collisions, approaching targets or spatial navigation. Insects are thought to obtain depth information visually from the retinal image displacements (“optic flow”) during translational ego-motion. Optic flow in the insect visual system is processed by a mechanism that can be modeled by correlation-type elementary motion detectors (EMDs). However, it is still an open question how spatial information can be extracted reliably from the responses of the highly contrast- and pattern-dependent EMD responses, especially if the vast range of light intensities encountered in natural environments is taken into account. This question will be addressed here by systematically modeling the peripheral visual system of flies, including various adaptive mechanisms. Different model variants of the peripheral visual system were stimulated with image sequences that mimic the panoramic visual input during translational ego-motion in various natural environments, and the resulting peripheral signals were fed into an array of EMDs. We characterized the influence of each peripheral computational unit on the representation of spatial information in the EMD responses. Our model simulations reveal that information about the overall light level needs to be eliminated from the EMD input as is accomplished under light-adapted conditions in the insect peripheral visual system. The response characteristics of large monopolar cells (LMCs) resemble that of a band-pass filter, which reduces the contrast dependency of EMDs strongly, effectively enhancing the representation of the nearness of objects and, especially, of their contours. 
We furthermore show that local brightness adaptation of photoreceptors allows for spatial vision under a wide range of dynamic light conditions. PMID:27818631
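
    The correlation-type elementary motion detector (EMD) referenced above follows the classic Hassenstein-Reichardt scheme: each half-detector multiplies a delayed (low-pass filtered) signal from one photoreceptor with the undelayed signal of its neighbour, and the difference of the two mirror-symmetric halves is direction selective. A minimal sketch, with an illustrative filter constant rather than the paper's model parameters:

```python
import numpy as np

def hassenstein_reichardt_emd(left, right, tau=0.9):
    """Correlation-type EMD sketch.

    left, right: 1-D arrays of luminance over time from two
    neighbouring photoreceptors. tau: decay factor of a first-order
    low-pass filter acting as the delay stage (illustrative value).
    """
    def lowpass(x):
        y = np.zeros_like(x, dtype=float)
        for t in range(1, len(x)):
            y[t] = tau * y[t - 1] + (1 - tau) * x[t]
        return y

    # Each half-detector correlates the delayed signal of one input
    # with the undelayed signal of the other; subtracting the mirror
    # half yields a direction-selective output.
    return lowpass(left) * right - left * lowpass(right)
```

    Feeding a brightness step that reaches the left receptor before the right one yields a predominantly positive output; the reversed motion direction flips the sign.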

  7. The case from animal studies for balanced binocular treatment strategies for human amblyopia.

    PubMed

    Mitchell, Donald E; Duffy, Kevin R

    2014-03-01

    Although amblyopia typically manifests itself as a monocular condition, its origin has long been linked to unbalanced neural signals from the two eyes during early postnatal development, a view confirmed by studies conducted on animal models in the last 50 years. Despite recognition of its binocular origin, treatment of amblyopia continues to be dominated by a period of patching of the non-amblyopic eye that necessarily hinders binocular co-operation. This review summarizes evidence from three lines of investigation conducted on an animal model of deprivation amblyopia to support the thesis that treatment of amblyopia should instead focus upon procedures that promote and enhance binocular co-operation. First, experiments with mixed daily visual experience in which episodes of abnormal visual input were pitted against normal binocular exposure revealed that short exposures of the latter offset much longer periods of abnormal input to allow normal development of visual acuity in both eyes. Second, experiments on the use of part-time patching revealed that purposeful introduction of episodes of binocular vision each day could be very beneficial. Periods of binocular exposure that represented 30-50% of the daily visual exposure, combined with daily occlusion of the non-amblyopic eye, could allow recovery of normal vision in the amblyopic eye. Third, very recent experiments demonstrate that a short 10-day period of total darkness can promote very fast and complete recovery of visual acuity in the amblyopic eye of kittens and may represent an example of a class of artificial environments that have similar beneficial effects. Finally, an approach is described to allow timing of events in kitten and human visual system development to be scaled to optimize the ages for therapeutic interventions. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  8. Modeling the Time-Course of Responses for the Border Ownership Selectivity Based on the Integration of Feedforward Signals and Visual Cortical Interactions

    PubMed Central

    Wagatsuma, Nobuhiko; Sakai, Ko

    2017-01-01

    Border ownership (BO) indicates which side of a contour owns a border, and it plays a fundamental role in figure-ground segregation. The majority of neurons in V2 and V4 areas of monkeys exhibit BO selectivity. A physiological work reported that the responses of BO-selective cells show a rapid transition when a presented square is flipped along its classical receptive field (CRF) so that the opposite BO is presented, whereas the transition is significantly slower when a square with a clear BO is replaced by an ambiguous edge, e.g., when the square is enlarged greatly. The rapid transition seemed to reflect the influence of feedforward processing on BO selectivity. Herein, we investigated the role of feedforward signals and cortical interactions for time-courses in BO-selective cells by modeling a visual cortical network comprising V1, V2, and posterior parietal (PP) modules. In our computational model, the recurrent pathways among these modules gradually established the visual progress and the BO assignments. Feedforward inputs mainly determined the activities of these modules. Surrounding suppression/facilitation of early-level areas modulates the activities of V2 cells to provide BO signals. Weak feedback signals from the PP module enhanced the contrast gain extracted in V1, which underlies the attentional modulation of BO signals. Model simulations exhibited time-courses depending on the BO ambiguity, which were caused by the integration delay of V1 and V2 cells and the local inhibition therein given the difference in input stimulus. However, our model did not fully explain the characteristics of crucially slow transition: the responses of BO-selective physiological cells indicated the persistent activation several times longer than that of our model after the replacement with the ambiguous edge. Furthermore, the time-course of BO-selective model cells replicated the attentional modulation of response time in human psychophysical experiments. 
These attentional modulations for time-courses were induced by selective enhancement of early-level features due to interactions between V1 and PP. Our proposed model suggests fundamental roles of surrounding suppression/facilitation based on feedforward inputs as well as the interactions between early and parietal visual areas with respect to the ambiguity dependence of the neural dynamics in intermediate-level vision. PMID:28163688

  9. Modeling the Time-Course of Responses for the Border Ownership Selectivity Based on the Integration of Feedforward Signals and Visual Cortical Interactions.

    PubMed

    Wagatsuma, Nobuhiko; Sakai, Ko

    2016-01-01

    Border ownership (BO) indicates which side of a contour owns a border, and it plays a fundamental role in figure-ground segregation. The majority of neurons in V2 and V4 areas of monkeys exhibit BO selectivity. A physiological work reported that the responses of BO-selective cells show a rapid transition when a presented square is flipped along its classical receptive field (CRF) so that the opposite BO is presented, whereas the transition is significantly slower when a square with a clear BO is replaced by an ambiguous edge, e.g., when the square is enlarged greatly. The rapid transition seemed to reflect the influence of feedforward processing on BO selectivity. Herein, we investigated the role of feedforward signals and cortical interactions for time-courses in BO-selective cells by modeling a visual cortical network comprising V1, V2, and posterior parietal (PP) modules. In our computational model, the recurrent pathways among these modules gradually established the visual progress and the BO assignments. Feedforward inputs mainly determined the activities of these modules. Surrounding suppression/facilitation of early-level areas modulates the activities of V2 cells to provide BO signals. Weak feedback signals from the PP module enhanced the contrast gain extracted in V1, which underlies the attentional modulation of BO signals. Model simulations exhibited time-courses depending on the BO ambiguity, which were caused by the integration delay of V1 and V2 cells and the local inhibition therein given the difference in input stimulus. However, our model did not fully explain the characteristics of crucially slow transition: the responses of BO-selective physiological cells indicated the persistent activation several times longer than that of our model after the replacement with the ambiguous edge. Furthermore, the time-course of BO-selective model cells replicated the attentional modulation of response time in human psychophysical experiments. 
These attentional modulations for time-courses were induced by selective enhancement of early-level features due to interactions between V1 and PP. Our proposed model suggests fundamental roles of surrounding suppression/facilitation based on feedforward inputs as well as the interactions between early and parietal visual areas with respect to the ambiguity dependence of the neural dynamics in intermediate-level vision.

  10. Processing of Visual Imagery by an Adaptive Model of the Visual System: Its Performance and its Significance. Final Report, June 1969-March 1970.

    ERIC Educational Resources Information Center

    Tallman, Oliver H.

    A digital simulation of a model for the processing of visual images is derived from known aspects of the human visual system. The fundamental principle of computation suggested by a biological model is a transformation that distributes information contained in an input stimulus everywhere in a transform domain. Each sensory input contributes under…

  11. Training Modalities to Increase Sensorimotor Adaptability

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Mulavara, A. P.; Peters, B. T.; Brady, R.; Audas, C.; Cohen, H. S.

    2009-01-01

    During the acute phase of adaptation to novel gravitational environments, sensorimotor disturbances have the potential to disrupt the ability of astronauts to perform required mission tasks. The goal of our current series of studies is to develop a sensorimotor adaptability (SA) training program designed to facilitate recovery of functional capabilities when astronauts transition to different gravitational environments. The project has conducted a series of studies investigating the efficacy of treadmill training combined with a variety of sensory challenges (incongruent visual input, support surface instability) designed to increase adaptability. SA training using a treadmill combined with exposure to altered visual input was effective in producing increased adaptability in a more complex over-ground ambulatory task on an obstacle course. This confirms that for a complex task like walking, treadmill training contains enough of the critical features of overground walking to be an effective training modality. SA training can be optimized by using a periodized training schedule. Test sessions that each contain short-duration exposures to multiple perturbation stimuli allow subjects to acquire a greater ability to rapidly reorganize appropriate response strategies when encountering a novel sensory environment. Using a treadmill mounted on top of a six degree-of-freedom motion base platform we investigated locomotor training responses produced by subjects introduced to a dynamic walking surface combined with alterations in visual flow. Subjects who received this training had improved locomotor performance and faster reaction times when exposed to the novel sensory stimuli compared to control subjects. Results also demonstrate that individual sensory biases (i.e. increased visual dependency) can predict adaptive responses to novel sensory environments, suggesting that individual training prescriptions can be developed to enhance adaptability. 
These data indicate that SA training can be effectively integrated with treadmill exercise and optimized to provide a unique system that combines multiple training requirements in a single countermeasure system. Learning Objectives: The development of a new countermeasure approach that enhances sensorimotor adaptability will be discussed.

  12. Higher order visual input to the mushroom bodies in the bee, Bombus impatiens.

    PubMed

    Paulk, Angelique C; Gronenberg, Wulfila

    2008-11-01

    To produce appropriate behaviors based on biologically relevant associations, sensory pathways conveying different modalities are integrated by higher-order central brain structures, such as insect mushroom bodies. To address this function of sensory integration, we characterized the structure and response of optic lobe (OL) neurons projecting to the calyces of the mushroom bodies in bees. Bees are well known for their visual learning and memory capabilities and their brains possess major direct visual input from the optic lobes to the mushroom bodies. To functionally characterize these visual inputs to the mushroom bodies, we recorded intracellularly from neurons in bumblebees (Apidae: Bombus impatiens) and a single neuron in a honeybee (Apidae: Apis mellifera) while presenting color and motion stimuli. All of the mushroom body input neurons were color sensitive while a subset was motion sensitive. Additionally, most of the mushroom body input neurons would respond to the first, but not to subsequent, presentations of repeated stimuli. In general, the medulla or lobula neurons projecting to the calyx signaled specific chromatic, temporal, and motion features of the visual world to the mushroom bodies, which included sensory information required for the biologically relevant associations bees form during foraging tasks.

  13. Visual speech information: a help or hindrance in perceptual processing of dysarthric speech.

    PubMed

    Borrie, Stephanie A

    2015-03-01

    This study investigated the influence of visual speech information on perceptual processing of neurologically degraded speech. Fifty listeners identified spastic dysarthric speech under both audio (A) and audiovisual (AV) conditions. Condition comparisons revealed that the addition of visual speech information enhanced processing of the neurologically degraded input in terms of (a) acuity (percent phonemes correct) of vowels and consonants and (b) recognition (percent words correct) of predictive and nonpredictive phrases. Listeners exploited stress-based segmentation strategies more readily in AV conditions, suggesting that the perceptual benefit associated with adding visual speech information to the auditory signal-the AV advantage-has both segmental and suprasegmental origins. Results also revealed that the magnitude of the AV advantage can be predicted, to some degree, by the extent to which an individual utilizes syllabic stress cues to inform word recognition in AV conditions. Findings inform the development of a listener-specific model of speech perception that applies to processing of dysarthric speech in everyday communication contexts.

  14. Speaking Math--A Voice Input, Speech Output Calculator for Students with Visual Impairments

    ERIC Educational Resources Information Center

    Bouck, Emily C.; Flanagan, Sara; Joshi, Gauri S.; Sheikh, Waseem; Schleppenbach, Dave

    2011-01-01

    This project explored a newly developed computer-based voice input, speech output (VISO) calculator. Three high school students with visual impairments educated at a state school for the blind and visually impaired participated in the study. The time they took to complete assessments and the average number of attempts per problem were recorded…

  15. Lateral Spread of Orientation Selectivity in V1 is Controlled by Intracortical Cooperativity

    PubMed Central

    Chavane, Frédéric; Sharon, Dahlia; Jancke, Dirk; Marre, Olivier; Frégnac, Yves; Grinvald, Amiram

    2011-01-01

    Neurons in the primary visual cortex receive subliminal information originating from the periphery of their receptive fields (RF) through a variety of cortical connections. In the cat primary visual cortex, long-range horizontal axons have been reported to preferentially bind to distant columns of similar orientation preferences, whereas feedback connections from higher visual areas provide a more diverse functional input. To understand the role of these lateral interactions, it is crucial to characterize their effective functional connectivity and tuning properties. However, the overall functional impact of cortical lateral connections, whatever their anatomical origin, is unknown since it has never been directly characterized. Using direct measurements of postsynaptic integration in cat areas 17 and 18, we performed multi-scale assessments of the functional impact of visually driven lateral networks. Voltage-sensitive dye imaging showed that local oriented stimuli evoke an orientation-selective activity that remains confined to the cortical feedforward imprint of the stimulus. Beyond a distance of one hypercolumn, the lateral spread of cortical activity gradually lost its orientation preference approximated as an exponential with a space constant of about 1 mm. Intracellular recordings showed that this loss of orientation selectivity arises from the diversity of converging synaptic input patterns originating from outside the classical RF. In contrast, when the stimulus size was increased, we observed orientation-selective spread of activation beyond the feedforward imprint. We conclude that stimulus-induced cooperativity enhances the long-range orientation-selective spread. PMID:21629708

  16. Distinct GABAergic targets of feedforward and feedback connections between lower and higher areas of rat visual cortex.

    PubMed

    Gonchar, Yuri; Burkhalter, Andreas

    2003-11-26

    Processing of visual information is performed in different cortical areas that are interconnected by feedforward (FF) and feedback (FB) pathways. Although FF and FB inputs are excitatory, their influences on pyramidal neurons also depend on the outputs of GABAergic neurons, which receive FF and FB inputs. Rat visual cortex contains at least three different families of GABAergic neurons that express parvalbumin (PV), calretinin (CR), and somatostatin (SOM) (Gonchar and Burkhalter, 1997). To examine whether pathway-specific inhibition (Shao and Burkhalter, 1996) is attributable to distinct connections with GABAergic neurons, we traced FF and FB inputs to PV, CR, and SOM neurons in layers 1-2/3 of area 17 and the secondary lateromedial area in rat visual cortex. We found that in layer 2/3 maximally 2% of FF and FB inputs go to CR and SOM neurons. This contrasts with 12-13% of FF and FB inputs onto layer 2/3 PV neurons. Unlike inputs to layer 2/3, connections to layer 1, which contains CR but lacks SOM and PV somata, are pathway-specific: 21% of FB inputs go to CR neurons, whereas FF inputs to layer 1 and its CR neurons are absent. These findings suggest that FF and FB influences on layer 2/3 pyramidal neurons mainly involve disynaptic connections via PV neurons that control the spike outputs to axons and proximal dendrites. Unlike FF input, FB input in addition makes a disynaptic link via CR neurons, which may influence the excitability of distal pyramidal cell dendrites in layer 1.

  17. Research on flight stability performance of rotor aircraft based on visual servo control method

    NASA Astrophysics Data System (ADS)

    Yu, Yanan; Chen, Jing

    2016-11-01

    A control method based on visual servo feedback is proposed to improve the attitude of a quad-rotor aircraft and enhance its flight stability. Ground target images are obtained by a visual platform fixed on the aircraft. The scale-invariant feature transform (SIFT) algorithm is used to extract image feature information. Based on the image feature analysis, fast motion estimation is performed and used as an input signal to a PID flight control system to realize real-time attitude adjustment in flight. Imaging tests and simulation results show that the proposed method performs well in terms of flight stability compensation and attitude adjustment. The response speed and control precision meet the requirements of practical use, and the method can reduce or even eliminate the influence of environmental disturbance. The proposed method is therefore of value for addressing the aircraft disturbance-rejection problem.
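
    The control pipeline described (an image-based motion estimate feeding a PID attitude loop) can be sketched generically. The controller below is a textbook discrete PID; the gains, time step, and first-order plant are illustrative assumptions, not values from the paper:

```python
class PID:
    """Textbook discrete PID controller (gains illustrative)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Toy usage: drive a first-order attitude model toward a setpoint
# (a constant here, standing in for the SIFT-derived motion estimate).
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.05)
angle = 0.0
for _ in range(400):
    angle += pid.update(1.0, angle) * 0.05  # integrate commanded rate
```

    With these gains the toy loop settles close to the setpoint within the simulated 20 s; in the paper's setting the setpoint would be updated each frame from the motion estimate.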

  18. Effect of mechanical tactile noise on amplitude of visual evoked potentials: multisensory stochastic resonance.

    PubMed

    Méndez-Balbuena, Ignacio; Huidobro, Nayeli; Silva, Mayte; Flores, Amira; Trenado, Carlos; Quintanar, Luis; Arias-Carrión, Oscar; Kristeva, Rumyana; Manjarrez, Elias

    2015-10-01

    The present investigation documents the electrophysiological occurrence of multisensory stochastic resonance in the human visual pathway elicited by tactile noise. We define multisensory stochastic resonance of brain evoked potentials as the phenomenon in which an intermediate level of input noise of one sensory modality enhances the brain evoked response of another sensory modality. Here we examined this phenomenon in visual evoked potentials (VEPs) modulated by the addition of tactile noise. Specifically, we examined whether a particular level of mechanical Gaussian noise applied to the index finger can improve the amplitude of the VEP. We compared the amplitude of the positive P100 VEP component between zero noise (ZN), optimal noise (ON), and high mechanical noise (HN). The data disclosed an inverted U-like graph for all the subjects, thus demonstrating the occurrence of a multisensory stochastic resonance in the P100 VEP. Copyright © 2015 the American Physiological Society.
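
    The inverted U-like relation between noise level and response amplitude can be illustrated with a toy threshold-detector model (purely illustrative, not the authors' analysis): a subthreshold signal becomes detectable at an intermediate noise level, while strong noise swamps discrimination.

```python
import numpy as np

def discrimination(noise_sd, signal=0.8, threshold=1.0,
                   trials=20000, seed=0):
    """Hit rate minus false-alarm rate for a simple threshold detector
    with additive Gaussian noise (toy stochastic-resonance model;
    all parameter values are illustrative)."""
    rng = np.random.default_rng(seed)
    noise = noise_sd * rng.normal(size=trials)
    hits = ((signal + noise) > threshold).mean()   # signal-present trials
    false_alarms = (noise > threshold).mean()      # signal-absent trials
    return hits - false_alarms
```

    Sweeping `noise_sd` from near zero upward traces the inverted U-like curve analogous to the one reported for the P100 amplitude: performance is poor at zero noise (ZN), peaks at an optimal noise level (ON), and falls again at high noise (HN).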

  19. The involvement of central attention in visual search is determined by task demands.

    PubMed

    Han, Suk Won

    2017-04-01

    Attention, the mechanism by which a subset of sensory inputs is prioritized over others, operates at multiple processing stages. Specifically, attention enhances weak sensory signal at the perceptual stage, while it serves to select appropriate responses or consolidate sensory representations into short-term memory at the central stage. This study investigated the independence and interaction between perceptual and central attention. To do so, I used a dual-task paradigm, pairing a four-alternative choice task with a visual search task. The results showed that central attention for response selection was engaged in perceptual processing for visual search when the number of search items increased, thereby increasing the demand for serial allocation of focal attention. By contrast, central attention and perceptual attention remained independent as far as the demand for serial shifting of focal attention remained constant; decreasing stimulus contrast or increasing the set size of a parallel search did not evoke the involvement of central attention in visual search. These results suggest that the nature of concurrent visual search process plays a crucial role in the functional interaction between two different types of attention.

  20. Delta: a new web-based 3D genome visualization and analysis platform.

    PubMed

    Tang, Bixia; Li, Feifei; Li, Jing; Zhao, Wenming; Zhang, Zhihua

    2018-04-15

    Delta is an integrative visualization and analysis platform to facilitate visually annotating and exploring the 3D physical architecture of genomes. Delta takes Hi-C or ChIA-PET contact matrix as input and predicts the topologically associating domains and chromatin loops in the genome. It then generates a physical 3D model which represents the plausible consensus 3D structure of the genome. Delta features a highly interactive visualization tool which enhances the integration of genome topology/physical structure with extensive genome annotation by juxtaposing the 3D model with diverse genomic assay outputs. Finally, by visually comparing the 3D model of the β-globin gene locus and its annotation, we speculated a plausible transitory interaction pattern in the locus. Experimental evidence was found to support this speculation by literature survey. This served as an example of intuitive hypothesis testing with the help of Delta. Delta is freely accessible from http://delta.big.ac.cn, and the source code is available at https://github.com/zhangzhwlab/delta. zhangzhihua@big.ac.cn. Supplementary data are available at Bioinformatics online.

  1. Influence of visual inputs on quasi-static standing postural steadiness in individuals with spinal cord injury.

    PubMed

    Lemay, Jean-François; Gagnon, Dany; Duclos, Cyril; Grangeon, Murielle; Gauthier, Cindy; Nadeau, Sylvie

    2013-06-01

    Postural steadiness while standing is impaired in individuals with spinal cord injury (SCI) and could be potentially associated with increased reliance on visual inputs. The purpose of this study was to compare individuals with SCI and able-bodied participants on their use of visual inputs to maintain standing postural steadiness. Another aim was to quantify the association between visual contribution to achieve postural steadiness and a clinical balance scale. Individuals with SCI (n = 15) and able-bodied controls (n = 14) performed quasi-static stance, with eyes open or closed, on force plates for two 45 s trials. Measurements of the centre of pressure (COP) included the mean value of the root mean square (RMS), mean COP velocity (MV) and COP sway area (SA). Individuals with SCI were also evaluated with the Mini-Balance Evaluation Systems Test (Mini BESTest), a clinical outcome measure of postural steadiness. Individuals with SCI were significantly less stable than able-bodied controls in both conditions. The Romberg ratios (eyes open/eyes closed) for COP MV and SA were significantly higher for individuals with SCI, indicating a higher contribution of visual inputs for postural steadiness in that population. Romberg ratios for RMS and SA were significantly associated with the Mini-BESTest. This study highlights the contribution of visual inputs in individuals with SCI when maintaining quasi-static standing posture. Copyright © 2012 Elsevier B.V. All rights reserved.
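
    The COP measures named above are straightforward to compute from a force-plate trace. A sketch follows; note that conventions vary between labs, and since this study reports its ratios as eyes open/eyes closed, the ratio orientation below (the common eyes-closed over eyes-open form) is an assumption:

```python
import numpy as np

def cop_measures(cop_xy, fs):
    """RMS distance and mean velocity of a centre-of-pressure trace.

    cop_xy: (N, 2) array of COP positions; fs: sampling rate in Hz.
    """
    centered = cop_xy - cop_xy.mean(axis=0)
    rms = np.sqrt((centered ** 2).sum(axis=1).mean())
    path = np.sqrt((np.diff(cop_xy, axis=0) ** 2).sum(axis=1)).sum()
    mean_velocity = path * fs / (len(cop_xy) - 1)  # path length / duration
    return rms, mean_velocity

def romberg_ratio(measure_eyes_closed, measure_eyes_open):
    """Commonly defined as eyes-closed over eyes-open; values above 1
    suggest greater reliance on visual input for postural steadiness."""
    return measure_eyes_closed / measure_eyes_open
```

    The sway area (SA) measure additionally requires fitting an ellipse or convex hull to the centered trace and is omitted here.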

  2. The effect of visual-vestibulosomatosensory conflict induced by virtual reality on postural stability in humans.

    PubMed

    Nishiike, Suetaka; Okazaki, Suzuyo; Watanabe, Hiroshi; Akizuki, Hironori; Imai, Takao; Uno, Atsuhiko; Kitahara, Tadashi; Horii, Arata; Takeda, Noriaki; Inohara, Hidenori

    2013-01-01

    In this study, we examined the effects of sensory inputs of visual-vestibulosomatosensory conflict induced by virtual reality (VR) on subjective dizziness, posture stability and visual dependency on postural control in humans. Eleven healthy young volunteers were immersed in two different VR conditions. In the control condition, subjects walked voluntarily with the background images of interactive computer graphics proportionally synchronized to their walking pace. In the visual-vestibulosomatosensory conflict condition, subjects kept still, but the background images that subjects experienced in the control condition were presented. The scores of both Graybiel's and Hamilton's criteria, postural instability and Romberg ratio were measured before and after the two conditions. After immersion in the conflict condition, both subjective dizziness and objective postural instability were significantly increased, and Romberg ratio, an index of the visual dependency on postural control, was slightly decreased. These findings suggest that sensory inputs of visual-vestibulosomatosensory conflict induced by VR induced motion sickness, resulting in subjective dizziness and postural instability. They also suggest that adaptation to the conflict condition decreases the contribution of visual inputs to postural control with re-weighing of vestibulosomatosensory inputs. VR may be used as a rehabilitation tool for dizzy patients by its ability to induce sensory re-weighing of postural control.

  3. SNP ID-info: SNP ID searching and visualization platform.

    PubMed

    Yang, Cheng-Hong; Chuang, Li-Yeh; Cheng, Yu-Huei; Wen, Cheng-Hao; Chang, Phei-Lang; Chang, Hsueh-Wei

    2008-09-01

    Many association studies report relationships between single nucleotide polymorphisms (SNPs), diseases and cancers without, however, giving a SNP ID. Here, we developed the SNP ID-info freeware to provide SNP IDs from input genetic and physical genome information. The program provides an "SNP-ePCR" function to generate the full sequence from primer and template inputs. In "SNPosition," the sequence from SNP-ePCR or direct input is matched against SNP IDs from the SNP fasta sequence. The "SNP search" and "SNP fasta" functions accept SNP queries by cytogenetic band, contig position, and keyword. Finally, the SNP ID neighboring environment for the inputs is fully visualized in order of contig position and marked with SNP and flanking hits. The SNP identification problems inherent in NCBI SNP BLAST are also avoided. In conclusion, SNP ID-info provides a visualized SNP ID environment for multiple inputs and assists systematic SNP association studies. The server and user manual are available at http://bio.kuas.edu.tw/snpid-info.

  4. Orientation selectivity and the functional clustering of synaptic inputs in primary visual cortex

    PubMed Central

    Wilson, Daniel E.; Whitney, David E.; Scholl, Benjamin; Fitzpatrick, David

    2016-01-01

    The majority of neurons in primary visual cortex are tuned for stimulus orientation, but the factors that account for the range of orientation selectivities exhibited by cortical neurons remain unclear. To address this issue, we used in vivo 2-photon calcium imaging to characterize the orientation tuning and spatial arrangement of synaptic inputs to the dendritic spines of individual pyramidal neurons in layer 2/3 of ferret visual cortex. The summed synaptic input to individual neurons reliably predicted the neuron’s orientation preference, but did not account for differences in orientation selectivity among neurons. These differences reflected a robust input-output nonlinearity that could not be explained by spike threshold alone, and was strongly correlated with the spatial clustering of co-tuned synaptic inputs within the dendritic field. Dendritic branches with more co-tuned synaptic clusters exhibited greater rates of local dendritic calcium events supporting a prominent role for functional clustering of synaptic inputs in dendritic nonlinearities that shape orientation selectivity. PMID:27294510

  5. Visualization-by-Sketching: An Artist's Interface for Creating Multivariate Time-Varying Data Visualizations.

    PubMed

    Schroeder, David; Keefe, Daniel F

    2016-01-01

    We present Visualization-by-Sketching, a direct-manipulation user interface for designing new data visualizations. The goals are twofold: First, make the process of creating real, animated, data-driven visualizations of complex information more accessible to artists, graphic designers, and other visual experts with traditional, non-technical training. Second, support and enhance the role of human creativity in visualization design, enabling visual experimentation and workflows similar to what is possible with traditional artistic media. The approach is to conceive of visualization design as a combination of processes that are already closely linked with visual creativity: sketching, digital painting, image editing, and reacting to exemplars. Rather than studying and tweaking low-level algorithms and their parameters, designers create new visualizations by painting directly on top of a digital data canvas, sketching data glyphs, and arranging and blending together multiple layers of animated 2D graphics. This requires new algorithms and techniques to interpret painterly user input relative to data "under" the canvas, balance artistic freedom with the need to produce accurate data visualizations, and interactively explore large (e.g., terabyte-sized) multivariate datasets. Results demonstrate a variety of multivariate data visualization techniques can be rapidly recreated using the interface. More importantly, results and feedback from artists support the potential for interfaces in this style to attract new, creative users to the challenging task of designing more effective data visualizations and to help these users stay "in the creative zone" as they work.

  6. Perceptual grouping enhances visual plasticity.

    PubMed

    Mastropasqua, Tommaso; Turatto, Massimo

    2013-01-01

Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity.

  7. The effect of early visual deprivation on the neural bases of multisensory processing.

    PubMed

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2015-06-01

Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing.

  8. Predictive Measures of Locomotor Performance on an Unstable Walking Surface

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Caldwell, E. E.; Batson, C. D.; De Dios, Y. E.; Gadd, N. E.; Goel, R.; Wood, S. J.; Cohen, H. S.; hide

    2016-01-01

Locomotion requires integration of visual, vestibular, and somatosensory information to produce the appropriate motor output to control movement. The degree to which these sensory inputs are weighted and reorganized in discordant sensory environments varies by individual and may be predictive of the ability to adapt to novel environments. The goals of this project are to: 1) develop a set of predictive measures capable of identifying individual differences in sensorimotor adaptability, and 2) use this information to inform the design of training countermeasures that enhance the ability of astronauts to adapt to gravitational transitions, improving balance and locomotor performance after a Mars landing and enhancing egress capability after a landing on Earth.

  9. Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.

    PubMed

    Reena Benjamin, J; Jayasree, T

    2018-02-01

In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image which is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information, to preserve the edges and to enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift invariant wavelet transforms is proposed in this paper. PCA in spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift invariant properties brings out more directional and phase details of the image. The significance of the maximum fusion rule applied in the dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain of two different modalities (MRI and CT) are collected from the whole brain atlas data distributed by Harvard University. Both MRI and CT images are fused using the cascaded PCA and shift invariant wavelet transform method. The proposed method is evaluated based on three key factors: structure preservation, edge preservation, and contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations. In this paper, a complex wavelet-based image fusion method has been discussed. The experimental results demonstrate that the proposed method enhances the directional features as well as fine edge details. It also reduces redundant details, artifacts, and distortions.
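The PCA stage of such a fusion pipeline lends itself to a short sketch. The following illustrates only the eigenvector-weighting idea, assuming two registered grayscale inputs; the shift-invariant wavelet stage and the maximum fusion rule of the actual method are omitted.

```python
import numpy as np

def pca_fuse(img_a, img_b):
    """Fuse two registered grayscale images with PCA weights: the leading
    eigenvector of the 2x2 covariance of the pixel vectors gives the
    mixing weights, normalized to sum to 1."""
    a = np.asarray(img_a, float).ravel()
    b = np.asarray(img_b, float).ravel()
    cov = np.cov(np.stack([a, b]))
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    v = np.abs(vecs[:, -1])            # leading eigenvector
    w = v / v.sum()
    return w[0] * np.asarray(img_a, float) + w[1] * np.asarray(img_b, float)
```

Because the weights sum to one, fusing an image with itself returns the image unchanged; with dissimilar inputs, the modality carrying more variance receives the larger weight.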

  10. Neuronal ensemble for visual working memory via interplay of slow and fast oscillations.

    PubMed

    Mizuhara, Hiroaki; Yamaguchi, Yoko

    2011-05-01

The current focus of studies on neural entities for memory maintenance is on the interplay between fast neuronal oscillations in the gamma band and slow oscillations in the theta or delta band. The hierarchical coupling of slow and fast oscillations is crucial for the rehearsal of sensory inputs for short-term storage, as well as for binding sensory inputs that are represented in spatially segregated cortical areas. However, no experimental evidence for the binding of spatially segregated information has yet been presented for memory maintenance in humans. In the present study, we actively manipulated memory maintenance performance with an attentional blink procedure during human scalp electroencephalography (EEG) recordings and identified that slow oscillations are enhanced when memory maintenance is successful. These slow oscillations accompanied fast oscillations in the gamma frequency range that appeared at spatially segregated scalp sites. The amplitude of the gamma oscillation at these scalp sites was simultaneously enhanced at an EEG phase of the slow oscillation. Successful memory maintenance appears to be achieved by a rehearsal of sensory inputs together with a coordination of distributed fast oscillations at a preferred timing of the slow oscillations.
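The slow-fast interplay described above is commonly quantified with a phase-amplitude coupling measure. Below is a minimal mean-vector-length sketch in the style of Canolty's modulation index, applied to synthetic data; it is not the authors' pipeline, and the phase and amplitude series are assumed to be precomputed (e.g., from band-pass filtering and a Hilbert transform).

```python
import numpy as np

def phase_amplitude_coupling(slow_phase, fast_amplitude):
    """Mean-vector-length estimate of coupling between the phase of a
    slow oscillation (radians) and the amplitude envelope of a fast
    oscillation, normalized by mean amplitude (0 = no coupling)."""
    a = np.asarray(fast_amplitude, float)
    phi = np.asarray(slow_phase, float)
    return np.abs(np.mean(a * np.exp(1j * phi))) / np.mean(a)

# Synthetic demo: gamma-band amplitude locked to a 6 Hz slow rhythm.
t = np.arange(0.0, 10.0, 1e-3)
theta_phase = 2 * np.pi * 6 * t            # phase of the slow oscillation
coupled = 1 + 0.8 * np.cos(theta_phase)    # amplitude peaks at phase 0
flat = np.ones_like(t)                     # no phase-locked modulation
```

On these traces the coupled envelope yields a value near 0.4, while the unmodulated envelope yields a value near zero.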

  11. Target dependence of orientation and direction selectivity of corticocortical projection neurons in the mouse V1

    PubMed Central

    Matsui, Teppei; Ohki, Kenichi

    2013-01-01

Higher order visual areas that receive input from the primary visual cortex (V1) are specialized for the processing of distinct features of visual information. However, it is still incompletely understood how this functional specialization is acquired. Here we used in vivo two-photon calcium imaging in the mouse visual cortex to investigate whether this functional distinction exists as early as at the level of projections from V1 to two higher order visual areas, AL and LM. Specifically, we examined whether the sharpness of orientation and direction selectivity and the optimal spatial and temporal frequency of projection neurons from V1 to higher order visual areas match those of the target areas. We found that the V1 inputs to higher order visual areas were indeed functionally distinct: AL preferentially received inputs from V1 that were more orientation and direction selective and tuned for lower spatial frequency compared to the projection from V1 to LM, consistent with functional differences between AL and LM. The present findings suggest that selective projections from V1 to higher order visual areas initiate parallel processing of sensory information in the visual cortical network. PMID:24068987

  12. Sharpening of Hierarchical Visual Feature Representations of Blurred Images.

    PubMed

    Abdelhack, Mohamed; Kamitani, Yukiyasu

    2018-01-01

    The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.

  13. Using NJOY to Create MCNP ACE Files and Visualize Nuclear Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahler, Albert Comstock

    We provide lecture materials that describe the input requirements to create various MCNP ACE files (Fast, Thermal, Dosimetry, Photo-nuclear and Photo-atomic) with the NJOY Nuclear Data Processing code system. Input instructions to visualize nuclear data with NJOY are also provided.

  14. Postural response to predictable and nonpredictable visual flow in children and adults.

    PubMed

    Schmuckler, Mark A

    2017-11-01

Children's (3-5 years) and adults' postural reactions to different conditions of visual flow information varying in frequency content were examined using a moving room apparatus. Both groups experienced four conditions of visual input: low-frequency (0.20 Hz) visual oscillations, high-frequency (0.60 Hz) oscillations, multifrequency nonpredictable visual input, and no imposed visual information. Analyses of the frequency content of anterior-posterior (AP) sway revealed that postural reactions to the single-frequency conditions replicated previous findings; children were responsive to low- and high-frequency oscillations, whereas adults were responsive to low-frequency information. Extending previous work, AP sway in response to the nonpredictable condition revealed that both groups were responsive to the different components contained in the multifrequency visual information, although adults retained their frequency selectivity to low-frequency versus high-frequency content. These findings are discussed in relation to work examining feedback versus feedforward control of posture, and the reweighting of sensory inputs for postural control, as a function of development and task context.
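The frequency-domain sway analysis can be sketched with a discrete Fourier transform: the power of AP sway at the driving frequency indexes responsiveness to that visual oscillation. This is a minimal illustration assuming an evenly sampled sway trace, not the study's actual processing.

```python
import numpy as np

def sway_power_at(freq_hz, sway, fs):
    """Power of a sway trace at a given driving frequency, computed
    from the one-sided DFT of the mean-removed signal."""
    n = len(sway)
    spec = np.abs(np.fft.rfft(sway - np.mean(sway))) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spec[np.argmin(np.abs(freqs - freq_hz))]
```

For a trace driven at 0.20 Hz, power at 0.20 Hz dominates power at 0.60 Hz, and vice versa, which is the signature used to separate responses to the two oscillation conditions.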

  15. Visual influence on path integration in darkness indicates a multimodal representation of large-scale space

    PubMed Central

    Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil

    2011-01-01

    Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934
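The single-multimodal-representation account can be caricatured as a weighted average of the visually signalled turn and the interoceptive turn. The linear form and the weight below are assumptions for illustration only, not the parameters of the authors' quantitative model.

```python
def combined_rotation_estimate(physical_turn_deg, visual_gain, w_visual=0.5):
    """Hypothetical multimodal estimate of a turn: a weighted average of
    the visually signalled rotation (physical turn scaled by the virtual
    rotation gain) and the interoceptive rotation (the physical turn)."""
    w_intero = 1.0 - w_visual
    return w_visual * visual_gain * physical_turn_deg + w_intero * physical_turn_deg
```

With a gain of 1 the estimate equals the physical turn; with an adapted gain of 1.5 and equal weights, a 90° physical turn is represented as 112.5°, so homing performed later in darkness inherits the visual manipulation, as the model predicts.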

  16. Multisensory integration across the senses in young and old adults

    PubMed Central

    Mahoney, Jeannette R.; Li, Po Ching Clara; Oh-Park, Mooyeon; Verghese, Joe; Holtzer, Roee

    2011-01-01

Stimuli are processed concurrently and across multiple sensory inputs. Here we directly compared the effect of multisensory integration (MSI) on reaction time across three paired sensory inputs in eighteen young (M=19.17 yrs) and eighteen old (M=76.44 yrs) individuals. Participants were determined to be non-demented and without any medical or psychiatric conditions that would affect their performance. Participants responded to randomly presented unisensory (auditory, visual, somatosensory) stimuli and three paired sensory inputs consisting of auditory-somatosensory (AS), auditory-visual (AV), and visual-somatosensory (VS) stimuli. Results revealed that reaction time (RT) to all multisensory pairings was significantly faster than those elicited to the constituent unisensory conditions across age groups; findings that could not be accounted for by simple probability summation. Both young and old participants responded the fastest to multisensory pairings containing somatosensory input. Compared to younger adults, older adults demonstrated a significantly greater RT benefit when processing concurrent VS information. In terms of co-activation, older adults demonstrated a significant increase in the magnitude of visual-somatosensory co-activation (i.e., multisensory integration), while younger adults demonstrated a significant increase in the magnitude of auditory-visual and auditory-somatosensory co-activation. This study provides the first evidence in support of the facilitative effect of pairing somatosensory with visual stimuli in older adults. PMID:22024545
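Whether multisensory facilitation exceeds simple probability summation is typically tested with Miller's race-model inequality: co-activation is inferred where the multisensory RT distribution exceeds the sum of the unisensory distributions. A minimal sketch with hypothetical reaction-time samples (not the study's code):

```python
import numpy as np

def race_model_violation(rt_multi, rt_a, rt_b, t_grid):
    """Maximum violation of Miller's race-model inequality: positive
    where the multisensory RT CDF exceeds the sum of the unisensory
    CDFs (capped at 1), indicating co-activation."""
    def cdf(rts, t):
        r = np.sort(np.asarray(rts, float))
        return np.searchsorted(r, t, side="right") / len(r)
    g = np.asarray(t_grid, float)
    bound = np.minimum(cdf(rt_a, g) + cdf(rt_b, g), 1.0)
    return np.max(cdf(rt_multi, g) - bound)
```

Multisensory RTs faster than either unisensory distribution predicts yield a positive violation; multisensory RTs identical to one unisensory condition never exceed the bound.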

  17. Handwriting generates variable visual input to facilitate symbol learning

    PubMed Central

    Li, Julia X.; James, Karin H.

    2015-01-01

Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing two hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then change neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: three involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and three involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the six conditions (N=72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions. PMID:26726913

  18. Postural Ataxia in Cerebellar Downbeat Nystagmus: Its Relation to Visual, Proprioceptive and Vestibular Signals and Cerebellar Atrophy.

    PubMed

    Helmchen, Christoph; Kirchhoff, Jan-Birger; Göttlich, Martin; Sprenger, Andreas

    2017-01-01

The cerebellum integrates proprioceptive, vestibular and visual signals for postural control. Cerebellar patients with downbeat nystagmus (DBN) complain of unsteadiness of stance and gait as well as blurred vision and oscillopsia. The aim of this study was to elucidate the differential role of visual input, gaze eccentricity, vestibular and proprioceptive input on postural stability in a large cohort of cerebellar patients with DBN, in comparison to healthy age-matched control subjects. Oculomotor (nystagmus, smooth pursuit eye movements) and postural (postural sway speed) parameters were recorded and related to each other and to volumetric changes of the cerebellum (voxel-based morphometry, SPM). Twenty-seven patients showed larger postural instability in all experimental conditions. Postural sway increased with nystagmus in the eyes closed condition but not with the eyes open. Romberg's ratio remained stable and was not different from healthy controls. Postural sway did not change with gaze position or graviceptive input. It increased with attenuated proprioceptive input and on tandem stance in both groups, but Romberg's ratio also did not differ. Cerebellar atrophy (vermal lobule VI, VIII) correlated with the severity of impaired smooth pursuit eye movements of DBN patients. Postural ataxia of cerebellar patients with DBN cannot be explained by impaired visual feedback. Despite oscillopsia, visual feedback control of posture seems to be preserved, as postural sway was greatest under visual deprivation. The increase in postural ataxia is neither related to modulations of single components characterizing nystagmus nor to deprivation of single sensory (visual, proprioceptive) inputs usually stabilizing stance. Re-weighting of multisensory signals and/or inappropriate cerebellar motor commands might account for this postural ataxia.

  20. Fine and distributed subcellular retinotopy of excitatory inputs to the dendritic tree of a collision-detecting neuron

    PubMed Central

    Zhu, Ying

    2016-01-01

    Individual neurons in several sensory systems receive synaptic inputs organized according to subcellular topographic maps, yet the fine structure of this topographic organization and its relation to dendritic morphology have not been studied in detail. Subcellular topography is expected to play a role in dendritic integration, particularly when dendrites are extended and active. The lobula giant movement detector (LGMD) neuron in the locust visual system is known to receive topographic excitatory inputs on part of its dendritic tree. The LGMD responds preferentially to objects approaching on a collision course and is thought to implement several interesting dendritic computations. To study the fine retinotopic mapping of visual inputs onto the excitatory dendrites of the LGMD, we designed a custom microscope allowing visual stimulation at the native sampling resolution of the locust compound eye while simultaneously performing two-photon calcium imaging on excitatory dendrites. We show that the LGMD receives a distributed, fine retinotopic projection from the eye facets and that adjacent facets activate overlapping portions of the same dendritic branches. We also demonstrate that adjacent retinal inputs most likely make independent synapses on the excitatory dendrites of the LGMD. Finally, we show that the fine topographic mapping can be studied using dynamic visual stimuli. Our results reveal the detailed structure of the dendritic input originating from individual facets on the eye and their relation to that of adjacent facets. The mapping of visual space onto the LGMD's dendrites is expected to have implications for dendritic computation. PMID:27009157

  1. Virtualized Traffic: reconstructing traffic flows from discrete spatiotemporal data.

    PubMed

    Sewall, Jason; van den Berg, Jur; Lin, Ming C; Manocha, Dinesh

    2011-01-01

We present a novel concept, Virtualized Traffic, to reconstruct and visualize continuous traffic flows from discrete spatiotemporal data provided by traffic sensors or generated artificially to enhance a sense of immersion in a dynamic virtual world. Given the positions of each car at two recorded locations on a highway and the corresponding time instances, our approach can reconstruct the traffic flows (i.e., the dynamic motions of multiple cars over time) between the two locations along the highway for immersive visualization of virtual cities or other environments. Our algorithm is applicable to high-density traffic on highways with an arbitrary number of lanes and takes into account the geometric, kinematic, and dynamic constraints on the cars. Our method reconstructs the car motion that automatically minimizes the number of lane changes, respects safety distance to other cars, and computes the acceleration necessary to obtain a smooth traffic flow subject to the given constraints. Furthermore, our framework can process a continuous stream of input data in real time, enabling the users to view virtualized traffic events in a virtual world as they occur. We demonstrate our reconstruction technique with both synthetic and real-world input.
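A heavily simplified stand-in for such reconstruction is interpolating each car's longitudinal position between the two sensor recordings while matching boundary positions and speeds. The cubic Hermite profile below ignores the lane-change minimization, safety distances, and acceleration constraints that the actual method enforces; entry and exit speeds are assumed known.

```python
def hermite_trajectory(x0, v0, x1, v1, t0, t1):
    """Return a cubic Hermite position profile x(t) matching position
    and speed at two sensor recordings (t0, x0, v0) and (t1, x1, v1)."""
    T = t1 - t0
    def x(t):
        s = (t - t0) / T                      # normalized time in [0, 1]
        h00 = 2 * s**3 - 3 * s**2 + 1         # Hermite basis functions
        h10 = s**3 - 2 * s**2 + s
        h01 = -2 * s**3 + 3 * s**2
        h11 = s**3 - s**2
        return h00 * x0 + h10 * T * v0 + h01 * x1 + h11 * T * v1
    return x
```

The resulting profile passes exactly through both recorded positions with the specified boundary speeds, yielding a smooth (continuously accelerating) motion between sensors.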

  2. Alpha oscillations correlate with the successful inhibition of unattended stimuli.

    PubMed

    Händel, Barbara F; Haarmeier, Thomas; Jensen, Ole

    2011-09-01

Because the human visual system is continually being bombarded with inputs, it is necessary to have effective mechanisms for filtering out irrelevant information. This is partly achieved by the allocation of attention, allowing the visual system to process relevant input while blocking out irrelevant input. What is the physiological substrate of attentional allocation? It has been proposed that alpha activity reflects functional inhibition. Here we asked if inhibition by alpha oscillations has behavioral consequences for suppressing the perception of unattended input. To this end, we investigated the influence of alpha activity on motion processing in two attentional conditions using magnetoencephalography. The visual stimuli used consisted of two random-dot kinematograms presented simultaneously to the left and right visual hemifields. Subjects were cued to covertly attend the left or right kinematogram. After 1.5 sec, a second cue tested whether subjects could report the direction of coherent motion in the attended (80%) or unattended hemifield (20%). Occipital alpha power was higher contralateral to the unattended side than to the attended side, thus suggesting inhibition of the unattended hemifield. Our key finding is that this alpha lateralization in the 20% invalidly cued trials did correlate with the perception of motion direction: Subjects with pronounced alpha lateralization were worse at detecting motion direction in the unattended hemifield. In contrast, lateralization did not correlate with visual discrimination in the attended visual hemifield. Our findings emphasize the suppressive nature of alpha oscillations and suggest that processing of inputs outside the field of attention is weakened by means of increased alpha activity.
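Alpha lateralization of this kind is usually summarized with a normalized power index; the exact normalization used in the study may differ from this common form.

```python
def alpha_lateralization_index(power_contra, power_ipsi):
    """Normalized alpha-power lateralization relative to the unattended
    hemifield: positive when alpha power contralateral to the unattended
    side exceeds the ipsilateral power (stronger inhibition), bounded
    in [-1, 1]."""
    return (power_contra - power_ipsi) / (power_contra + power_ipsi)
```

Subject-by-subject values of such an index can then be correlated with invalid-trial detection performance, as in the finding reported above.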

  3. Neural network system for purposeful behavior based on foveal visual preprocessor

    NASA Astrophysics Data System (ADS)

    Golovan, Alexander V.; Shevtsova, Natalia A.; Klepatch, Arkadi A.

    1996-10-01

A biologically plausible model of a system with adaptive behavior in an a priori unknown environment, resistant to impairment, has been developed. The system consists of input, learning, and output subsystems. The first subsystem classifies input patterns, presented as n-dimensional vectors, in accordance with an associative rule. The second, a neural network, determines adaptive responses of the system to input patterns. Arranged neural groups coding possible input patterns and appropriate output responses are formed during learning by means of negative reinforcement. The output subsystem maps neural network activity onto the system's behavior in the environment. The system was studied by computer simulation imitating collision-free motion of a mobile robot. After some learning period the system 'moves' along a road without collisions. It is shown that, despite impairment of some neural network elements, the system functions reliably after relearning. A foveal visual preprocessor model developed earlier has been tested as a visual input to the system.

  4. Closed head injury and perceptual processing in dual-task situations.

    PubMed

    Hein, G; Schubert, T; von Cramon, D Y

    2005-01-01

Using a classical psychological refractory period (PRP) paradigm, we investigated whether increased interference between dual-task input processes is one possible source of dual-task deficits in patients with closed-head injury (CHI). Patients and age-matched controls were asked to give speeded motor reactions to an auditory and a visual stimulus. The perceptual difficulty of the visual stimulus was manipulated by varying its intensity. The results of Experiment 1 showed that CHI patients suffer from increased interference between dual-task input processes, which is related to the salience of the visual stimulus. A second experiment indicated that this input interference may be specific to brain damage following CHI. It is not evident in other groups of neurological patients, such as Parkinson's disease patients. We conclude that the non-interfering processing of input stages in dual tasks requires cognitive control. A decline in the control of input processes should be considered as one source of dual-task deficits in CHI patients.

  5. Textual Enhancement of Input: Issues and Possibilities

    ERIC Educational Resources Information Center

    Han, ZhaoHong; Park, Eun Sung; Combs, Charles

    2008-01-01

    The input enhancement hypothesis proposed by Sharwood Smith (1991, 1993) has stimulated considerable research over the last 15 years. This article reviews the research on textual enhancement of input (TE), an area where the majority of input enhancement studies have aggregated. Methodological idiosyncrasies are the norm of this body of research.…

  6. Enhanced Line Integral Convolution with Flow Feature Detection

    NASA Technical Reports Server (NTRS)

    Lane, David; Okada, Arthur

    1996-01-01

    The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain. The method produces a flow texture image based on the input velocity field defined in the domain. Because of the nature of the algorithm, the texture image tends to be blurry. This sometimes makes it difficult to identify boundaries where flow separation and reattachments occur. We present techniques to enhance LIC texture images and use colored texture images to highlight flow separation and reattachment boundaries. Our techniques have been applied to several flow fields defined in 3D curvilinear multi-block grids and scientists have found the results to be very useful.
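The core LIC step, averaging a noise texture along short streamlines of the vector field, can be sketched in a few lines. This minimal version uses fixed-step Euler integration, nearest-neighbour sampling, and wraparound boundaries, and omits the enhancement and coloring techniques the abstract describes.

```python
import numpy as np

def lic(vx, vy, noise, length=10):
    """Minimal Line Integral Convolution: for each pixel, trace the
    streamline of (vx, vy) forward and backward and average the noise
    texture samples along it."""
    h, w = noise.shape
    out = np.zeros_like(noise, dtype=float)
    for i in range(h):
        for j in range(w):
            acc, cnt = 0.0, 0
            for sign in (1.0, -1.0):          # forward and backward trace
                y, x = float(i), float(j)
                for _ in range(length):
                    yi, xi = int(round(y)) % h, int(round(x)) % w
                    acc += noise[yi, xi]
                    cnt += 1
                    u, v = vx[yi, xi], vy[yi, xi]
                    n = np.hypot(u, v) or 1.0  # unit step along the field
                    x += sign * u / n
                    y += sign * v / n
            out[i, j] = acc / cnt
    return out
```

Because samples are averaged along streamlines, the output is smeared parallel to the flow and remains sharp across it, which is what reveals the overall flow pattern (and what the paper's post-processing then sharpens near separation and reattachment boundaries).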

  7. Correspondence between visual and electrical input filters of ON and OFF mouse retinal ganglion cells

    NASA Astrophysics Data System (ADS)

    Sekhar, S.; Jalligampala, A.; Zrenner, E.; Rathbun, D. L.

    2017-08-01

    Objective. Over the past two decades retinal prostheses have made major strides in restoring functional vision to patients blinded by diseases such as retinitis pigmentosa. Presently, implants use single pulses to activate the retina. Though this stimulation paradigm has proved beneficial to patients, an unresolved problem is the inability to selectively stimulate the on and off visual pathways. To this end our goal was to test, using white noise, voltage-controlled, cathodic, monophasic pulse stimulation, whether different retinal ganglion cell (RGC) types in the wild type retina have different electrical input filters. This is an important precursor to addressing pathway-selective stimulation. Approach. Using full-field visual flash and electrical and visual Gaussian noise stimulation, combined with the technique of spike-triggered averaging (STA), we calculate the electrical and visual input filters for different types of RGCs (classified as on, off or on-off based on their response to the flash stimuli). Main results. Examining the STAs, we found that the spiking activity of on cells during electrical stimulation correlates with a decrease in the voltage magnitude preceding a spike, while the spiking activity of off cells correlates with an increase in the voltage preceding a spike. No electrical preference was found for on-off cells. Comparing STAs of wild type and rd10 mice revealed narrower electrical STA deflections with shorter latencies in rd10. Significance. This study is the first comparison of visual cell types and their corresponding temporal electrical input filters in the retina. The altered input filters in degenerated rd10 retinas are consistent with photoreceptor stimulation underlying visual type-specific electrical STA shapes in wild type retina. 
It is therefore conceivable that existing implants could target partially degenerated photoreceptors that have only lost their outer segments, but not somas, to selectively activate the on and off visual pathways.
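
    The spike-triggered averaging (STA) technique used here reduces to averaging the stimulus history preceding each spike. A minimal sketch, assuming a 1-D stimulus sampled on the same grid as a binary spike train (the names and representation are illustrative, not the authors' analysis code):

```python
import numpy as np

def spike_triggered_average(stimulus, spikes, window=30):
    """Average the stimulus segments immediately preceding each spike.

    stimulus: 1-D array of stimulus values (one per time bin)
    spikes:   1-D binary array of the same length (1 = spike)
    window:   number of samples of history to average
    """
    idx = np.flatnonzero(spikes)
    idx = idx[idx >= window]              # need a full window of history
    segments = np.stack([stimulus[i - window:i] for i in idx])
    return segments.mean(axis=0)
```

    Applied separately to the visual and the electrical noise stimuli, this yields the two kinds of input filters compared in the study.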

  8. Dendro-dendritic interactions between motion-sensitive large-field neurons in the fly.

    PubMed

    Haag, Juergen; Borst, Alexander

    2002-04-15

    For visual course control, flies rely on a set of motion-sensitive neurons called lobula plate tangential cells (LPTCs). Among these cells, the so-called CH (centrifugal horizontal) cells shape by their inhibitory action the receptive field properties of other LPTCs called FD (figure detection) cells specialized for figure-ground discrimination based on relative motion. Studying the ipsilateral input circuitry of CH cells by means of dual-electrode and combined electrical-optical recordings, we find that CH cells receive graded input from HS (large-field horizontal system) cells via dendro-dendritic electrical synapses. This particular wiring scheme leads to a spatial blur of the motion image on the CH cell dendrite, and, after inhibiting FD cells, to an enhancement of motion contrast. This could be crucial for enabling FD cells to discriminate object from self motion.

  9. Perceptual Grouping Enhances Visual Plasticity

    PubMed Central

    Mastropasqua, Tommaso; Turatto, Massimo

    2013-01-01

    Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity. PMID:23301100

  10. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    PubMed Central

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew

    2008-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755

  11. On the Visual Input Driving Human Smooth-Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean

    1996-01-01

    Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.

  12. How does visual manipulation affect obstacle avoidance strategies used by athletes?

    PubMed

    Bijman, M P; Fisher, J J; Vallis, L A

    2016-01-01

    Research examining our ability to avoid obstacles in our path has stressed the importance of visual input. The aim of this study was to determine if athletes playing varsity-level field sports, who rely on visual input to guide motor behaviour, are better able to guide their foot over obstacles than recreational individuals. While wearing kinematic markers, eight varsity athletes and eight age-matched controls (aged 18-25) walked along a walkway and stepped over stationary obstacles (180° motion arc). Visual input was manipulated using PLATO visual goggles two or three steps before obstacle crossing and compared to trials where vision was given throughout. A main effect between groups for peak trail toe elevation was shown, with greater values generated by the controls for all crossing conditions during full vision trials only. This may be interpreted as athletes not perceiving this obstacle as an increased threat to their postural stability. Collectively, findings suggest the athletic group is able to transfer their abilities to non-specific conditions during full vision trials; however, varsity-level athletes were equally reliant on visual cues for these visually guided stepping tasks, as their performance was similar to the controls when vision was removed.

  13. Altered transfer of visual motion information to parietal association cortex in untreated first-episode psychosis: Implications for pursuit eye tracking

    PubMed Central

    Lencer, Rebekka; Keedy, Sarah K.; Reilly, James L.; McDonough, Bruce E.; Harris, Margret S. H.; Sprenger, Andreas; Sweeney, John A.

    2011-01-01

    Visual motion processing and its use for pursuit eye movement control represent a valuable model for studying the use of sensory input for action planning. In psychotic disorders, alterations of visual motion perception have been suggested to cause pursuit eye tracking deficits. We evaluated this system in functional neuroimaging studies of untreated first-episode schizophrenia (N=24), psychotic bipolar disorder patients (N=13) and healthy controls (N=20). During a passive visual motion processing task, both patient groups showed reduced activation in the posterior parietal projection fields of motion-sensitive extrastriate area V5, but not in V5 itself. This suggests reduced bottom-up transfer of visual motion information from extrastriate cortex to perceptual systems in parietal association cortex. During active pursuit, activation was enhanced in anterior intraparietal sulcus and insula in both patient groups, and in dorsolateral prefrontal cortex and dorsomedial thalamus in schizophrenia patients. This may result from increased demands on sensorimotor systems for pursuit control due to the limited availability of perceptual motion information about target speed and tracking error. Visual motion information transfer deficits to higher-level association cortex may contribute to well-established pursuit tracking abnormalities, and perhaps to a wider array of alterations in perception and action planning in psychotic disorders. PMID:21873035

  14. Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis.

    PubMed

    Stein, Manuel; Janetzko, Halldor; Lamprecht, Andreas; Breitkreutz, Thorsten; Zimmermann, Philipp; Goldlucke, Bastian; Schreck, Tobias; Andrienko, Gennady; Grossniklaus, Michael; Keim, Daniel A

    2018-01-01

    Analysts in professional team sport regularly perform analysis to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identifying weaknesses of opposing teams or assessing the performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Also, analysts can rely on techniques from Information Visualization to depict, e.g., player or ball trajectories. However, video analysis is typically a time-consuming process, where the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is no longer directly linked to the observed movement context. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualization of underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event and player analysis in the case of soccer analysis. Our system seamlessly integrates video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.

  15. Visual attention: Linking prefrontal sources to neuronal and behavioral correlates.

    PubMed

    Clark, Kelsey; Squire, Ryan Fox; Merrikhi, Yaser; Noudoost, Behrad

    2015-09-01

    Attention is a means of flexibly selecting and enhancing a subset of sensory input based on the current behavioral goals. Numerous signatures of attention have been identified throughout the brain, and now experimenters are seeking to determine which of these signatures are causally related to the behavioral benefits of attention, and the source of these modulations within the brain. Here, we review the neural signatures of attention throughout the brain, their theoretical benefits for visual processing, and their experimental correlations with behavioral performance. We discuss the importance of measuring cue benefits as a way to distinguish between impairments on an attention task, which may instead be visual or motor impairments, and true attentional deficits. We examine evidence for various areas proposed as sources of attentional modulation within the brain, with a focus on the prefrontal cortex. Lastly, we look at studies that aim to link sources of attention to its neuronal signatures elsewhere in the brain. Copyright © 2015. Published by Elsevier Ltd.

  16. Light reintroduction after dark exposure reactivates plasticity in adults via perisynaptic activation of MMP-9

    PubMed Central

    2017-01-01

    The sensitivity of ocular dominance to regulation by monocular deprivation is the canonical model of plasticity confined to a critical period. However, we have previously shown that visual deprivation through dark exposure (DE) reactivates critical period plasticity in adults. Previous work assumed that the elimination of visual input was sufficient to enhance plasticity in the adult mouse visual cortex. In contrast, here we show that light reintroduction (LRx) after DE is responsible for the reactivation of plasticity. LRx triggers degradation of the ECM, which is blocked by pharmacological inhibition or genetic ablation of matrix metalloproteinase-9 (MMP-9). LRx induces an increase in MMP-9 activity that is perisynaptic and enriched at thalamo-cortical synapses. The reactivation of plasticity by LRx is absent in Mmp9−/− mice, and is rescued by hyaluronidase, an enzyme that degrades core ECM components. Thus, the LRx-induced increase in MMP-9 removes constraints on structural and functional plasticity in the mature cortex. PMID:28875930

  17. Visualization of Pulsar Search Data

    NASA Astrophysics Data System (ADS)

    Foster, R. S.; Wolszczan, A.

    1993-05-01

    The search for periodic signals from rotating neutron stars or pulsars has been a computationally taxing problem for astronomers for more than twenty-five years. Over this time interval, increases in computational capability have allowed ever more sensitive searches, covering a larger parameter space. The volume of input data and the general presence of radio frequency interference typically produce numerous spurious signals. Visualization of the search output and enhanced real-time processing of significant candidate events allow the pulsar searcher to optimally process the data and search for new radio pulsars. The pulsar search algorithm and visualization system presented in this paper currently run on serial RISC-based workstations, a traditional vector-based supercomputer, and a massively parallel computer. The serial software algorithm and its modifications for massively parallel computing are described. Four successive searches for millisecond-period radio pulsars using the Arecibo telescope at 430 MHz have resulted in the successful detection of new long-period and millisecond-period radio pulsars.
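
    The core periodicity search in such pipelines is typically an FFT power-spectrum search. The sketch below is illustrative only (it omits dedispersion, harmonic summing, and interference rejection, and is not the Arecibo pipeline):

```python
import numpy as np

def find_periodic_candidates(timeseries, dt, n_candidates=3):
    """Return the strongest candidate spin frequencies (Hz) and their
    powers from the FFT power spectrum of a (dedispersed) time series."""
    power = np.abs(np.fft.rfft(timeseries - np.mean(timeseries))) ** 2
    freqs = np.fft.rfftfreq(len(timeseries), d=dt)
    order = np.argsort(power[1:])[::-1] + 1     # rank bins, skipping DC
    return [(float(freqs[k]), float(power[k])) for k in order[:n_candidates]]
```

    In a production search the spectrum would also be searched over trial dispersion measures and summed over harmonics before candidates are passed to the visualization stage.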

  18. Neuronal integration in visual cortex elevates face category tuning to conscious face perception

    PubMed Central

    Fahrenfort, Johannes J.; Snijders, Tineke M.; Heinen, Klaartje; van Gaal, Simon; Scholte, H. Steven; Lamme, Victor A. F.

    2012-01-01

    The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning. PMID:23236162

  19. Visual and Auditory Input in Second-Language Speech Processing

    ERIC Educational Resources Information Center

    Hardison, Debra M.

    2010-01-01

    The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on…

  20. Multisensory connections of monkey auditory cerebral cortex

    PubMed Central

    Smiley, John F.; Falchier, Arnaud

    2009-01-01

    Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources. PMID:19619628

  1. The origins of metamodality in visual object area LO: Bodily topographical biases and increased functional connectivity to S1

    PubMed Central

    Tal, Zohar; Geva, Ran; Amedi, Amir

    2016-01-01

    Recent evidence from blind participants suggests that visual areas are task-oriented and independent of the sensory modality of their input, rather than sensory-specific to vision. Specifically, visual areas are thought to retain their functional selectivity when using non-visual inputs (touch or sound) even without having any visual experience. However, this theory is still controversial, since it is not clear whether this also characterizes the sighted brain, and whether the reported results in the sighted reflect basic, fundamental amodal processes or are largely an epiphenomenon. In the current study, we addressed these questions using a series of fMRI experiments aimed to explore visual cortex responses to passive touch on various body parts and the coupling between the parietal and visual cortices as manifested by functional connectivity. We show that passive touch robustly activated the object-selective parts of the lateral occipital (LO) cortex while deactivating almost all other occipital retinotopic areas. Furthermore, passive touch responses in the visual cortex were specific to hand and upper trunk stimulations. Psychophysiological interaction (PPI) analysis suggests that LO is functionally connected to the hand area in the primary somatosensory homunculus (S1) during hand and shoulder stimulations, but not to any of the other body parts. We suggest that LO is a fundamental hub that serves as a node between visual-object selective areas and S1 hand representation, probably due to the critical evolutionary role of touch in object recognition and manipulation. These results might also point to a more general principle suggesting that recruitment or deactivation of the visual cortex by other sensory input depends on the ecological relevance of the information conveyed by this input to the task/computations carried out by each area or network.
This is likely to rely on the unique and differential pattern of connectivity for each visual area with the rest of the brain. PMID:26673114

  2. Adaptation to sensory input tunes visual cortex to criticality

    NASA Astrophysics Data System (ADS)

    Shew, Woodrow L.; Clawson, Wesley P.; Pobst, Jeff; Karimipanah, Yahya; Wright, Nathaniel C.; Wessel, Ralf

    2015-08-01

    A long-standing hypothesis at the interface of physics and neuroscience is that neural networks self-organize to the critical point of a phase transition, thereby optimizing aspects of sensory information processing. This idea is partially supported by strong evidence for critical dynamics observed in the cerebral cortex, but the impact of sensory input on these dynamics is largely unknown. Thus, the foundations of this hypothesis (the self-organization process and how it manifests during strong sensory input) remain unstudied experimentally. Here we show in visual cortex and in a computational model that strong sensory input initially elicits cortical network dynamics that are not critical, but adaptive changes in the network rapidly tune the system to criticality. This conclusion is based on observations of multifaceted scaling laws predicted to occur at criticality. Our findings establish sensory adaptation as a self-organizing mechanism that maintains criticality in visual cortex during sensory information processing.

  3. Anatomy and physiology of the afferent visual system.

    PubMed

    Prasad, Sashank; Galetta, Steven L

    2011-01-01

    The efficient organization of the human afferent visual system meets enormous computational challenges. Once visual information is received by the eye, the signal is relayed by the retina, optic nerve, chiasm, tracts, lateral geniculate nucleus, and optic radiations to the striate cortex and extrastriate association cortices for final visual processing. At each stage, the functional organization of these circuits is derived from their anatomical and structural relationships. In the retina, photoreceptors convert photons of light to an electrochemical signal that is relayed to retinal ganglion cells. Ganglion cell axons course through the optic nerve, and their partial decussation in the chiasm brings together corresponding inputs from each eye. Some inputs follow pathways to mediate pupil light reflexes and circadian rhythms. However, the majority of inputs arrive at the lateral geniculate nucleus, which relays visual information via second-order neurons that course through the optic radiations to arrive in striate cortex. Feedback mechanisms from higher cortical areas shape the neuronal responses in early visual areas, supporting coherent visual perception. Detailed knowledge of the anatomy of the afferent visual system, in combination with skilled examination, allows precise localization of neuropathological processes and guides effective diagnosis and management of neuro-ophthalmic disorders. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Removing Visual Bias in Filament Identification: A New Goodness-of-fit Measure

    NASA Astrophysics Data System (ADS)

    Green, C.-E.; Cunningham, M. R.; Dawson, J. R.; Jones, P. A.; Novak, G.; Fissel, L. M.

    2017-05-01

    Different combinations of input parameters to filament identification algorithms, such as disperse and filfinder, produce numerous different output skeletons. The skeletons are a one-pixel-wide representation of the filamentary structure in the original input image. However, these output skeletons may not necessarily be a good representation of that structure. Furthermore, a given skeleton may not be as good a representation as another. Previously, there has been no mathematical “goodness-of-fit” measure to compare output skeletons to the input image. Thus far, this has been assessed visually, introducing visual bias. We propose the application of the mean structural similarity index (MSSIM) as a mathematical goodness-of-fit measure. We describe the use of the MSSIM to find the output skeletons that are the most mathematically similar to the original input image (the optimum, or “best,” skeletons) for a given algorithm, and independently of the algorithm. This measure makes possible systematic parameter studies, aimed at finding the subset of input parameter values returning optimum skeletons. It can also be applied to the output of non-skeleton-based filament identification algorithms, such as the Hessian matrix method. The MSSIM removes the need to visually examine thousands of output skeletons, and eliminates the visual bias, subjectivity, and limited reproducibility inherent in that process, representing a major improvement upon existing techniques. Importantly, it also allows further automation in the post-processing of output skeletons, which is crucial in this era of “big data.”
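
    For reference, the MSSIM is the sliding-window mean of the standard SSIM of Wang et al. A minimal sketch, in which the window size and stabilizing constants are common defaults rather than necessarily the values used by the authors:

```python
import numpy as np

def mssim(img1, img2, win=7, L=1.0):
    """Mean structural similarity index: average the SSIM
    (luminance/contrast/structure comparison) over all win x win
    sliding windows of the two images. L is the data range."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    h, w = img1.shape
    vals = []
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            a = img1[i:i + win, j:j + win].ravel()
            b = img2[i:i + win, j:j + win].ravel()
            mu1, mu2 = a.mean(), b.mean()
            v1, v2 = a.var(), b.var()
            cov = ((a - mu1) * (b - mu2)).mean()
            vals.append(((2 * mu1 * mu2 + C1) * (2 * cov + C2)) /
                        ((mu1 ** 2 + mu2 ** 2 + C1) * (v1 + v2 + C2)))
    return float(np.mean(vals))
```

    Identical images score exactly 1; the authors use the score to rank output skeletons against the original input image.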

  5. On the cyclic nature of perception in vision versus audition

    PubMed Central

    VanRullen, Rufin; Zoefel, Benedikt; Ilhan, Barkin

    2014-01-01

    Does our perceptual awareness consist of a continuous stream, or a discrete sequence of perceptual cycles, possibly associated with the rhythmic structure of brain activity? This has been a long-standing question in neuroscience. We review recent psychophysical and electrophysiological studies indicating that part of our visual awareness proceeds in approximately 7–13 Hz cycles rather than continuously. On the other hand, experimental attempts at applying similar tools to demonstrate the discreteness of auditory awareness have been largely unsuccessful. We argue and demonstrate experimentally that visual and auditory perception are not equally affected by temporal subsampling of their respective input streams: video sequences remain intelligible at sampling rates of two to three frames per second, whereas audio inputs lose their fine temporal structure, and thus all significance, below 20–30 samples per second. This does not mean, however, that our auditory perception must proceed continuously. Instead, we propose that audition could still involve perceptual cycles, but the periodic sampling should happen only after the stage of auditory feature extraction. In addition, although visual perceptual cycles can follow one another at a spontaneous pace largely independent of the visual input, auditory cycles may need to sample the input stream more flexibly, by adapting to the temporal structure of the auditory inputs. PMID:24639585

  6. Adaptive learning in a compartmental model of visual cortex—how feedback enables stable category learning and refinement

    PubMed Central

    Layher, Georg; Schrodt, Fabian; Butz, Martin V.; Neumann, Heiko

    2014-01-01

    The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards form two separate mammalian categories, both of which are subcategories of the category Felidae. In recent decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of additional (sub-) category representations.
We demonstrate the temporal evolution of such learning and show how the proposed combination of an associative memory with a modulatory feedback integration successfully establishes category and subcategory representations. PMID:25538637
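
    The mismatch-triggered recruitment the authors describe resembles adaptive-resonance-style category learning. A deliberately simplified sketch (cosine match against stored prototypes and a scalar vigilance threshold, not the paper's compartmental network model):

```python
import numpy as np

def learn_categories(inputs, vigilance=0.8, lr=0.2):
    """ART-like sketch: assign each input to the best-matching
    prototype; if the match falls below the vigilance threshold,
    recruit a new category node (cf. the mismatch-triggered
    recruitment of representational resources described above)."""
    prototypes, labels = [], []
    for x in inputs:
        x = x / (np.linalg.norm(x) + 1e-12)
        if prototypes:
            sims = [float(p @ x) for p in prototypes]
            best = int(np.argmax(sims))
        if not prototypes or sims[best] < vigilance:
            prototypes.append(x.copy())          # new (sub)category
            labels.append(len(prototypes) - 1)
        else:
            prototypes[best] += lr * (x - prototypes[best])
            prototypes[best] /= np.linalg.norm(prototypes[best])
            labels.append(best)
    return labels, prototypes
```

    In ART-style models, raising the vigilance threshold yields finer partitions, loosely mirroring the category refinement discussed here.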

  7. Audio-visual speech cue combination.

    PubMed

    Arnold, Derek H; Tear, Morgan; Schindel, Ryan; Roseboom, Warrick

    2010-04-16

    Different sources of sensory information can interact, often shaping what we think we have seen or heard. This can enhance the precision of perceptual decisions relative to those made on the basis of a single source of information. From a computational perspective, there are multiple reasons why this might happen, and each predicts a different degree of enhanced precision. Relatively slight improvements can arise when perceptual decisions are made on the basis of multiple independent sensory estimates, as opposed to just one. These improvements can arise as a consequence of probability summation. Greater improvements can occur if two initially independent estimates are summated to form a single integrated code, especially if the summation is weighted in accordance with the variance associated with each independent estimate. This form of combination is often described as a Bayesian maximum likelihood estimate. Still greater improvements are possible if the two sources of information are encoded via a common physiological process. Here we show that the provision of simultaneous audio and visual speech cues can result in substantial sensitivity improvements, relative to single sensory modality based decisions. The magnitude of the improvements is greater than can be predicted on the basis of either a Bayesian maximum likelihood estimate or a probability summation. Our data suggest that primary estimates of speech content are determined by a physiological process that takes input from both visual and auditory processing, resulting in greater sensitivity than would be possible if initially independent audio and visual estimates were formed and then subsequently combined.
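
    The Bayesian maximum likelihood estimate referred to above weights each cue by its inverse variance. A minimal sketch of that textbook computation (not the authors' model fits):

```python
def combine_cues(est_a, var_a, est_v, var_v):
    """Inverse-variance (maximum-likelihood) combination of two
    independent cue estimates, e.g. auditory and visual."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    combined = w_a * est_a + (1.0 - w_a) * est_v
    combined_var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return combined, combined_var
```

    For two cues of equal variance the weights are 0.5 each and the combined variance halves; this is the benchmark against which the abstract's larger-than-predicted improvements are measured.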

  8. Machine Learning Classification of Heterogeneous Fields to Estimate Physical Responses

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Akhriev, A.; Alzate, C.; Zhuk, S.

    2017-12-01

    The promise of machine learning to enhance physics-based simulation is examined here using the transient pressure response to a pumping well in a heterogeneous aquifer. 10,000 random fields of log10 hydraulic conductivity (K) are created and conditioned on a single K measurement at the pumping well. Each K-field is used as input to a forward simulation of drawdown (pressure decline). The differential equations governing groundwater flow to the well serve as a non-linear transform of the input K-field to an output drawdown field. The results are stored and the data set is split into training and testing sets for classification. A Euclidean distance measure between any two fields is calculated and the resulting distances between all pairs of fields define a similarity matrix. Similarity matrices are calculated for both input K-fields and the resulting drawdown fields at the end of the simulation. The similarity matrices are then used as input to spectral clustering to determine groupings of similar input and output fields. Additionally, the similarity matrix is used as input to multi-dimensional scaling to visualize the clustering of fields in lower dimensional spaces. We examine the ability to cluster both input K-fields and output drawdown fields separately with the goal of identifying K-fields that create similar drawdowns and, conversely, given a set of simulated drawdown fields, identify meaningful clusters of input K-fields. Feature extraction based on statistical parametric mapping provides insight into what features of the fields drive the classification results. The final goal is to successfully classify input K-fields into the correct output class, and also, given an output drawdown field, be able to infer the correct class of input field that created it.
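
    The pairwise-distance and similarity-matrix construction described above can be sketched with synthetic stand-ins. Field count, grid size, and the Gaussian-kernel conversion are illustrative assumptions, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the conditioned log10-K fields: 50 random fields on a
# 20x20 grid, flattened to vectors (sizes are illustrative only).
fields = rng.normal(size=(50, 20 * 20))

# Pairwise Euclidean distances between all field pairs.
sq = np.sum(fields**2, axis=1)
d2 = sq[:, None] + sq[None, :] - 2.0 * fields @ fields.T
d = np.sqrt(np.maximum(d2, 0.0))

# Gaussian-kernel similarity matrix, a common choice of affinity for
# spectral clustering (the paper does not specify its kernel).
sigma = np.median(d)
S = np.exp(-((d / sigma) ** 2))

print(S.shape)  # (50, 50): symmetric, ones on the diagonal
```

The same similarity matrix could then be fed to a spectral-clustering or multi-dimensional-scaling routine to group or visualize the fields, as the abstract describes.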

  9. Training Enhances Both Locomotor and Cognitive Adaptability to a Novel Sensory Environment

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Brady, R. A.; Batson, C. D.; Ploutz-Snyder, R. J.; Cohen, H. S.

    2010-01-01

    During adaptation to novel gravitational environments, sensorimotor disturbances have the potential to disrupt the ability of astronauts to perform required mission tasks. The goal of this project is to develop a sensorimotor adaptability (SA) training program to facilitate rapid adaptation. We have developed a unique training system composed of a treadmill placed on a motion-base facing a virtual visual scene, which provides an unstable walking surface combined with incongruent visual flow designed to enhance sensorimotor adaptability. The goal of our present study was to determine if SA training improved both the locomotor and cognitive responses to a novel sensory environment and to quantify the extent to which training would be retained. Methods: Twenty subjects (10 training, 10 control) completed three 30-minute training sessions during which they walked on the treadmill while receiving discordant support surface and visual input. Control subjects walked on the treadmill but did not receive any support surface or visual alterations. To determine the efficacy of training, all subjects performed the Transfer Test upon completion of training. For this test, subjects were exposed to novel visual flow and support surface movement, not previously experienced during training. The Transfer Test was performed 20 minutes, 1 week, 1, 3 and 6 months after the final training session. Stride frequency, auditory reaction time, and heart rate data were collected as measures of postural stability, cognitive effort and anxiety, respectively. Results: Using mixed effects regression methods, we determined that subjects who received SA training showed smaller alterations in stride frequency, auditory reaction time and heart rate than controls. 
Conclusion: Subjects who received SA training improved performance across a number of modalities including enhanced locomotor function, increased multi-tasking capability and reduced anxiety during adaptation to novel discordant sensory information. Trained subjects maintained their level of performance over six months.

  10. On the effects of multimodal information integration in multitasking.

    PubMed

    Stock, Ann-Kathrin; Gohil, Krutika; Huster, René J; Beste, Christian

    2017-07-07

    There have recently been considerable advances in our understanding of the neuronal mechanisms underlying multitasking, but the role of multimodal integration for this faculty has remained rather unclear. We examined this issue by comparing different modality combinations in a multitasking (stop-change) paradigm. In-depth neurophysiological analyses of event-related potentials (ERPs) were conducted to complement the obtained behavioral data. Specifically, we applied signal decomposition using second order blind identification (SOBI) to the multi-subject ERP data and source localization. We found that both general multimodal information integration and modality-specific aspects (potentially related to task difficulty) modulate behavioral performance and associated neurophysiological correlates. Simultaneous multimodal input generally increased early attentional processing of visual stimuli (i.e. P1 and N1 amplitudes) as well as measures of cognitive effort and conflict (i.e. central P3 amplitudes). Yet, tactile-visual input caused larger impairments in multitasking than audio-visual input. General aspects of multimodal information integration modulated the activity in the premotor cortex (BA 6) as well as different visual association areas concerned with the integration of visual information with input from other modalities (BA 19, BA 21, BA 37). On top of this, differences in the specific combination of modalities also affected performance and measures of conflict/effort originating in prefrontal regions (BA 6).

  11. Young children's recall and reconstruction of audio and audiovisual narratives.

    PubMed

    Gibbons, J; Anderson, D R; Smith, R; Field, D E; Fischer, C

    1986-08-01

    It has been claimed that the visual component of audiovisual media dominates young children's cognitive processing. This experiment examines the effects of input modality while controlling the complexity of the visual and auditory content and while varying the comprehension task (recall vs. reconstruction). 4- and 7-year-olds were presented brief stories through either audio or audiovisual media. The audio version consisted of narrated character actions and character utterances. The narrated actions were matched to the utterances on the basis of length and propositional complexity. The audiovisual version depicted the actions visually by means of stop animation instead of by auditory narrative statements. The character utterances were the same in both versions. Audiovisual input produced superior performance on explicit information in the 4-year-olds and produced more inferences at both ages. Because performance on utterances was superior in the audiovisual condition as compared to the audio condition, there was no evidence that visual input inhibits processing of auditory information. Actions were more likely to be produced by the younger children than utterances, regardless of input medium, indicating that prior findings of visual dominance may have been due to the salience of narrative action. Reconstruction, as compared to recall, produced superior depiction of actions at both ages as well as more constrained relevant inferences and narrative conventions.

  12. Assessing Input Enhancement as Positive Factor and Its Impact on L2 Vocabulary Learning

    ERIC Educational Resources Information Center

    Motlagh, Seyyed Fariborz Pishdadi; Nasab, Mahdiyeh Seyed Beheshti

    2015-01-01

    The role of input enhancement in promoting learners' awareness in L2 contexts has prompted a tremendous amount of research. Taking all aspects of input enhancement into account, the study aimed to find out how different kinds of input enhancement, such as bolding, underlining, and capitalizing, affect L2 learners' vocabulary…

  13. Rotational wind indicator enhances control of rotated displays

    NASA Technical Reports Server (NTRS)

    Cunningham, H. A.; Pavel, Misha

    1991-01-01

    Rotation by 108 deg of the spatial mapping between a visual display and a manual input device produces large spatial errors in a discrete aiming task. These errors are not easily corrected by voluntary mental effort, but the central nervous system does adapt gradually to the new mapping. Bernotat (1970) showed that adding true hand position to a 90 deg rotated display improved performance of a compensatory tracking task, but tracking error rose again upon removal of the explicit cue. This suggests that the explicit error signal did not induce changes in the neural mapping, but rather allowed the operator to reduce tracking error using a higher mental strategy. In this report, we describe an explicit visual display enhancement applied to a 108 deg rotated discrete aiming task. A 'wind indicator' corresponding to the effect of the mapping rotation is displayed on the operator-controlled cursor. The human operator is instructed to oppose the virtual force represented by the indicator, as one would do if flying an airplane in a crosswind. This enhancement reduces spatial aiming error in the first 10 minutes of practice by an average of 70 percent when compared to a no-enhancement control condition. Moreover, it produces an adaptation aftereffect, which is evidence of learning by neural adaptation rather than by mental strategy. Finally, aiming error does not rise upon removal of the explicit cue.
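
    The effect of the rotated mapping, and why aiming against the indicated "wind" compensates for it, can be sketched as a plane rotation. This is an illustration only; the actual display and control dynamics were more involved:

```python
import math

def rotate(x, y, deg):
    """Apply a display rotation of `deg` degrees to a hand-movement vector."""
    r = math.radians(deg)
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r))

# Under a 108-deg rotated mapping, a rightward hand movement drives the
# cursor mostly upward and slightly leftward, producing large aiming error.
cx, cy = rotate(1.0, 0.0, 108)

# Opposing the rotation (aiming in the direction rotated by -108 deg,
# as the wind indicator suggests) cancels the mapping:
hx, hy = rotate(1.0, 0.0, -108)
cx2, cy2 = rotate(hx, hy, 108)  # recovers (1.0, 0.0) up to floating-point error
```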

  14. Lack of Multisensory Integration in Hemianopia: No Influence of Visual Stimuli on Aurally Guided Saccades to the Blind Hemifield

    PubMed Central

    Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan

    2015-01-01

    In patients with visual hemifield defects, residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952

  15. Control of articulated snake robot under dynamic active constraints.

    PubMed

    Kwok, Ka-Wai; Vitiello, Valentina; Yang, Guang-Zhong

    2010-01-01

    Flexible, ergonomically enhanced surgical robots have important applications to transluminal endoscopic surgery, for which path-following and dynamic shape conformance are essential. In this paper, kinematic control of a snake robot for motion stabilisation under dynamic active constraints is addressed. The main objective is to enable the robot to track the visual target accurately and steadily on deforming tissue whilst conforming to pre-defined anatomical constraints. The motion tracking can also be augmented with manual control. By taking into account the physical limits in terms of maximum frequency response of the system (manifested as a delay between the input of the manipulator and the movement of the end-effector), we show the importance of visual-motor synchronisation for performing accurate smooth pursuit movements. Detailed user experiments are performed to demonstrate the practical value of the proposed control mechanism.

  16. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    PubMed

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
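
    The spectral quantification of such frequency-tagged steady-state responses can be illustrated on a synthetic signal. Sampling rate, amplitudes, and noise level here are invented for the sketch and are not the study's recording parameters:

```python
import numpy as np

fs = 500.0                    # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)  # 10 s of signal
f1, f2 = 3.14, 3.63           # the two stimulation ("tagging") rates

# Toy stand-in for the recorded signal: both steady-state responses,
# one stronger than the other, plus broadband noise.
rng = np.random.default_rng(1)
x = 1.0 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)
x += 0.2 * rng.normal(size=t.size)

# Quantify each tagged response in the spectral domain.
spec = np.abs(np.fft.rfft(x)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
a1 = spec[np.argmin(np.abs(freqs - f1))]
a2 = spec[np.argmin(np.abs(freqs - f2))]
print(a1 > a2)  # the more strongly driven stimulus yields the larger peak
```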

  17. Optic flow improves adaptability of spatiotemporal characteristics during split-belt locomotor adaptation with tactile stimulation

    PubMed Central

    Anthony Eikema, Diderik Jan A.; Chien, Jung Hung; Stergiou, Nicholas; Myers, Sara A.; Scott-Pandorf, Melissa M.; Bloomberg, Jacob J.; Mukherjee, Mukul

    2015-01-01

    Human locomotor adaptation requires feedback and feed-forward control processes to maintain an appropriate walking pattern. Adaptation may require the use of visual and proprioceptive input to decode altered movement dynamics and generate an appropriate response. After a person transfers from an extreme sensory environment and back, as astronauts do when they return from spaceflight, the prolonged period required for re-adaptation can pose a significant burden. In our previous paper, we showed that plantar tactile vibration during a split-belt adaptation task did not interfere with treadmill adaptation; however, larger overground transfer effects with a slower decay resulted. Such effects, in the absence of visual feedback (of motion) and perturbation of tactile feedback, are believed to be due to a higher proprioceptive gain because, in the absence of relevant external dynamic cues such as optic flow, reliance on body-based cues is enhanced during gait tasks through multisensory integration. In this study we therefore investigated the effect of optic flow on tactile-stimulated split-belt adaptation as a paradigm to facilitate the sensorimotor adaptation process. Twenty healthy young adults, separated into two matched groups, participated in the study. All participants performed an overground walking trial followed by a split-belt treadmill adaptation protocol. The tactile group (TC) received vibratory plantar tactile stimulation only, whereas the virtual reality and tactile group (VRT) received an additional concurrent visual stimulation: a moving virtual corridor, inducing perceived self-motion. A post-treadmill overground trial was performed to determine adaptation transfer. Interlimb coordination of spatiotemporal and kinetic variables was quantified using symmetry indices, and analyzed using repeated-measures ANOVA. Marked changes of step length characteristics were observed in both groups during split-belt adaptation. 
Stance and swing time symmetry were similar in the two groups, suggesting that temporal parameters are not modified by optic flow. However, whereas the TC group displayed significant stance time asymmetries during the post-treadmill session, such aftereffects were absent in the VRT group. The results indicated that the enhanced transfer resulting from exposure to plantar cutaneous vibration during adaptation was alleviated by optic flow information. The presence of visual self-motion information may have reduced proprioceptive gain during learning. Thus, during overground walking, the learned proprioceptive split-belt pattern is more rapidly overridden by visual input due to its increased relative gain. The results suggest that when visual stimulation is provided during adaptive training, the system acquires the novel movement dynamics while maintaining the ability to flexibly adapt to different environments. PMID:26525712
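
    The interlimb symmetry indices mentioned above are typically ratios of right-left differences to their mean. The abstract does not specify which variant was used; the following is one common formulation, shown with invented stance times:

```python
def symmetry_index(right, left):
    """Percentage symmetry index for a spatiotemporal gait variable;
    0 means perfect interlimb symmetry (one common formulation)."""
    return 100.0 * (right - left) / (0.5 * (right + left))

# Example: stance times of 0.62 s (right) and 0.58 s (left)
si = symmetry_index(0.62, 0.58)
print(round(si, 2))  # 6.67 (per cent asymmetry)
```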

  18. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity

    PubMed Central

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2014-01-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes. PMID:22684587

  19. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    PubMed

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
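
    The core contrast in this work, noisy leaky integrate-and-fire (NLIF) spike trains versus rate-matched Poisson spike trains, can be sketched as follows. All parameter values are illustrative, not the paper's; the point is that the LIF train is more regular (lower coefficient of variation of inter-spike intervals) than a Poisson train of the same mean rate:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 1e-4, 5.0          # time step (s) and duration (s)
n = int(T / dt)

def nlif_spikes(drive, tau=0.02, v_th=1.0, noise=0.5):
    """Noisy leaky integrate-and-fire: integrate drive plus noise,
    spike and reset at threshold (illustrative parameters)."""
    v, times = 0.0, []
    for i in range(n):
        v += dt * (-v / tau + drive) + noise * np.sqrt(dt) * rng.normal()
        if v >= v_th:
            times.append(i * dt)
            v = 0.0
    return np.array(times)

def poisson_spikes(rate):
    """Homogeneous Poisson spike train at a given mean rate (Hz)."""
    return np.nonzero(rng.random(n) < rate * dt)[0] * dt

lif = nlif_spikes(drive=60.0)
poi = poisson_spikes(rate=len(lif) / T)          # rate-matched Poisson train
isi_lif, isi_poi = np.diff(lif), np.diff(poi)
cv = lambda isi: isi.std() / isi.mean()          # coefficient of variation
print(cv(isi_lif) < cv(isi_poi))  # LIF is more regular (CV below ~1)
```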

  20. Localized direction selective responses in the dendrites of visual interneurons of the fly

    PubMed Central

    2010-01-01

    Background The various tasks of visual systems, including course control, collision avoidance and the detection of small objects, require at the neuronal level the dendritic integration and subsequent processing of many spatially distributed visual motion inputs. While much is known about the pooled output in these systems, as in the medial superior temporal cortex of monkeys or in the lobula plate of the insect visual system, the motion tuning of the elements that provide the input has yet received little attention. In order to visualize the motion tuning of these inputs we examined the dendritic activation patterns of neurons that are selective for the characteristic patterns of wide-field motion, the lobula-plate tangential cells (LPTCs) of the blowfly. These neurons are known to sample direction-selective motion information from large parts of the visual field and combine these signals into axonal and dendro-dendritic outputs. Results Fluorescence imaging of intracellular calcium concentration allowed us to take a direct look at the local dendritic activity and the resulting local preferred directions in LPTC dendrites during activation by wide-field motion in different directions. These 'calcium response fields' resembled a retinotopic dendritic map of local preferred directions in the receptive field, the layout of which is a distinguishing feature of different LPTCs. Conclusions Our study reveals how neurons acquire selectivity for distinct visual motion patterns by dendritic integration of the local inputs with different preferred directions. With their spatial layout of directional responses, the dendrites of the LPTCs we investigated thus served as matched filters for wide-field motion patterns. PMID:20384983

  1. Image enhancement by non-linear extrapolation in frequency space

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Greenspan, Hayit K. (Inventor)

    1998-01-01

    An input image is enhanced to include spatial frequency components having frequencies higher than those in the input image. To this end, an edge map is generated from the input image using a high band pass filtering technique. An enhanced map is subsequently generated from the edge map, having spatial frequencies exceeding the initial maximum spatial frequency of the input image. The enhanced map is generated by applying a non-linear operator to the edge map in a manner which preserves the phase transitions of the edges of the input image. The enhanced map is added to the input image to achieve a resulting image having spatial frequencies greater than those in the input image. Simplicity of computations and ease of implementation allow for image sharpening after enlargement and for real-time applications such as videophones, advanced definition television, zooming, and restoration of old motion pictures.
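
    The pipeline described (high-pass edge map, phase-preserving non-linear operator, add back to the input) can be sketched with NumPy. The kernel and the clipping non-linearity are assumed choices for illustration; the patent does not prescribe these exact operators:

```python
import numpy as np

def high_pass(img):
    """Simple Laplacian-style high-band-pass filter via neighbor
    differences (an assumed kernel, for illustration)."""
    return img - 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                         + np.roll(img, 1, 1) + np.roll(img, -1, 1))

def enhance(img, alpha=0.5, clip=0.1):
    """Edge map -> clipped edge map -> add back to the input.
    Clipping is non-linear (it generates frequencies above the input's
    band) yet preserves the sign/phase of each edge transition."""
    edges = high_pass(img)
    enhanced_map = np.clip(edges, -clip, clip)
    return img + alpha * enhanced_map

rng = np.random.default_rng(3)
img = rng.random((32, 32))
out = enhance(img)
print(out.shape)  # (32, 32): same size, sharpened content
```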

  2. FliMax, a novel stimulus device for panoramic and highspeed presentation of behaviourally generated optic flow.

    PubMed

    Lindemann, J P; Kern, R; Michaelis, C; Meyer, P; van Hateren, J H; Egelhaaf, M

    2003-03-01

    A high-speed panoramic visual stimulation device is introduced that is suitable for analysing visual interneurons during stimulation with rapid image displacements, as experienced by fast-moving animals. The responses of an identified motion-sensitive neuron in the visual system of the blowfly to behaviourally generated image sequences are very complex and hard to predict from the established input circuitry of the neuron. This finding suggests that the computational significance of visual interneurons can only be assessed if they are characterised not only by the conventional stimuli often used for systems analysis, but also by behaviourally relevant input.

  3. Standing postural reaction to visual and proprioceptive stimulation in chronic acquired demyelinating polyneuropathy.

    PubMed

    Provost, Clement P; Tasseel-Ponche, Sophie; Lozeron, Pierre; Piccinini, Giulia; Quintaine, Victorine; Arnulf, Bertrand; Kubis, Nathalie; Yelnik, Alain P

    2018-02-28

    To investigate the weight of visual and proprioceptive inputs, measured indirectly in standing position control, in patients with chronic acquired demyelinating polyneuropathy (CADP). Prospective case study. Twenty-five patients with CADP and 25 healthy controls. Posture was recorded on a double force platform. Stimulations were optokinetic (60°/s) for visual input and vibration (50 Hz) for proprioceptive input. Visual stimulation involved 4 tests (upward, downward, rightward and leftward) and proprioceptive stimulation 2 tests (triceps surae and tibialis anterior). A composite score, previously published and slightly modified, was used for the recorded postural signals from the different stimulations. Despite their sensitivity deficits, patients with CADP were more sensitive to proprioceptive stimuli than were healthy controls (mean composite score 13.9 (standard deviation (SD) 4.8) vs 18.4 (SD 4.8); p = 0.002). As expected, they were also more sensitive to visual stimuli (mean composite score 10.5 (SD 8.7) vs 22.9 (SD 7.5); p < 0.0001). These results encourage balance rehabilitation of patients with CADP, aimed at promoting the use of proprioceptive information, thereby reducing too-early development of visual compensation while proprioception is still available.

  4. Sparse coding can predict primary visual cortex receptive field changes induced by abnormal visual input.

    PubMed

    Hunt, Jonathan J; Dayan, Peter; Goodhill, Geoffrey J

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields.
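
    The sparse-representation objective underlying such models can be illustrated with a generic solver. The following ISTA (iterative soft-thresholding) sketch infers a sparse code for one patch under an overcomplete random dictionary; it is a minimal stand-in, not one of the three specific models compared in the paper, and all sizes are toy values:

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_basis = 64, 128               # toy patch size and dictionary size
D = rng.normal(size=(n_pix, n_basis))
D /= np.linalg.norm(D, axis=0)         # unit-norm basis functions
x = rng.normal(size=n_pix)             # one "image patch"

# ISTA for the sparse-coding objective:
#   min_a  0.5 * ||x - D a||^2 + lam * ||a||_1
lam = 0.1
step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1/Lipschitz constant of the gradient
a = np.zeros(n_basis)
for _ in range(200):
    grad = D.T @ (D @ a - x)
    a = a - step * grad
    a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft threshold

print(np.count_nonzero(a), "of", n_basis, "coefficients are nonzero")
```

Training such a model alternates this sparse inference with updates of the dictionary `D` itself, which is where receptive-field-like structure emerges.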

  5. Sparse Coding Can Predict Primary Visual Cortex Receptive Field Changes Induced by Abnormal Visual Input

    PubMed Central

    Hunt, Jonathan J.; Dayan, Peter; Goodhill, Geoffrey J.

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields. PMID:23675290

  6. Enhanced attention amplifies face adaptation.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Evangelista, Emma; Ewing, Louise; Peters, Marianne; Taylor, Libby

    2011-08-15

    Perceptual adaptation not only produces striking perceptual aftereffects, but also enhances coding efficiency and discrimination by calibrating coding mechanisms to prevailing inputs. Attention to simple stimuli increases adaptation, potentially enhancing its functional benefits. Here we show that attention also increases adaptation to faces. In Experiment 1, face identity aftereffects increased when attention to adapting faces was increased using a change detection task. In Experiment 2, figural (distortion) face aftereffects increased when attention was increased using a snap game (detecting immediate repeats) during adaptation. Both were large effects. Contributions of low-level adaptation were reduced using free viewing (both experiments) and a size change between adapt and test faces (Experiment 2). We suggest that attention may enhance adaptation throughout the entire cortical visual pathway, with functional benefits well beyond the immediate advantages of selective processing of potentially important stimuli. These results highlight the potential to facilitate adaptive updating of face-coding mechanisms by strategic deployment of attentional resources. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study

    PubMed Central

    Ursino, Mauro; Crisafulli, Andrea; di Pellegrino, Giuseppe; Magosso, Elisa; Cuppini, Cristiano

    2017-01-01

    The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have a higher spatial accuracy at the central azimuthal coordinate and a lower accuracy at the periphery. Moreover, their prior probability is higher at the center and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. 
In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross-modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in the case of spatially disparate stimuli. Moreover, the ventriloquism effect decreases with eccentricity. PMID:29046631
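
    The reliability-weighted behavior described in this record (lower SD for congruent audio-visual stimuli, and a shift toward the more reliable cue for disparate ones) follows directly from the standard maximum-likelihood cue-combination formula. The snippet below is a minimal illustrative sketch of that formula only, not code from the model itself; the positions and SDs are invented values.

    ```python
    def fuse(x_v, sigma_v, x_a, sigma_a):
        """Maximum-likelihood fusion of a visual and an auditory position
        estimate, each weighted by its inverse variance (reliability)."""
        w_v = 1.0 / sigma_v**2
        w_a = 1.0 / sigma_a**2
        x_fused = (w_v * x_v + w_a * x_a) / (w_v + w_a)
        sigma_fused = (1.0 / (w_v + w_a)) ** 0.5
        return x_fused, sigma_fused

    # Hypothetical stimuli: visual at 0 deg (SD 2 deg), auditory at 10 deg (SD 6 deg).
    x, s = fuse(0.0, 2.0, 10.0, 6.0)
    # The fused estimate is pulled toward the more reliable visual cue
    # (a ventriloquism-like shift), and its SD is below both unimodal SDs.
    print(x, s)
    ```

    With these numbers the fused estimate lands near the visual location, and the fused SD is smaller than either unimodal SD, mirroring the congruent-stimulus result reported above.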

  8. R-based Tool for a Pairwise Structure-activity Relationship Analysis.

    PubMed

    Klimenko, Kyrylo

    2018-04-01

    The structure-activity relationship (SAR) analysis is a complex process that can be enhanced by computational techniques. This article describes a simple tool for SAR analysis that has a graphical user interface and a flexible approach to the input of molecular data. The application can calculate molecular similarity, represented by the Tanimoto index and Euclidean distance, as well as determine activity cliffs by means of the Structure-Activity Landscape Index. The calculation is performed in a pairwise manner, either for the reference compound against all other compounds or for all possible pairs in the data set. The results of the SAR analysis are visualized using two types of plot. The application's capability is demonstrated by the analysis of a set of COX2 inhibitors with respect to Isoxicam. The tool is available online and includes a manual and example input files. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
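
    As a rough illustration of the two quantities named in this record, the sketch below implements the Tanimoto index on binary fingerprints and the Structure-Activity Landscape Index (SALI) for a compound pair. The fingerprints and activity values are invented, and this is not the application's actual code.

    ```python
    def tanimoto(fp_a, fp_b):
        """Tanimoto index: |A ∩ B| / |A ∪ B| over the 'on' bits
        of two binary molecular fingerprints."""
        a, b = set(fp_a), set(fp_b)
        union = a | b
        return len(a & b) / len(union) if union else 0.0

    def sali(activity_a, activity_b, similarity):
        """Structure-Activity Landscape Index: large values flag activity
        cliffs (structurally similar pairs with very different activity)."""
        if similarity >= 1.0:
            return float("inf")
        return abs(activity_a - activity_b) / (1.0 - similarity)

    # Toy fingerprints given as indices of set bits (hypothetical values).
    fp_ref, fp_other = {1, 3, 5, 8}, {1, 3, 5, 9}
    sim = tanimoto(fp_ref, fp_other)   # 3 shared bits / 5 total = 0.6
    cliff = sali(6.2, 7.9, sim)        # |6.2 - 7.9| / (1 - 0.6)
    print(sim, cliff)
    ```

    In practice such fingerprints would come from a cheminformatics library rather than hand-written bit sets, but the pairwise index calculations reduce to these formulas.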

  9. Slow changing postural cues cancel visual field dependence on self-tilt detection.

    PubMed

    Scotto Di Cesare, C; Macaluso, T; Mestre, D R; Bringoux, L

    2015-01-01

    Interindividual differences influence the multisensory integration process involved in spatial perception. Here, we assessed the effect of visual field dependence on self-tilt detection relative to upright, as a function of static vs. slow-changing visual or postural cues. To that aim, we manipulated slow rotations (i.e., 0.05°·s⁻¹) of the body and/or the visual scene in pitch. Participants had to indicate whether they felt tilted forward at successive angles. Results show that thresholds for self-tilt detection differed substantially between visual-field-dependent and visual-field-independent subjects when only the visual scene was rotated. This difference was no longer present when the body was actually rotated, whatever the visual scene condition (i.e., absent, static, or rotated relative to the observer). These results suggest that the cancellation of visual field dependence by dynamic postural cues may rely on a multisensory reweighting process, in which slow-changing vestibular/somatosensory inputs may prevail over visual inputs. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Concurrent audio-visual feedback for supporting drivers at intersections: A study using two linked driving simulators.

    PubMed

    Houtenbos, M; de Winter, J C F; Hale, A R; Wieringa, P A; Hagenzieker, M P

    2017-04-01

    A large portion of road traffic crashes occur at intersections because drivers lack necessary visual information. This research examined the effects of an audio-visual display that provides real-time sonification and visualization of the speed and direction of another car approaching the crossroads on an intersecting road. The location of red blinking lights (left vs. right on the speedometer) and the lateral input direction of beeps (left vs. right ear in headphones) corresponded to the direction from which the other car approached, and the blink and beep rates were a function of the approaching car's speed. Two driving simulators were linked so that the participant and the experimenter drove in the same virtual world. Participants (N = 25) completed four sessions (two with the audio-visual display on, two with the audio-visual display off), each session consisting of 22 intersections at which the experimenter approached from the left or right and either maintained speed or slowed down. Compared to driving with the display off, the audio-visual display resulted in enhanced traffic efficiency (i.e., greater mean speed, less coasting) while not compromising safety (i.e., the time gap between the two vehicles was equivalent). A post-experiment questionnaire showed that the beeps were regarded as more useful than the lights. It is argued that the audio-visual display is a promising means of supporting drivers until fully automated driving is technically feasible. Copyright © 2016. Published by Elsevier Ltd.

  11. The role of visual deprivation and experience on the performance of sensory substitution devices.

    PubMed

    Stronks, H Christiaan; Nau, Amy C; Ibbotson, Michael R; Barnes, Nick

    2015-10-22

    It is commonly accepted that the blind can partially compensate for their loss of vision by developing enhanced abilities with their remaining senses. This visual compensation may be related to the fact that blind people rely on their other senses in everyday life. Many studies have indeed shown that experience plays an important role in visual compensation. Numerous neuroimaging studies have shown that the visual cortices of the blind are recruited by other functional brain areas and can become responsive to tactile or auditory input instead. These cross-modal plastic changes are more pronounced in the early blind compared to late blind individuals. The functional consequences of cross-modal plasticity on visual compensation in the blind are debated, as are the influences of various etiologies of vision loss (i.e., blindness acquired early or late in life). Distinguishing between the influences of experience and visual deprivation on compensation is especially relevant for rehabilitation of the blind with sensory substitution devices. The BrainPort artificial vision device and The vOICe are assistive devices for the blind that redirect visual information to another intact sensory system. Establishing how experience and different etiologies of vision loss affect the performance of these devices may help to improve existing rehabilitation strategies, formulate effective selection criteria and develop prognostic measures. In this review we will discuss studies that investigated the influence of training and visual deprivation on the performance of various sensory substitution approaches. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Deletion of Ten-m3 Induces the Formation of Eye Dominance Domains in Mouse Visual Cortex

    PubMed Central

    Merlin, Sam; Horng, Sam; Marotte, Lauren R.; Sur, Mriganka; Sawatari, Atomu

    2013-01-01

    The visual system is characterized by precise retinotopic mapping of each eye, together with exquisitely matched binocular projections. In many species, the inputs that represent the eyes are segregated into ocular dominance columns in primary visual cortex (V1), whereas in rodents, this does not occur. Ten-m3, a member of the Ten-m/Odz/Teneurin family, regulates axonal guidance in the retinogeniculate pathway. Significantly, ipsilateral projections are expanded in the dorsal lateral geniculate nucleus and are not aligned with contralateral projections in Ten-m3 knockout (KO) mice. Here, we demonstrate the impact of altered retinogeniculate mapping on the organization and function of V1. Transneuronal tracing and c-fos immunohistochemistry demonstrate that the subcortical expansion of ipsilateral input is conveyed to V1 in Ten-m3 KOs: Ipsilateral inputs are widely distributed across V1 and are interdigitated with contralateral inputs into eye dominance domains. Segregation is confirmed by optical imaging of intrinsic signals. Single-unit recording shows ipsilateral, and contralateral inputs are mismatched at the level of single V1 neurons, and binocular stimulation leads to functional suppression of these cells. These findings indicate that the medial expansion of the binocular zone together with an interocular mismatch is sufficient to induce novel structural features, such as eye dominance domains in rodent visual cortex. PMID:22499796

  13. Audiovisual synchrony enhances BOLD responses in a brain network including multisensory STS while also enhancing target-detection performance for both modalities

    PubMed Central

    Marchant, Jennifer L; Ruff, Christian C; Driver, Jon

    2012-01-01

    The brain seeks to combine related inputs from different senses (e.g., hearing and vision), via multisensory integration. Temporal information can indicate whether stimuli in different senses are related or not. A recent human fMRI study (Noesselt et al. [2007]: J Neurosci 27:11431–11441) used auditory and visual trains of beeps and flashes with erratic timing, manipulating whether auditory and visual trains were synchronous or unrelated in temporal pattern. A region of superior temporal sulcus (STS) showed higher BOLD signal for the synchronous condition. But this could not be related to performance, and it remained unclear if the erratic, unpredictable nature of the stimulus trains was important. Here we compared synchronous audiovisual trains to asynchronous trains, while using a behavioral task requiring detection of higher-intensity target events in either modality. We further varied whether the stimulus trains had predictable temporal pattern or not. Synchrony (versus lag) between auditory and visual trains enhanced behavioral sensitivity (d') to intensity targets in either modality, regardless of predictable versus unpredictable patterning. The analogous contrast in fMRI revealed BOLD increases in several brain areas, including the left STS region reported by Noesselt et al. [2007: J Neurosci 27:11431–11441]. The synchrony effect on BOLD here correlated with the subject-by-subject impact on performance. Predictability of temporal pattern did not affect target detection performance or STS activity, but did lead to an interaction with audiovisual synchrony for BOLD in inferior parietal cortex. PMID:21953980

  14. Degraded attentional modulation of cortical neural populations in strabismic amblyopia

    PubMed Central

    Hou, Chuan; Kim, Yee-Joon; Lai, Xin Jie; Verghese, Preeti

    2016-01-01

    Behavioral studies have reported reduced spatial attention in amblyopia, a developmental disorder of spatial vision. However, the neural populations in the visual cortex linked with these behavioral spatial attention deficits have not been identified. Here, we use functional MRI–informed electroencephalography source imaging to measure the effect of attention on neural population activity in the visual cortex of human adult strabismic amblyopes who were stereoblind. We show that compared with controls, the modulatory effects of selective visual attention on the input from the amblyopic eye are substantially reduced in the primary visual cortex (V1) as well as in extrastriate visual areas hV4 and hMT+. Degraded attentional modulation is also found in the normal-acuity fellow eye in areas hV4 and hMT+ but not in V1. These results provide electrophysiological evidence that abnormal binocular input during a developmental critical period may impact cortical connections between the visual cortex and higher level cortices beyond the known amblyopic losses in V1 and V2, suggesting that a deficit of attentional modulation in the visual cortex is an important component of the functional impairment in amblyopia. Furthermore, we find that degraded attentional modulation in V1 is correlated with the magnitude of interocular suppression and the depth of amblyopia. These results support the view that the visual suppression often seen in strabismic amblyopia might be a form of attentional neglect of the visual input to the amblyopic eye. PMID:26885628

  15. Degraded attentional modulation of cortical neural populations in strabismic amblyopia.

    PubMed

    Hou, Chuan; Kim, Yee-Joon; Lai, Xin Jie; Verghese, Preeti

    2016-01-01

    Behavioral studies have reported reduced spatial attention in amblyopia, a developmental disorder of spatial vision. However, the neural populations in the visual cortex linked with these behavioral spatial attention deficits have not been identified. Here, we use functional MRI-informed electroencephalography source imaging to measure the effect of attention on neural population activity in the visual cortex of human adult strabismic amblyopes who were stereoblind. We show that compared with controls, the modulatory effects of selective visual attention on the input from the amblyopic eye are substantially reduced in the primary visual cortex (V1) as well as in extrastriate visual areas hV4 and hMT+. Degraded attentional modulation is also found in the normal-acuity fellow eye in areas hV4 and hMT+ but not in V1. These results provide electrophysiological evidence that abnormal binocular input during a developmental critical period may impact cortical connections between the visual cortex and higher level cortices beyond the known amblyopic losses in V1 and V2, suggesting that a deficit of attentional modulation in the visual cortex is an important component of the functional impairment in amblyopia. Furthermore, we find that degraded attentional modulation in V1 is correlated with the magnitude of interocular suppression and the depth of amblyopia. These results support the view that the visual suppression often seen in strabismic amblyopia might be a form of attentional neglect of the visual input to the amblyopic eye.

  16. Top-down influence on the visual cortex of the blind during sensory substitution

    PubMed Central

    Murphy, Matthew C.; Nau, Amy C.; Fisher, Christopher; Kim, Seong-Gi; Schuman, Joel S.; Chan, Kevin C.

    2017-01-01

    Visual sensory substitution devices provide a non-surgical and flexible approach to vision rehabilitation in the blind. These devices convert images taken by a camera into cross-modal sensory signals that are presented as a surrogate for direct visual input. While previous work has demonstrated that the visual cortex of blind subjects is recruited during sensory substitution, the cognitive basis of this activation remains incompletely understood. To test the hypothesis that top-down input provides a significant contribution to this activation, we performed functional MRI scanning in 11 blind (7 acquired and 4 congenital) and 11 sighted subjects under two conditions: passive listening of image-encoded soundscapes before sensory substitution training and active interpretation of the same auditory sensory substitution signals after a 10-minute training session. We found that the modulation of visual cortex activity due to active interpretation was significantly stronger in the blind over sighted subjects. In addition, congenitally blind subjects showed stronger task-induced modulation in the visual cortex than acquired blind subjects. In a parallel experiment, we scanned 18 blind (11 acquired and 7 congenital) and 18 sighted subjects at rest to investigate alterations in functional connectivity due to visual deprivation. The results demonstrated that visual cortex connectivity of the blind shifted away from sensory networks and toward known areas of top-down input. Taken together, our data support the model of the brain, including the visual system, as a highly flexible task-based and not sensory-based machine. PMID:26584776

  17. Gestural Communication With Accelerometer-Based Input Devices and Tactile Displays

    DTIC Science & Technology

    2008-12-01

    and natural terrain obstructions, or concealment often impede visual communication attempts. To overcome some of these issues, “daisy-chaining” or...the intended recipients. Moreover, visual communication demands a focus on the visual modality possibly distracting a receiving soldier’s visual

  18. The effect of multimodal and enriched feedback on SMR-BCI performance.

    PubMed

    Sollfrank, T; Ramsay, A; Perdikis, S; Williamson, J; Murray-Smith, R; Leeb, R; Millán, J D R; Kübler, A

    2016-01-01

    This study investigated the effect of multimodal (visual and auditory) continuous feedback with information about the uncertainty of the input signal on motor imagery-based BCI performance. A liquid floating through a visualization of a funnel (funnel feedback) provided enriched visual or enriched multimodal feedback. In a between-subject design, 30 healthy SMR-BCI-naive participants were provided with either conventional bar feedback (CB), visual funnel feedback (UF), or multimodal (visual and auditory) funnel feedback (MF). Subjects were required to imagine left and right hand movement and were trained to control the SMR-based BCI for five sessions on separate days. Feedback accuracy varied largely between participants. The MF feedback led to significantly better performance in session 1 compared with the CB feedback and significantly enhanced motivation and minimized frustration in BCI use across the five training sessions. The present study demonstrates that the BCI funnel feedback allows participants to modulate sensorimotor EEG rhythms. Participants were able to control the BCI with the funnel feedback with better performance during the initial session and less frustration compared with the CB feedback. The multimodal funnel feedback provides an alternative to the conventional cursor-bar feedback for training subjects to modulate their sensorimotor rhythms. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  19. A quantitative comparison of the hemispheric, areal, and laminar origins of sensory and motor cortical projections to the superior colliculus of the cat.

    PubMed

    Butler, Blake E; Chabot, Nicole; Lomber, Stephen G

    2016-09-01

    The superior colliculus (SC) is a midbrain structure central to orienting behaviors. The organization of descending projections from sensory cortices to the SC has garnered much attention; however, rarely have projections from multiple modalities been quantified and contrasted, allowing for meaningful conclusions within a single species. Here, we examine corticotectal projections from visual, auditory, somatosensory, motor, and limbic cortices via retrograde pathway tracers injected throughout the superficial and deep layers of the cat SC. As anticipated, the majority of cortical inputs to the SC originate in the visual cortex. In fact, each field implicated in visual orienting behavior makes a substantial projection. Conversely, only one area of the auditory orienting system, the auditory field of the anterior ectosylvian sulcus (fAES), and no area involved in somatosensory orienting, shows significant corticotectal inputs. Although small relative to visual inputs, the projection from the fAES is of particular interest, as it represents the only bilateral cortical input to the SC. This detailed, quantitative study allows for comparison across modalities in an animal that serves as a useful model for both auditory and visual perception. Moreover, the differences in patterns of corticotectal projections between modalities inform the ways in which orienting systems are modulated by cortical feedback. J. Comp. Neurol. 524:2623-2642, 2016. © 2016 Wiley Periodicals, Inc.

  20. Direction of Magnetoencephalography Sources Associated with Feedback and Feedforward Contributions in a Visual Object Recognition Task

    PubMed Central

    Ahlfors, Seppo P.; Jones, Stephanie R.; Ahveninen, Jyrki; Hämäläinen, Matti S.; Belliveau, John W.; Bar, Moshe

    2014-01-01

    Identifying inter-area communication in terms of the hierarchical organization of functional brain areas is of considerable interest in human neuroimaging. Previous studies have suggested that the direction of magneto- and electroencephalography (MEG, EEG) source currents depends on the layer-specific input patterns into a cortical area. We examined the direction in MEG source currents in a visual object recognition experiment in which there were specific expectations of activation in the fusiform region being driven by either feedforward or feedback inputs. The source for the early non-specific visual evoked response, presumably corresponding to feedforward driven activity, pointed outward, i.e., away from the white matter. In contrast, the source for the later, object-recognition related signals, expected to be driven by feedback inputs, pointed inward, toward the white matter. Associating specific features of the MEG/EEG source waveforms to feedforward and feedback inputs could provide unique information about the activation patterns within hierarchically organized cortical areas. PMID:25445356

  1. Perceptual Learning via Modification of Cortical Top-Down Signals

    PubMed Central

    Schäfer, Roland; Vasilaki, Eleni; Senn, Walter

    2007-01-01

    The primary visual cortex (V1) is pre-wired to facilitate the extraction of behaviorally important visual features. Collinear edge detectors in V1, for instance, mutually enhance each other to improve the perception of lines against a noisy background. The same pre-wiring that facilitates line extraction, however, is detrimental when subjects have to discriminate the brightness of different line segments. How is it possible to improve in one task by unsupervised practicing, without getting worse in the other task? The classical view of perceptual learning is that practicing modulates the feedforward input stream through synaptic modifications onto or within V1. However, any rewiring of V1 would deteriorate other perceptual abilities different from the trained one. We propose a general neuronal model showing that perceptual learning can modulate top-down input to V1 in a task-specific way while feedforward and lateral pathways remain intact. Consistent with biological data, the model explains how context-dependent brightness discrimination is improved by a top-down recruitment of recurrent inhibition and a top-down induced increase of the neuronal gain within V1. Both the top-down modulation of inhibition and of neuronal gain are suggested to be universal features of cortical microcircuits which enable perceptual learning. PMID:17715996

  2. Neurochemical changes in the pericalcarine cortex in congenital blindness attributable to bilateral anophthalmia

    PubMed Central

    Coullon, Gaelle S. L.; Emir, Uzay E.; Fine, Ione; Watkins, Kate E.

    2015-01-01

    Congenital blindness leads to large-scale functional and structural reorganization in the occipital cortex, but relatively little is known about the neurochemical changes underlying this cross-modal plasticity. To investigate the effect of complete and early visual deafferentation on the concentration of metabolites in the pericalcarine cortex, 1H magnetic resonance spectroscopy was performed in 14 sighted subjects and 5 subjects with bilateral anophthalmia, a condition in which both eyes fail to develop. In the pericalcarine cortex, where primary visual cortex is normally located, the proportion of gray matter was significantly greater, and levels of choline, glutamate, glutamine, myo-inositol, and total creatine were elevated in anophthalmic relative to sighted subjects. Anophthalmia had no effect on the structure or neurochemistry of a sensorimotor cortex control region. More gray matter, combined with high levels of choline and myo-inositol, resembles the profile of the cortex at birth and suggests that the lack of visual input from the eyes might have delayed or arrested the maturation of this cortical region. High levels of choline and glutamate/glutamine are consistent with enhanced excitatory circuits in the anophthalmic occipital cortex, which could reflect a shift toward enhanced plasticity or sensitivity that could in turn mediate or unmask cross-modal responses. Finally, it is possible that the change in function of the occipital cortex results in biochemical profiles that resemble those of auditory, language, or somatosensory cortex. PMID:26180125

  3. Synaptic Mechanisms Generating Orientation Selectivity in the ON Pathway of the Rabbit Retina

    PubMed Central

    Venkataramani, Sowmya

    2016-01-01

    Neurons that signal the orientation of edges within the visual field have been widely studied in primary visual cortex. Much less is known about the mechanisms of orientation selectivity that arise earlier in the visual stream. Here we examine the synaptic and morphological properties of a subtype of orientation-selective ganglion cell in the rabbit retina. The receptive field has an excitatory ON center, flanked by excitatory OFF regions, a structure similar to simple cell receptive fields in primary visual cortex. Examination of the light-evoked postsynaptic currents in these ON-type orientation-selective ganglion cells (ON-OSGCs) reveals that synaptic input is mediated almost exclusively through the ON pathway. Orientation selectivity is generated by larger excitation for preferred relative to orthogonal stimuli, and conversely larger inhibition for orthogonal relative to preferred stimuli. Excitatory orientation selectivity arises in part from the morphology of the dendritic arbors. Blocking GABAA receptors reduces orientation selectivity of the inhibitory synaptic inputs and the spiking responses. Negative contrast stimuli in the flanking regions produce orientation-selective excitation in part by disinhibition of a tonic NMDA receptor-mediated input arising from ON bipolar cells. Comparison with earlier studies of OFF-type OSGCs indicates that diverse synaptic circuits have evolved in the retina to detect the orientation of edges in the visual input. SIGNIFICANCE STATEMENT A core goal for visual neuroscientists is to understand how neural circuits at each stage of the visual system extract and encode features from the visual scene. This study documents a novel type of orientation-selective ganglion cell in the retina and shows that the receptive field structure is remarkably similar to that of simple cells in primary visual cortex. 
However, the data indicate that, unlike in the cortex, orientation selectivity in the retina depends on the activity of inhibitory interneurons. The results further reveal the physiological basis for feature detection in the visual system, elucidate the synaptic mechanisms that generate orientation selectivity at an early stage of visual processing, and illustrate a novel role for NMDA receptors in retinal processing. PMID:26985041

  4. Synaptic Mechanisms Generating Orientation Selectivity in the ON Pathway of the Rabbit Retina.

    PubMed

    Venkataramani, Sowmya; Taylor, W Rowland

    2016-03-16

    Neurons that signal the orientation of edges within the visual field have been widely studied in primary visual cortex. Much less is known about the mechanisms of orientation selectivity that arise earlier in the visual stream. Here we examine the synaptic and morphological properties of a subtype of orientation-selective ganglion cell in the rabbit retina. The receptive field has an excitatory ON center, flanked by excitatory OFF regions, a structure similar to simple cell receptive fields in primary visual cortex. Examination of the light-evoked postsynaptic currents in these ON-type orientation-selective ganglion cells (ON-OSGCs) reveals that synaptic input is mediated almost exclusively through the ON pathway. Orientation selectivity is generated by larger excitation for preferred relative to orthogonal stimuli, and conversely larger inhibition for orthogonal relative to preferred stimuli. Excitatory orientation selectivity arises in part from the morphology of the dendritic arbors. Blocking GABAA receptors reduces orientation selectivity of the inhibitory synaptic inputs and the spiking responses. Negative contrast stimuli in the flanking regions produce orientation-selective excitation in part by disinhibition of a tonic NMDA receptor-mediated input arising from ON bipolar cells. Comparison with earlier studies of OFF-type OSGCs indicates that diverse synaptic circuits have evolved in the retina to detect the orientation of edges in the visual input. A core goal for visual neuroscientists is to understand how neural circuits at each stage of the visual system extract and encode features from the visual scene. This study documents a novel type of orientation-selective ganglion cell in the retina and shows that the receptive field structure is remarkably similar to that of simple cells in primary visual cortex. However, the data indicate that, unlike in the cortex, orientation selectivity in the retina depends on the activity of inhibitory interneurons. 
The results further reveal the physiological basis for feature detection in the visual system, elucidate the synaptic mechanisms that generate orientation selectivity at an early stage of visual processing, and illustrate a novel role for NMDA receptors in retinal processing. Copyright © 2016 the authors 0270-6474/16/363336-14$15.00/0.

  5. Visual and non-visual motion information processing during pursuit eye tracking in schizophrenia and bipolar disorder.

    PubMed

    Trillenberg, Peter; Sprenger, Andreas; Talamo, Silke; Herold, Kirsten; Helmchen, Christoph; Verleger, Rolf; Lencer, Rebekka

    2017-04-01

    Despite many reports on visual processing deficits in psychotic disorders, studies are needed on the integration of visual and non-visual components of eye movement control to improve the understanding of sensorimotor information processing in these disorders. Non-visual inputs to eye movement control include prediction of future target velocity from extrapolation of past visual target movement and anticipation of future target movements. It is unclear whether non-visual input is impaired in patients with schizophrenia. We recorded smooth pursuit eye movements in 21 patients with schizophrenia spectrum disorder, 22 patients with bipolar disorder, and 24 controls. In a foveo-fugal ramp task, the target was either continuously visible or was blanked during movement. We determined peak gain (measuring overall performance), initial eye acceleration (measuring visually driven pursuit), deceleration after target extinction (measuring prediction), eye velocity drifts before onset of target visibility (measuring anticipation), and residual gain during blanking intervals (measuring anticipation and prediction). In both patient groups, initial eye acceleration was decreased and the ability to adjust eye acceleration to increasing target acceleration was impaired. In contrast, neither deceleration nor eye drift velocity was reduced in patients, implying unimpaired non-visual contributions to pursuit drive. Disturbances of eye movement control in psychotic disorders appear to be a consequence of deficits in sensorimotor transformation rather than a pure failure in adding cognitive contributions to pursuit drive in higher-order cortical circuits. More generally, this deficit might reflect a fundamental imbalance between processing external input and acting according to internal preferences.

  6. Three Types of Cortical L5 Neurons that Differ in Brain-Wide Connectivity and Function

    PubMed Central

    Kim, Euiseok J.; Juavinett, Ashley L.; Kyubwa, Espoir M.; Jacobs, Matthew W.; Callaway, Edward M.

    2015-01-01

    Cortical layer 5 (L5) pyramidal neurons integrate inputs from many sources and distribute outputs to cortical and subcortical structures. Previous studies demonstrate two L5 pyramid types: cortico-cortical (CC) and cortico-subcortical (CS). We characterize connectivity and function of these cell types in mouse primary visual cortex and reveal a new subtype. Unlike previously described L5 CC and CS neurons, this new subtype does not project to striatum [cortico-cortical, non-striatal (CC-NS)] and has distinct morphology, physiology and visual responses. Monosynaptic rabies tracing reveals that CC neurons preferentially receive input from higher visual areas, while CS neurons receive more input from structures implicated in top-down modulation of brain states. CS neurons are also more direction-selective and prefer faster stimuli than CC neurons. These differences suggest distinct roles as specialized output channels, with CS neurons integrating information and generating responses more relevant to movement control and CC neurons being more important in visual perception. PMID:26671462

  7. Three Types of Cortical Layer 5 Neurons That Differ in Brain-wide Connectivity and Function.

    PubMed

    Kim, Euiseok J; Juavinett, Ashley L; Kyubwa, Espoir M; Jacobs, Matthew W; Callaway, Edward M

    2015-12-16

    Cortical layer 5 (L5) pyramidal neurons integrate inputs from many sources and distribute outputs to cortical and subcortical structures. Previous studies demonstrate two L5 pyramid types: cortico-cortical (CC) and cortico-subcortical (CS). We characterize connectivity and function of these cell types in mouse primary visual cortex and reveal a new subtype. Unlike previously described L5 CC and CS neurons, this new subtype does not project to striatum [cortico-cortical, non-striatal (CC-NS)] and has distinct morphology, physiology, and visual responses. Monosynaptic rabies tracing reveals that CC neurons preferentially receive input from higher visual areas, while CS neurons receive more input from structures implicated in top-down modulation of brain states. CS neurons are also more direction-selective and prefer faster stimuli than CC neurons. These differences suggest distinct roles as specialized output channels, with CS neurons integrating information and generating responses more relevant to movement control and CC neurons being more important in visual perception. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Distinct Roles of NMDAR and mGluR5 in Light Exposure Reversal of Feedforward Synaptic Strength in V1 of Juvenile Mice after Binocular Vision Deprivation.

    PubMed

    Tie, Xiaoxiu; Li, Shuo; Feng, Yilin; Lai, Biqin; Liu, Sheng; Jiang, Bin

    2018-06-01

In the visual cortex, sensory deprivation causes global augmentation of the amplitude of AMPA receptor-mediated miniature EPSCs in layer 2/3 pyramidal cells and enhancement of NMDA receptor-dependent long-term potentiation (LTP) in cells activated in layer 4, effects that are both rapidly reversed by light exposure. Layer 2/3 pyramidal cells receive both feedforward input from layer 4 and intra-cortical lateral input from the same layer; LTP is mainly induced by the former input. Whether feedforward excitatory synaptic strength is affected by visual deprivation and light exposure, how this synaptic strength correlates with the magnitude of LTP in this pathway, and the underlying mechanisms have not been explored. Here, we show that in juvenile mice, both dark rearing and dark exposure reduced feedforward excitatory synaptic strength, and that these effects can be completely reversed by 10-12 h and 6-8 h of light exposure, respectively. However, inhibition of NMDA receptors by CPP, or of mGluR5 by MPEP, prevented the effect of light exposure in mice reared in the dark from birth, while only inhibition of NMDAR prevented the effect of light exposure in dark-exposed mice. These results suggest that activation of both NMDAR and mGluR5 is essential for the light-exposure reversal of feedforward excitatory synaptic strength in mice dark-reared from birth, whereas in dark-exposed mice only activation of NMDAR is required. Copyright © 2018. Published by Elsevier Ltd.

  9. Image contrast enhancement using adjacent-blocks-based modification for local histogram equalization

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Pan, Zhibin

    2017-11-01

Infrared images usually have non-ideal characteristics such as weak target-to-background contrast and strong noise. Because of these characteristics, it is necessary to apply a contrast enhancement algorithm to improve the visual quality of infrared images. The histogram equalization (HE) algorithm is a widely used contrast enhancement method due to its effectiveness and simple implementation. A drawback of the HE algorithm, however, is that the local contrast of an image cannot be equally enhanced. Local histogram equalization algorithms have proved to be effective techniques for local image contrast enhancement. However, over-enhancement of noise and artifacts is easily found in images enhanced by local histogram equalization. In this paper, a new contrast enhancement technique based on the local histogram equalization algorithm is proposed to overcome the drawbacks mentioned above. The input images are segmented into three kinds of overlapped sub-blocks using their gradients. To overcome the over-enhancement effect, the histograms of these sub-blocks are then modified using adjacent sub-blocks. We pay particular attention to improving the contrast of detail information while the brightness of flat regions in these sub-blocks is well preserved. It will be shown that the proposed algorithm outperforms other related algorithms by enhancing local contrast without introducing over-enhancement effects or additional noise.
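The adjacent-blocks modification itself is the paper's contribution and is not reproduced here, but the histogram-equalization baseline that local HE methods apply per sub-block can be sketched in a few lines of Python. This is a minimal illustration for 8-bit grayscale data; the function name and the flat-list image representation are assumptions for readability:

```python
def equalize(pixels, levels=256):
    """Global histogram equalization for a flat list of 8-bit grayscale values."""
    # Build the intensity histogram.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function (CDF).
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    # Map each level so that the output CDF is as uniform as possible.
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1)) for c in cdf]
    return [lut[p] for p in pixels]
```

Local HE runs this mapping independently on each sub-block, which is precisely what over-amplifies noise in nearly uniform blocks; the paper's remedy is to reshape each sub-block's histogram using its neighbors before equalizing.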

  10. Parallel Processing Strategies of the Primate Visual System

    PubMed Central

    Nassi, Jonathan J.; Callaway, Edward M.

    2009-01-01

Incoming sensory information is sent to the brain along modality-specific channels corresponding to the five senses. Each of these channels further parses the incoming signals into parallel streams to provide a compact, efficient input to the brain. Ultimately, these parallel input signals must be elaborated upon and integrated within the cortex to provide a unified and coherent percept. Recent studies in the primate visual cortex have greatly contributed to our understanding of how this goal is accomplished. Multiple strategies including retinal tiling, hierarchical and parallel processing and modularity, defined spatially and by cell type-specific connectivity, are all used by the visual system to recover the rich detail of our visual surroundings. PMID:19352403

  11. Visual prediction and perceptual expertise

    PubMed Central

    Cheung, Olivia S.; Bar, Moshe

    2012-01-01

    Making accurate predictions about what may happen in the environment requires analogies between perceptual input and associations in memory. These elements of predictions are based on cortical representations, but little is known about how these processes can be enhanced by experience and training. On the other hand, studies on perceptual expertise have revealed that the acquisition of expertise leads to strengthened associative processing among features or objects, suggesting that predictions and expertise may be tightly connected. Here we review the behavioral and neural findings regarding the mechanisms involving prediction and expert processing, and highlight important possible overlaps between them. Future investigation should examine the relations among perception, memory and prediction skills as a function of expertise. The knowledge gained by this line of research will have implications for visual cognition research, and will advance our understanding of how the human brain can improve its ability to predict by learning from experience. PMID:22123523

  12. NMDA Receptor Regulation Prevents Regression of Visual Cortical Function in the Absence of Mecp2

    PubMed Central

    Durand, Severine; Patrizi, Annarita; Quast, Kathleen B.; Hachigian, Lea; Pavlyuk, Roman; Saxena, Alka; Carninci, Piero; Hensch, Takao K.; Fagiolini, Michela

    2012-01-01

Brain function is shaped by postnatal experience and vulnerable to disruption of Methyl-CpG-binding protein, Mecp2, in multiple neurodevelopmental disorders. How Mecp2 contributes to the experience-dependent refinement of specific cortical circuits and their impairment remains unknown. We analyzed vision in gene-targeted mice and observed an initial normal development in the absence of Mecp2. Visual acuity then rapidly regressed after postnatal day P35-40 and cortical circuits largely fell silent by P55-60. Enhanced inhibitory gating and an excess of parvalbumin-positive, perisomatic input preceded the loss of vision. Both cortical function and inhibitory hyperconnectivity were strikingly rescued independent of Mecp2 by early sensory deprivation or genetic deletion of the excitatory NMDA receptor subunit, NR2A. Thus, vision is a sensitive biomarker of progressive cortical dysfunction and may guide novel, circuit-based therapies for Mecp2 deficiency. PMID:23259945

  13. Distribution of Potential Hydrothermally Altered Rocks in Central Colorado Derived From Landsat Thematic Mapper Data: A Geographic Information System Data Set

    USGS Publications Warehouse

    Knepper, Daniel H.

    2010-01-01

    As part of the Central Colorado Mineral Resource Assessment Project, the digital image data for four Landsat Thematic Mapper scenes covering central Colorado between Wyoming and New Mexico were acquired and band ratios were calculated after masking pixels dominated by vegetation, snow, and terrain shadows. Ratio values were visually enhanced by contrast stretching, revealing only those areas with strong responses (high ratio values). A color-ratio composite mosaic was prepared for the four scenes so that the distribution of potentially hydrothermally altered rocks could be visually evaluated. To provide a more useful input to a Geographic Information System-based mineral resource assessment, the information contained in the color-ratio composite raster image mosaic was converted to vector-based polygons after thresholding to isolate the strongest ratio responses and spatial filtering to reduce vector complexity and isolate the largest occurrences of potentially hydrothermally altered rocks.
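The ratio-and-stretch workflow described above can be illustrated with a short Python sketch: divide one band by another pixel-by-pixel, then linearly stretch the result to the display range so that only strong ratio responses stand out. The specific band pairing and the percentile cutoffs below are illustrative assumptions, not the values used in the study:

```python
def band_ratio(band_a, band_b, eps=1e-6):
    """Per-pixel ratio of two co-registered bands (eps avoids divide-by-zero)."""
    return [a / (b + eps) for a, b in zip(band_a, band_b)]

def contrast_stretch(values, low_pct=2, high_pct=98):
    """Percentile-based linear stretch of ratio values to the 0-255 display range."""
    s = sorted(values)
    lo = s[int(len(s) * low_pct / 100)]
    hi = s[min(int(len(s) * high_pct / 100), len(s) - 1)]
    span = (hi - lo) or 1  # guard against a constant image
    return [max(0, min(255, round((v - lo) / span * 255))) for v in values]
```

Thresholding the stretched image then isolates the strongest responses, which is the step the study uses before converting the raster mosaic to vector polygons.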

  14. Adapting the iSNOBAL model for improved visualization in a GIS environment

    NASA Astrophysics Data System (ADS)

    Johansen, W. J.; Delparte, D.

    2014-12-01

Snowmelt is a primary source of crucial water resources in much of the western United States. Researchers are developing models that estimate snowmelt to aid in water resource management. One such model is the image snowcover energy and mass balance (iSNOBAL) model, which uses input climate grids to simulate the development and melting of snowpack in mountainous regions. This study applies the model to the Reynolds Creek Experimental Watershed in southwestern Idaho, utilizing novel approaches that incorporate geographic information systems (GIS). To improve visualization of the iSNOBAL model, we have adapted it to run in a GIS environment, which is suited to both input grid creation and the visualization of results. The data used for input grid creation can be stored locally or on a web server. Kriging interpolation embedded within Python scripts is used to create air temperature, soil temperature, humidity, and precipitation grids, while built-in GIS and existing tools are used to create solar radiation and wind grids. Additional Python scripting is then used to perform the model calculations. The final product is a user-friendly and accessible version of the iSNOBAL model, including the ability to easily visualize and interact with model results, all within a web- or desktop-based GIS environment. This environment allows interactive manipulation of model parameters and visualization of the resulting input grids for the model calculations. Future work is moving toward adapting the model for use in a 3D gaming engine for improved visualization and interaction.
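The grid-creation step turns scattered station measurements into continuous climate surfaces. The study uses kriging; as a simpler stand-in, the sketch below uses inverse-distance weighting to show the basic idea of estimating a grid-cell value from nearby stations. The IDW substitution and the function name are assumptions for illustration, not the study's implementation:

```python
def idw(stations, query, power=2):
    """Inverse-distance-weighted estimate at `query` from ((x, y), value) stations."""
    num = den = 0.0
    for (x, y), value in stations:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0:
            return value  # query falls exactly on a station
        w = 1.0 / d2 ** (power / 2)  # weight decays with distance^power
        num += w * value
        den += w
    return num / den
```

Evaluating such an estimator at every cell of a raster yields an input grid; kriging differs in that its weights come from a fitted spatial covariance model rather than distance alone.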

  15. Isolating Visual and Proprioceptive Components of Motor Sequence Learning in ASD.

    PubMed

    Sharer, Elizabeth A; Mostofsky, Stewart H; Pascual-Leone, Alvaro; Oberman, Lindsay M

    2016-05-01

In addition to defining impairments in social communication skills, individuals with autism spectrum disorder (ASD) also show impairments in more basic sensory and motor skills. Development of new skills involves integrating information from multiple sensory modalities. This input is then used to form internal models of action that can be accessed both when performing skilled movements and when understanding actions performed by others. Learning skilled gestures is particularly reliant on the integration of visual and proprioceptive input. We used a modified serial reaction time task (SRTT) to decompose proprioceptive and visual components and examine whether patterns of implicit motor skill learning differ in ASD participants as compared with healthy controls. While both groups learned the implicit motor sequence during training, healthy controls showed robust generalization, whereas ASD participants demonstrated little generalization when visual input was constant. In contrast, no group differences in generalization were observed when proprioceptive input was constant, with both groups showing limited degrees of generalization. The findings suggest that, when learning a motor sequence, individuals with ASD tend to rely less on visual feedback than healthy controls do. Visuomotor representations are considered to underlie imitative learning and action understanding and are thereby crucial to social skill and cognitive development. Thus, anomalous patterns of implicit motor learning, with a tendency to discount visual feedback, may be an important contributor to the core social communication deficits that characterize ASD. Autism Res 2016, 9: 563-569. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  16. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    PubMed

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  17. Absence of Visual Input Results in the Disruption of Grid Cell Firing in the Mouse.

    PubMed

    Chen, Guifen; Manson, Daniel; Cacucci, Francesca; Wills, Thomas Joseph

    2016-09-12

Grid cells are spatially modulated neurons within the medial entorhinal cortex whose firing fields are arranged at the vertices of tessellating equilateral triangles [1]. The exquisite periodicity of their firing has led to the suggestion that they represent a path integration signal, tracking the organism's position by integrating speed and direction of movement [2-10]. External sensory inputs are required to reset any errors that the path integrator would inevitably accumulate. Here we probe the nature of the external sensory inputs required to sustain grid firing, by recording grid cells as mice explore familiar environments in complete darkness. The absence of visual cues results in a significant disruption of grid cell firing patterns, even when the quality of the directional information provided by head direction cells is largely preserved. Darkness alters the expression of velocity signaling within the entorhinal cortex, with changes evident in grid cell firing rate and the local field potential theta frequency. Short-term (<1.5 s) spike timing relationships between grid cell pairs are preserved in the dark, indicating that network patterns of excitatory and inhibitory coupling between grid cells exist independently of visual input and of spatially periodic firing. However, we find no evidence of preserved hexagonal symmetry in the spatial firing of single grid cells at comparable short timescales. Taken together, these results demonstrate that visual input is required to sustain grid cell periodicity and stability in mice and suggest that grid cells in mice cannot perform accurate path integration in the absence of reliable visual cues. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  18. Assessing visual requirements for social context-dependent activation of the songbird song system

    PubMed Central

    Hara, Erina; Kubikova, Lubica; Hessler, Neal A.; Jarvis, Erich D.

    2008-01-01

    Social context has been shown to have a profound influence on brain activation in a wide range of vertebrate species. Best studied in songbirds, when males sing undirected song, the level of neural activity and expression of immediate early genes (IEGs) in several song nuclei is dramatically higher or lower than when they sing directed song to other birds, particularly females. This differential social context-dependent activation is independent of auditory input and is not simply dependent on the motor act of singing. These findings suggested that the critical sensory modality driving social context-dependent differences in the brain could be visual cues. Here, we tested this hypothesis by examining IEG activation in song nuclei in hemispheres to which visual input was normal or blocked. We found that covering one eye blocked visually induced IEG expression throughout both contralateral visual pathways of the brain, and reduced activation of the contralateral ventral tegmental area, a non-visual midbrain motivation-related area affected by social context. However, blocking visual input had no effect on the social context-dependent activation of the contralateral song nuclei during female-directed singing. Our findings suggest that individual sensory modalities are not direct driving forces for the social context differences in song nuclei during singing. Rather, these social context differences in brain activation appear to depend more on the general sense that another individual is present. PMID:18826930

  19. Cortical and Subcortical Coordination of Visual Spatial Attention Revealed by Simultaneous EEG-fMRI Recording.

    PubMed

    Green, Jessica J; Boehler, Carsten N; Roberts, Kenneth C; Chen, Ling-Chia; Krebs, Ruth M; Song, Allen W; Woldorff, Marty G

    2017-08-16

    Visual spatial attention has been studied in humans with both electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) individually. However, due to the intrinsic limitations of each of these methods used alone, our understanding of the systems-level mechanisms underlying attentional control remains limited. Here, we examined trial-to-trial covariations of concurrently recorded EEG and fMRI in a cued visual spatial attention task in humans, which allowed delineation of both the generators and modulators of the cue-triggered event-related oscillatory brain activity underlying attentional control function. The fMRI activity in visual cortical regions contralateral to the cued direction of attention covaried positively with occipital gamma-band EEG, consistent with activation of cortical regions representing attended locations in space. In contrast, fMRI activity in ipsilateral visual cortical regions covaried inversely with occipital alpha-band oscillations, consistent with attention-related suppression of the irrelevant hemispace. Moreover, the pulvinar nucleus of the thalamus covaried with both of these spatially specific, attention-related, oscillatory EEG modulations. Because the pulvinar's neuroanatomical geometry makes it unlikely to be a direct generator of the scalp-recorded EEG, these covariational patterns appear to reflect the pulvinar's role as a regulatory control structure, sending spatially specific signals to modulate visual cortex excitability proactively. Together, these combined EEG/fMRI results illuminate the dynamically interacting cortical and subcortical processes underlying spatial attention, providing important insight not realizable using either method alone. 
SIGNIFICANCE STATEMENT Noninvasive recordings of changes in the brain's blood flow using functional magnetic resonance imaging and electrical activity using electroencephalography in humans have individually shown that shifting attention to a location in space produces spatially specific changes in visual cortex activity in anticipation of a stimulus. The mechanisms controlling these attention-related modulations of sensory cortex, however, are poorly understood. Here, we recorded these two complementary measures of brain activity simultaneously and examined their trial-to-trial covariations to gain insight into these attentional control mechanisms. This multi-methodological approach revealed the attention-related coordination of visual cortex modulation by the subcortical pulvinar nucleus of the thalamus while also disentangling the mechanisms underlying the attentional enhancement of relevant stimulus input and those underlying the concurrent suppression of irrelevant input. Copyright © 2017 the authors.

  20. Role of feedforward geniculate inputs in the generation of orientation selectivity in the cat's primary visual cortex

    PubMed Central

    Viswanathan, Sivaram; Jayakumar, Jaikishan; Vidyasagar, Trichur R

    2011-01-01

Neurones of the mammalian primary visual cortex have the remarkable property of being selective for the orientation of visual contours. It has been controversial whether this selectivity arises from intracortical mechanisms, from the pattern of afferent connectivity from the lateral geniculate nucleus (LGN) to cortical cells, or from the sharpening of a bias that is already present in the responses of many geniculate cells. To investigate this, we employed a variation of an electrical stimulation protocol in the LGN that has been claimed to suppress intracortical inputs and isolate the raw geniculocortical input to a striate cortical cell. Such stimulation led to a sharpening of the orientation sensitivity of geniculate cells themselves and some broadening of cortical orientation selectivity. These findings are consistent with the idea that non-specific inhibition of the signals from LGN cells which exhibit an orientation bias can generate the sharp orientation selectivity of primary visual cortical cells. This obviates the need for an excitatory convergence from geniculate cells whose receptive fields are arranged along a row in visual space, as in the classical model, and provides a framework in which orientation sensitivity originates in the retina and is sharpened through inhibition at higher levels of the visual pathway. PMID:21486788

  1. Predicting Cortical Dark/Bright Asymmetries from Natural Image Statistics and Early Visual Transforms

    PubMed Central

    Cooper, Emily A.; Norcia, Anthony M.

    2015-01-01

    The nervous system has evolved in an environment with structure and predictability. One of the ubiquitous principles of sensory systems is the creation of circuits that capitalize on this predictability. Previous work has identified predictable non-uniformities in the distributions of basic visual features in natural images that are relevant to the encoding tasks of the visual system. Here, we report that the well-established statistical distributions of visual features -- such as visual contrast, spatial scale, and depth -- differ between bright and dark image components. Following this analysis, we go on to trace how these differences in natural images translate into different patterns of cortical input that arise from the separate bright (ON) and dark (OFF) pathways originating in the retina. We use models of these early visual pathways to transform natural images into statistical patterns of cortical input. The models include the receptive fields and non-linear response properties of the magnocellular (M) and parvocellular (P) pathways, with their ON and OFF pathway divisions. The results indicate that there are regularities in visual cortical input beyond those that have previously been appreciated from the direct analysis of natural images. In particular, several dark/bright asymmetries provide a potential account for recently discovered asymmetries in how the brain processes visual features, such as violations of classic energy-type models. On the basis of our analysis, we expect that the dark/bright dichotomy in natural images plays a key role in the generation of both cortical and perceptual asymmetries. PMID:26020624

  2. Audiovisual speech integration in the superior temporal region is dysfunctional in dyslexia.

    PubMed

    Ye, Zheng; Rüsseler, Jascha; Gerth, Ivonne; Münte, Thomas F

    2017-07-25

Dyslexia is an impairment of reading and spelling that affects both children and adults even after many years of schooling. Dyslexic readers have deficits in the integration of auditory and visual inputs, but the neural mechanisms of these deficits are still unclear. This fMRI study examined the neural processing of auditorily presented German numbers 0-9 and videos of lip movements of a German native speaker voicing the same numbers, in unimodal (auditory or visual) and bimodal (always congruent) conditions, in dyslexic readers and matched fluent readers. We confirmed results of previous studies that the superior temporal gyrus/sulcus plays a critical role in audiovisual speech integration: fluent readers showed greater superior temporal activations for combined audiovisual stimuli than for auditory- or visual-only stimuli. Importantly, this enhancement effect was absent in dyslexic readers. Moreover, the auditory network (bilateral superior temporal regions plus medial PFC) was dynamically modulated during audiovisual integration in fluent, but not in dyslexic, readers. These results suggest that superior temporal dysfunction may underlie poor audiovisual speech integration in readers with dyslexia. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  3. Automatic Segmentation of Drosophila Neural Compartments Using GAL4 Expression Data Reveals Novel Visual Pathways.

    PubMed

    Panser, Karin; Tirian, Laszlo; Schulze, Florian; Villalba, Santiago; Jefferis, Gregory S X E; Bühler, Katja; Straw, Andrew D

    2016-08-08

Identifying distinct anatomical structures within the brain and developing genetic tools to target them are fundamental steps for understanding brain function. We hypothesize that enhancer expression patterns can be used to automatically identify functional units such as neuropils and fiber tracts. We used two recent, genome-scale Drosophila GAL4 libraries and associated confocal image datasets to segment large brain regions into smaller subvolumes. Our results (available at https://strawlab.org/braincode) support this hypothesis because regions with well-known anatomy, namely the antennal lobes and central complex, were automatically segmented into familiar compartments. The basis for the structural assignment is clustering of voxels based on patterns of enhancer expression. These initial clusters are agglomerated to make hierarchical predictions of structure. We applied the algorithm to central brain regions receiving input from the optic lobes. Based on the automated segmentation and manual validation, we can identify and provide promising driver lines for 11 previously identified and 14 novel types of visual projection neurons and their associated optic glomeruli. The same strategy can be used in other brain regions and likely other species, including vertebrates. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
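The cluster-then-agglomerate idea can be illustrated with a minimal single-linkage agglomerative clustering sketch, where each item is a voxel's vector of enhancer-expression values. This is a generic textbook illustration under assumed toy data; the paper's actual pipeline operates on registered confocal image volumes and is considerably more elaborate:

```python
def agglomerate(vectors, n_clusters):
    """Single-linkage agglomerative clustering of expression vectors.

    Repeatedly merges the two clusters whose closest members are nearest,
    until n_clusters remain. Returns clusters as lists of item indices.
    """
    clusters = [[i] for i in range(len(vectors))]

    def dist(i, j):  # Euclidean distance between two expression vectors
        return sum((a - b) ** 2 for a, b in zip(vectors[i], vectors[j])) ** 0.5

    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Single linkage: distance between closest members of the pair.
                d = min(dist(i, j) for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a].extend(clusters[b])
        del clusters[b]
    return clusters
```

Building a hierarchy simply means recording the merge order instead of stopping at a fixed cluster count.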

  4. Task-specific reorganization of the auditory cortex in deaf humans

    PubMed Central

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-01

The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964

  5. Top-down influence on the visual cortex of the blind during sensory substitution.

    PubMed

    Murphy, Matthew C; Nau, Amy C; Fisher, Christopher; Kim, Seong-Gi; Schuman, Joel S; Chan, Kevin C

    2016-01-15

    Visual sensory substitution devices provide a non-surgical and flexible approach to vision rehabilitation in the blind. These devices convert images taken by a camera into cross-modal sensory signals that are presented as a surrogate for direct visual input. While previous work has demonstrated that the visual cortex of blind subjects is recruited during sensory substitution, the cognitive basis of this activation remains incompletely understood. To test the hypothesis that top-down input provides a significant contribution to this activation, we performed functional MRI scanning in 11 blind (7 acquired and 4 congenital) and 11 sighted subjects under two conditions: passive listening of image-encoded soundscapes before sensory substitution training and active interpretation of the same auditory sensory substitution signals after a 10-minute training session. We found that the modulation of visual cortex activity due to active interpretation was significantly stronger in the blind over sighted subjects. In addition, congenitally blind subjects showed stronger task-induced modulation in the visual cortex than acquired blind subjects. In a parallel experiment, we scanned 18 blind (11 acquired and 7 congenital) and 18 sighted subjects at rest to investigate alterations in functional connectivity due to visual deprivation. The results demonstrated that visual cortex connectivity of the blind shifted away from sensory networks and toward known areas of top-down input. Taken together, our data support the model of the brain, including the visual system, as a highly flexible task-based and not sensory-based machine. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Task-specific reorganization of the auditory cortex in deaf humans.

    PubMed

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  7. The Effect of Conventional and Transparent Surgical Masks on Speech Understanding in Individuals with and without Hearing Loss.

    PubMed

    Atcherson, Samuel R; Mendel, Lisa Lucks; Baltimore, Wesley J; Patro, Chhayakanta; Lee, Sungmin; Pousson, Monique; Spann, M Joshua

    2017-01-01

    It is generally well known that speech perception is often improved with integrated audiovisual input whether in quiet or in noise. In many health-care environments, however, conventional surgical masks block visual access to the mouth and obscure other potential facial cues. In addition, these environments can be noisy. Although these masks may not alter the acoustic properties, the presence of noise in addition to the lack of visual input can have a deleterious effect on speech understanding. A transparent ("see-through") surgical mask may help to overcome this issue. The aim of this study was to compare the effect of noise and various visual input conditions on speech understanding for listeners with normal hearing (NH) and hearing impairment using different surgical masks. Participants were assigned to one of three groups based on hearing sensitivity in this quasi-experimental, cross-sectional study. A total of 31 adults participated in this study: one talker, ten listeners with NH, ten listeners with moderate sensorineural hearing loss, and ten listeners with severe-to-profound hearing loss. Selected lists from the Connected Speech Test were digitally recorded with and without surgical masks and then presented to the listeners at 65 dB HL in five conditions against a background of four-talker babble (+10 dB SNR): without a mask (auditory only), without a mask (auditory and visual), with a transparent mask (auditory only), with a transparent mask (auditory and visual), and with a paper mask (auditory only). A significant difference was found in the spectral analyses of the speech stimuli with and without the masks; however, the difference was no more than ∼2 dB root mean square. Listeners with NH performed consistently well across all conditions. Both groups of listeners with hearing impairment benefitted from visual input from the transparent mask. The magnitude of improvement in speech perception in noise was greatest for the severe-to-profound group. 
Findings confirm improved speech perception performance in noise for listeners with hearing impairment when visual input is provided using a transparent surgical mask. Most importantly, the use of the transparent mask did not negatively affect speech perception performance in noise. American Academy of Audiology

  8. Temporal precision in the visual pathway through the interplay of excitation and stimulus-driven suppression.

    PubMed

    Butts, Daniel A; Weng, Chong; Jin, Jianzhong; Alonso, Jose-Manuel; Paninski, Liam

    2011-08-03

    Visual neurons can respond with extremely precise temporal patterning to visual stimuli that change on much slower time scales. Here, we investigate how the precise timing of cat thalamic spike trains, which can have timing as precise as 1 ms, is related to the stimulus, in the context of both artificial noise and natural visual stimuli. Using a nonlinear modeling framework applied to extracellular data, we demonstrate that the precise timing of thalamic spike trains can be explained by the interplay between an excitatory input and a delayed suppressive input that resembles inhibition, such that neuronal responses only occur in brief windows where excitation exceeds suppression. The resulting description of thalamic computation resembles earlier models of contrast adaptation, suggesting a more general role for mechanisms of contrast adaptation in visual processing. Thus, we describe a more complex computation underlying thalamic responses to artificial and natural stimuli that has implications for understanding how visual information is represented in the early stages of visual processing.
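
    The core computation described here, spikes occurring only in brief windows where excitation exceeds a delayed suppressive signal, can be sketched numerically. The following is a toy illustration; the stimulus, delay, smoothing, and threshold values are invented for demonstration and are not the fitted model from the paper:

```python
import numpy as np

# Time axis: 500 ms at 1 ms resolution.
t = np.arange(0, 0.5, 0.001)

# A slowly varying stimulus drive (invented for illustration).
stimulus = np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

# Excitation follows the positive part of the stimulus.
excitation = np.clip(stimulus, 0, None)

# Delayed, smoothed suppression: the same signal shifted by 10 ms
# and low-pass filtered, mimicking a delayed inhibitory-like input.
delay = 10  # samples, i.e. ms
suppression = np.convolve(np.roll(excitation, delay),
                          np.ones(5) / 5, mode="same")

# Firing is possible only where excitation exceeds suppression, so
# response windows are much briefer than the stimulus fluctuations.
drive = np.clip(excitation - suppression, 0, None)
response_windows = drive > 0.05

print(f"excitation above threshold: {np.mean(excitation > 0.05):.0%} of time")
print(f"response windows:           {np.mean(response_windows):.0%} of time")
```

The narrowing of the response relative to the stimulus is the qualitative effect modeled in the paper; the authors' actual framework fits both inputs and their nonlinearities to extracellular data.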

  9. Electrophysiological evidence of altered visual processing in adults who experienced visual deprivation during infancy.

    PubMed

    Segalowitz, Sidney J; Sternin, Avital; Lewis, Terri L; Dywan, Jane; Maurer, Daphne

    2017-04-01

    We examined the role of early visual input in visual system development by testing adults who had been born with dense bilateral cataracts that blocked all patterned visual input during infancy until the cataractous lenses were removed surgically and the eyes fitted with compensatory contact lenses. Patients viewed checkerboards and textures to explore early processing regions (V1, V2), Glass patterns to examine global form processing (V4), and moving stimuli to explore global motion processing (V5). Patients' ERPs differed from those of controls in that (1) the V1 component was much smaller for all but the simplest stimuli and (2) extrastriate components did not differentiate amongst texture stimuli, Glass patterns, or motion stimuli. The results indicate that early visual deprivation contributes to permanent abnormalities at early and mid levels of visual processing, consistent with enduring behavioral deficits in the ability to process complex textures, global form, and global motion. © 2017 Wiley Periodicals, Inc.

  10. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    PubMed

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that the audiovisual temporal binding mechanisms for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on the leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. 
European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  11. Visual Functions of the Thalamus

    PubMed Central

    Usrey, W. Martin; Alitto, Henry J.

    2017-01-01

    The thalamus is the heavily interconnected partner of the neocortex. All areas of the neocortex receive afferent input from and send efferent projections to specific thalamic nuclei. Through these connections, the thalamus serves to provide the cortex with sensory input, and to facilitate interareal cortical communication and motor and cognitive functions. In the visual system, the lateral geniculate nucleus (LGN) of the dorsal thalamus is the gateway through which visual information reaches the cerebral cortex. Visual processing in the LGN includes spatial and temporal influences on visual signals that serve to adjust response gain, transform the temporal structure of retinal activity patterns, and increase the signal-to-noise ratio of the retinal signal while preserving its basic content. This review examines recent advances in our understanding of LGN function and circuit organization and places these findings in a historical context. PMID:28217740

  12. Thermal effects in the Input Optics of the Enhanced Laser Interferometer Gravitational-Wave Observatory interferometers.

    PubMed

    Dooley, Katherine L; Arain, Muzammil A; Feldbaum, David; Frolov, Valery V; Heintze, Matthew; Hoak, Daniel; Khazanov, Efim A; Lucianetti, Antonio; Martin, Rodica M; Mueller, Guido; Palashov, Oleg; Quetschke, Volker; Reitze, David H; Savage, R L; Tanner, D B; Williams, Luke F; Wu, Wan

    2012-03-01

    We present the design and performance of the LIGO Input Optics subsystem as implemented for the sixth science run of the LIGO interferometers. The Initial LIGO Input Optics experienced thermal side effects when operating with 7 W input power. We designed, built, and implemented improved versions of the Input Optics for Enhanced LIGO, an incremental upgrade to the Initial LIGO interferometers, designed to run with 30 W input power. At four times the power of Initial LIGO, the Enhanced LIGO Input Optics demonstrated improved performance including better optical isolation, less thermal drift, minimal thermal lensing, and higher optical efficiency. The success of the Input Optics design fosters confidence for its ability to perform well in Advanced LIGO.

  13. Adaptive Intuitionistic Fuzzy Enhancement of Brain Tumor MR Images

    NASA Astrophysics Data System (ADS)

    Deng, He; Deng, Wankai; Sun, Xianping; Ye, Chaohui; Zhou, Xin

    2016-10-01

    Image enhancement techniques are able to improve the contrast and visual quality of magnetic resonance (MR) images. However, conventional methods cannot make up for deficiencies encountered by the respective brain tumor MR imaging modes. In this paper, we propose an adaptive intuitionistic fuzzy sets-based scheme, called AIFE, which takes information provided by different MR acquisitions and tries to enhance the normal and abnormal structural regions of the brain while displaying the enhanced results as a single image. The AIFE scheme first separates an input image into several sub-images, then divides each sub-image into object and background areas. After that, different novel fuzzification, hyperbolization and defuzzification operations are implemented on each object/background area, and finally an enhanced result is achieved via nonlinear fusion operators. The fuzzy implementations can be processed in parallel. Real data experiments demonstrate that the AIFE scheme not only effectively fuses information from images acquired with different MR sequences into a single image, but also outperforms conventional baseline algorithms in enhancement quality. This indicates that the proposed AIFE scheme has potential for improving the detection and diagnosis of brain tumors.
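
    The pipeline the abstract outlines (fuzzify intensities, apply a nonlinear transform, defuzzify back to gray levels) can be sketched generically. This is a minimal classical fuzzy contrast-intensification sketch, not the authors' AIFE algorithm; their intuitionistic membership functions, sub-image partitioning and nonlinear fusion operators are not reproduced here:

```python
import numpy as np

def fuzzy_enhance(img: np.ndarray) -> np.ndarray:
    """Generic fuzzy contrast enhancement of a grayscale image.

    1) Fuzzification: map gray levels to memberships in [0, 1].
    2) Nonlinear transform: the classical intensification (INT)
       operator, which pushes memberships away from 0.5.
    3) Defuzzification: map memberships back to gray levels.
    """
    lo, hi = float(img.min()), float(img.max())
    mu = (img.astype(float) - lo) / (hi - lo + 1e-12)         # fuzzify
    mu = np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2)  # INT
    return (lo + mu * (hi - lo)).astype(img.dtype)            # defuzzify

# Toy example: a low-contrast 8-bit image; mid-gray values are
# pushed toward the extremes, raising contrast.
img = np.array([[100, 110], [120, 130]], dtype=np.uint8)
out = fuzzy_enhance(img)
print(out)
```

Extreme gray levels are preserved while intermediate values move apart, which is the basic contrast-raising behavior that fuzzy enhancement schemes build on.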

  14. Posterior Inferotemporal Cortex Cells Use Multiple Input Pathways for Shape Encoding.

    PubMed

    Ponce, Carlos R; Lomber, Stephen G; Livingstone, Margaret S

    2017-05-10

    In the macaque monkey brain, posterior inferior temporal (PIT) cortex cells contribute to visual object recognition. They receive concurrent inputs from visual areas V4, V3, and V2. We asked how these different anatomical pathways shape PIT response properties by deactivating them while monitoring PIT activity in two male macaques. We found that cooling of V4 or V2|3 did not lead to consistent changes in population excitatory drive; however, population pattern analyses showed that V4-based pathways were more important than V2|3-based pathways. We did not find any image features that predicted decoding accuracy differences between both interventions. Using the HMAX hierarchical model of visual recognition, we found that different groups of simulated "PIT" units with different input histories (lacking "V2|3" or "V4" input) allowed for comparable levels of object-decoding performance and that removing a large fraction of "PIT" activity resulted in similar drops in performance as in the cooling experiments. We conclude that distinct input pathways to PIT relay similar types of shape information, with V1-dependent V4 cells providing more quantitatively useful information for overall encoding than cells in V2 projecting directly to PIT. SIGNIFICANCE STATEMENT Convolutional neural networks are the best models of the visual system, but most emphasize input transformations across a serial hierarchy akin to the primary "ventral stream" (V1 → V2 → V4 → IT). However, the ventral stream also comprises parallel "bypass" pathways: V1 also connects to V4, and V2 to IT. To explore the advantages of mixing long and short pathways in the macaque brain, we used cortical cooling to silence inputs to posterior IT and compared the findings with an HMAX model with parallel pathways. Copyright © 2017 the authors 0270-6474/17/375019-16$15.00/0.

  15. Posterior Inferotemporal Cortex Cells Use Multiple Input Pathways for Shape Encoding

    PubMed Central

    2017-01-01

    In the macaque monkey brain, posterior inferior temporal (PIT) cortex cells contribute to visual object recognition. They receive concurrent inputs from visual areas V4, V3, and V2. We asked how these different anatomical pathways shape PIT response properties by deactivating them while monitoring PIT activity in two male macaques. We found that cooling of V4 or V2|3 did not lead to consistent changes in population excitatory drive; however, population pattern analyses showed that V4-based pathways were more important than V2|3-based pathways. We did not find any image features that predicted decoding accuracy differences between both interventions. Using the HMAX hierarchical model of visual recognition, we found that different groups of simulated “PIT” units with different input histories (lacking “V2|3” or “V4” input) allowed for comparable levels of object-decoding performance and that removing a large fraction of “PIT” activity resulted in similar drops in performance as in the cooling experiments. We conclude that distinct input pathways to PIT relay similar types of shape information, with V1-dependent V4 cells providing more quantitatively useful information for overall encoding than cells in V2 projecting directly to PIT. SIGNIFICANCE STATEMENT Convolutional neural networks are the best models of the visual system, but most emphasize input transformations across a serial hierarchy akin to the primary “ventral stream” (V1 → V2 → V4 → IT). However, the ventral stream also comprises parallel “bypass” pathways: V1 also connects to V4, and V2 to IT. To explore the advantages of mixing long and short pathways in the macaque brain, we used cortical cooling to silence inputs to posterior IT and compared the findings with an HMAX model with parallel pathways. PMID:28416597

  16. Functional and structural comparison of visual lateralization in birds – similar but still different

    PubMed Central

    Ströckens, Felix

    2014-01-01

    Vertebrate brains display physiological and anatomical left-right differences, which are related to hemispheric dominances for specific functions. Functional lateralizations likely rely on structural left-right differences in intra- and interhemispheric connectivity patterns that develop in tight gene-environment interactions. The visual systems of chickens and pigeons show that asymmetrical light stimulation during ontogeny induces a dominance of the left hemisphere for visuomotor control that is paralleled by projection asymmetries within the ascending visual pathways. But structural asymmetries differ substantially between the two species concerning the affected pathway (thalamo- vs. tectofugal system), constancy of effects (transient vs. permanent), and the hemisphere receiving stronger bilateral input (right vs. left). These discrepancies suggest that at least two aspects of visual processes are influenced by asymmetric light stimulation: (1) visuomotor dominance develops within the ontogenetically more strongly stimulated hemisphere but not necessarily in the one receiving stronger bottom-up input. As a secondary consequence of asymmetrical light experience, lateralized top-down mechanisms play a critical role in the emergence of hemispheric dominance. (2) Ontogenetic light experiences may affect the dominant use of left- and right-hemispheric strategies. Evidence from social and spatial cognition tasks indicates that chickens rely more on a right-hemispheric global strategy whereas pigeons display a dominance of the left hemisphere. Thus, behavioral asymmetries are linked to a stronger bilateral input to the right hemisphere in chickens but to the left one in pigeons. The degree of bilateral visual input may determine the dominant visual processing strategy when redundant encoding is possible. 
This analysis supports the view that environmental stimulation shapes the balance of hemisphere-specific processing through lateralized interactions of bottom-up and top-down systems. PMID:24723898

  17. Retinal Origin of Direction Selectivity in the Superior Colliculus

    PubMed Central

    Shi, Xuefeng; Barchini, Jad; Ledesma, Hector Acaron; Koren, David; Jin, Yanjiao; Liu, Xiaorong; Wei, Wei; Cang, Jianhua

    2017-01-01

    Detecting visual features in the environment such as motion direction is crucial for survival. The circuit mechanisms that give rise to direction selectivity in a major visual center, the superior colliculus (SC), are entirely unknown. Here, we optogenetically isolate the retinal inputs that individual direction-selective SC neurons receive and find that they are already selective as a result of precisely converging inputs from similarly tuned retinal ganglion cells. The direction-selective retinal input is linearly amplified by the intracollicular circuits without changing its preferred direction or level of selectivity. Finally, using two-photon calcium imaging, we show that SC direction selectivity is dramatically reduced in transgenic mice that have decreased retinal selectivity. Together, our studies demonstrate a retinal origin of direction selectivity in the SC, and reveal a central visual deficit as a consequence of altered feature selectivity in the retina. PMID:28192394

  18. Enhancing links between visual short term memory, visual attention and cognitive control processes through practice: An electrophysiological insight.

    PubMed

    Fuggetta, Giorgio; Duke, Philip A

    2017-05-01

    The operation of attention on visible objects involves a sequence of cognitive processes. The current study firstly aimed to elucidate the effects of practice on neural mechanisms underlying attentional processes as measured with both behavioural and electrophysiological measures. Secondly, it aimed to identify any pattern in the relationship between Event-Related Potential (ERP) components which play a role in the operation of attention in vision. Twenty-seven participants took part in two recording sessions one week apart, performing an experimental paradigm which combined a match-to-sample task with a memory-guided efficient visual-search task within one trial sequence. Overall, practice decreased behavioural response times, increased accuracy, and modulated several ERP components that represent cognitive and neural processing stages. This neuromodulation through practice was also associated with an enhanced link between behavioural measures and ERP components and with an enhanced cortico-cortical interaction of functionally interconnected ERP components. Principal component analysis (PCA) of the ERP amplitude data revealed three components, having different rostro-caudal topographic representations. The first component included both the centro-parietal and parieto-occipital mismatch triggered negativity - involved in integration of visual representations of the target with current task-relevant representations stored in visual working memory - loaded with second negative posterior-bilateral (N2pb) component, involved in categorising specific pop-out target features. The second component comprised the amplitude of bilateral anterior P2 - related to detection of a specific pop-out feature - loaded with bilateral anterior N2, related to detection of conflicting features, and fronto-central mismatch triggered negativity. 
The third component included the parieto-occipital N1 - related to early neural responses to the stimulus array - which loaded with the second negative posterior-contralateral (N2pc) component, mediating the process of orienting and focusing covert attention on peripheral target features. We discussed these three components as representing different neurocognitive systems modulated with practice within which the input selection process operates. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  19. Impaired integration of object knowledge and visual input in a case of ventral simultanagnosia with bilateral damage to area V4.

    PubMed

    Leek, E Charles; d'Avossa, Giovanni; Tainturier, Marie-Josèphe; Roberts, Daniel J; Yuen, Sung Lai; Hu, Mo; Rafal, Robert

    2012-01-01

    This study examines how brain damage can affect the cognitive processes that support the integration of sensory input and prior knowledge during shape perception. It is based on the first detailed study of acquired ventral simultanagnosia, which was found in a patient (M.T.) with posterior occipitotemporal lesions encompassing V4 bilaterally. Despite showing normal object recognition for single items in both accuracy and response times (RTs), and intact low-level vision assessed across an extensive battery of tests, M.T. was impaired in object identification with overlapping figures displays. Task performance was modulated by familiarity: Unlike controls, M.T. was faster with overlapping displays of abstract shapes than with overlapping displays of common objects. His performance with overlapping common object displays was also influenced by both the semantic relatedness and visual similarity of the display items. These findings challenge claims that visual perception is driven solely by feedforward mechanisms and show how brain damage can selectively impair high-level perceptual processes supporting the integration of stored knowledge and visual sensory input.

  20. Neuronal populations in the occipital cortex of the blind synchronize to the temporal dynamics of speech

    PubMed Central

    Van Ackeren, Markus Johannes; Barbero, Francesca M; Mattioni, Stefania; Bottini, Roberto

    2018-01-01

    The occipital cortex of early blind individuals (EB) activates during speech processing, challenging the notion of a hard-wired neurobiology of language. But, at what stage of speech processing do occipital regions participate in EB? Here we demonstrate that parieto-occipital regions in EB enhance their synchronization to acoustic fluctuations in human speech in the theta-range (corresponding to syllabic rate), irrespective of speech intelligibility. Crucially, enhanced synchronization to the intelligibility of speech was selectively observed in primary visual cortex in EB, suggesting that this region is at the interface between speech perception and comprehension. Moreover, EB showed overall enhanced functional connectivity between temporal and occipital cortices that are sensitive to speech intelligibility and altered directionality when compared to the sighted group. These findings suggest that the occipital cortex of the blind adopts an architecture that allows the tracking of speech material, and therefore does not fully abstract from the reorganized sensory inputs it receives. PMID:29338838

  1. Preserving information in neural transmission.

    PubMed

    Sincich, Lawrence C; Horton, Jonathan C; Sharpee, Tatyana O

    2009-05-13

    Along most neural pathways, the spike trains transmitted from one neuron to the next are altered. In the process, neurons can either achieve a more efficient stimulus representation, or extract some biologically important stimulus parameter, or succeed at both. We recorded the inputs from single retinal ganglion cells and the outputs from connected lateral geniculate neurons in the macaque to examine how visual signals are relayed from retina to cortex. We found that geniculate neurons re-encoded multiple temporal stimulus features to yield output spikes that carried more information about stimuli than was available in each input spike. The coding transformation of some relay neurons occurred with no decrement in information rate, despite output spike rates that averaged half the input spike rates. This preservation of transmitted information was achieved by the short-term summation of inputs that geniculate neurons require to spike. A reduced model of the retinal and geniculate visual responses, based on two stimulus features and their associated nonlinearities, could account for >85% of the total information available in the spike trains and the preserved information transmission. These results apply to neurons operating on a single time-varying input, suggesting that synaptic temporal integration can alter the temporal receptive field properties to create a more efficient representation of visual signals in the thalamus than the retina.
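
    The mechanism credited above with preserving information, short-term summation of retinal inputs before a geniculate spike is triggered, can be caricatured with a coincidence-counting toy model. All numbers below are illustrative and not fit to the recorded data:

```python
def relay_output(input_spikes, window=0.005, threshold=2):
    """Toy relay neuron: emit an output spike at time t only if at
    least `threshold` input spikes (including the one at t) arrived
    within the preceding summation `window` (times in seconds)."""
    out = []
    for i, t in enumerate(input_spikes):
        # Count input spikes inside the summation window ending at t.
        recent = [s for s in input_spikes[: i + 1] if t - s <= window]
        if len(recent) >= threshold:
            out.append(t)
    return out

# Isolated retinal spikes are dropped; closely spaced ones are relayed.
retina = [0.010, 0.012, 0.050, 0.100, 0.103, 0.104]
lgn = relay_output(retina)
print(lgn)
```

In this constructed example the relay fires at half the input rate, loosely echoing the roughly 50% output/input spike-rate ratio reported above, while each output spike signals a temporal pattern of inputs rather than a single input spike.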

  2. Visual colorimetric detection of tin(II) and nitrite using a molybdenum oxide nanomaterial-based three-input logic gate.

    PubMed

    Du, Jiayan; Zhao, Mengxin; Huang, Wei; Deng, Yuequan; He, Yi

    2018-05-09

    We report a molybdenum oxide (MoO3) nanomaterial-based three-input logic gate that uses Sn2+, NO2-, and H+ ions as inputs. Under acidic conditions, Sn2+ is able to reduce MoO3 nanosheets, generating oxygen-vacancy-rich MoO3-x nanomaterials along with strong localized surface plasmon resonance (LSPR) and an intense blue solution as the output signal. When NO2- is introduced, the redox reaction between the MoO3 nanosheets and Sn2+ is strongly inhibited because the NO2- consumes both H+ and Sn2+. The three-input logic gate was employed for the visual colorimetric detection of Sn2+ and NO2- under different input states. The colorimetric assay's limit of detection for Sn2+ and the lowest concentration of NO2- detectable by the assay were found to be 27.5 nM and 0.1 μM, respectively. The assay permits the visual detection of Sn2+ and NO2- down to concentrations as low as 2 μM and 25 μM, respectively. The applicability of the logic-gate-based colorimetric assay was demonstrated by using it to detect Sn2+ and NO2- in several water sources.
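
    The gate's behavior reduces to a truth table: the blue (LSPR) output appears only when Sn2+ and H+ are both present and NO2- is absent, i.e. an AND gate with an inhibiting input. A minimal sketch of that boolean abstraction (the function name and encoding are ours, not the authors'):

```python
def moo3_gate(sn2_plus: bool, h_plus: bool, no2_minus: bool) -> bool:
    """Boolean model of the MoO3 colorimetric logic gate: Sn2+
    reduces MoO3 only under acidic conditions (H+ present), and
    NO2- inhibits the reaction by consuming both H+ and Sn2+.
    Returns True when the blue (LSPR) output is produced."""
    return sn2_plus and h_plus and not no2_minus

# Enumerate all eight input states to print the truth table.
for sn in (False, True):
    for h in (False, True):
        for no2 in (False, True):
            print(f"Sn2+={sn!s:<5} H+={h!s:<5} NO2-={no2!s:<5} "
                  f"-> blue={moo3_gate(sn, h, no2)}")
```

Only one of the eight input states produces the blue output, which is what makes the gate usable for the selective visual detection described above.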

  3. Congenital Anophthalmia and Binocular Neonatal Enucleation Differently Affect the Proteome of Primary and Secondary Visual Cortices in Mice.

    PubMed

    Laramée, Marie-Eve; Smolders, Katrien; Hu, Tjing-Tjing; Bronchti, Gilles; Boire, Denis; Arckens, Lutgarde

    2016-01-01

    In blind individuals, visually deprived occipital areas are activated by non-visual stimuli. The extent of this cross-modal activation depends on the age at onset of blindness. Cross-modal inputs have access to several anatomical pathways to reactivate deprived visual areas. Ectopic cross-modal subcortical connections have been shown in anophthalmic animals but not in animals deprived of sight at a later age. Direct and indirect cross-modal cortical connections toward visual areas could also be involved, yet the number of neurons implicated is similar between blind mice and sighted controls. Changes at the axon terminal, dendritic spine or synaptic level are therefore expected upon loss of visual inputs. Here, the proteome of V1, V2M and V2L from P0-enucleated, anophthalmic and sighted mice, sharing a common genetic background (C57BL/6J x ZRDCT/An), was investigated by 2-D DIGE and Western analyses to identify molecular adaptations to enucleation and/or anophthalmia. Few proteins were differentially expressed in enucleated or anophthalmic mice in comparison to sighted mice. The loss of sight affected three pathways: metabolism, synaptic transmission and morphogenesis. Most changes were detected in V1, followed by V2M. Overall, cross-modal adaptations could be promoted in both models of early blindness but not through the exact same molecular strategy. A lower metabolic activity observed in visual areas of blind mice suggests that even if cross-modal inputs reactivate visual areas, they could remain suboptimally processed.

  4. Congenital Anophthalmia and Binocular Neonatal Enucleation Differently Affect the Proteome of Primary and Secondary Visual Cortices in Mice

    PubMed Central

    Smolders, Katrien; Hu, Tjing-Tjing; Bronchti, Gilles; Boire, Denis; Arckens, Lutgarde

    2016-01-01

    In blind individuals, visually deprived occipital areas are activated by non-visual stimuli. The extent of this cross-modal activation depends on the age at onset of blindness. Cross-modal inputs have access to several anatomical pathways to reactivate deprived visual areas. Ectopic cross-modal subcortical connections have been shown in anophthalmic animals but not in animals deprived of sight at a later age. Direct and indirect cross-modal cortical connections toward visual areas could also be involved, yet the number of neurons implicated is similar between blind mice and sighted controls. Changes at the axon terminal, dendritic spine or synaptic level are therefore expected upon loss of visual inputs. Here, the proteome of V1, V2M and V2L from P0-enucleated, anophthalmic and sighted mice, sharing a common genetic background (C57BL/6J x ZRDCT/An), was investigated by 2-D DIGE and Western analyses to identify molecular adaptations to enucleation and/or anophthalmia. Few proteins were differentially expressed in enucleated or anophthalmic mice in comparison to sighted mice. The loss of sight affected three pathways: metabolism, synaptic transmission and morphogenesis. Most changes were detected in V1, followed by V2M. Overall, cross-modal adaptations could be promoted in both models of early blindness but not through the exact same molecular strategy. A lower metabolic activity observed in visual areas of blind mice suggests that even if cross-modal inputs reactivate visual areas, they could remain suboptimally processed. PMID:27410964

  5. Visual Processing: Hungry Like the Mouse.

    PubMed

    Piscopo, Denise M; Niell, Cristopher M

    2016-09-07

    In this issue of Neuron, Burgess et al. (2016) explore how motivational state interacts with visual processing, by examining hunger modulation of food-associated visual responses in postrhinal cortical neurons and their inputs from amygdala. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. 77 FR 20005 - Solicitation of Input From Stakeholders Regarding the Proposed Crop Protection Competitive Grants...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-03

    ... DEPARTMENT OF AGRICULTURE National Institute of Food and Agriculture Solicitation of Input From... Food and Agriculture, USDA. ACTION: Notice of public meeting and request for stakeholder input. SUMMARY... held by conference call (audio) and internet (visual only). Connection details for those meetings will...

  7. Style grammars for interactive visualization of architecture.

    PubMed

    Aliaga, Daniel G; Rosen, Paul A; Bekins, Daniel R

    2007-01-01

    Interactive visualization of architecture provides a way to quickly visualize existing or novel buildings and structures. Such applications require both fast rendering and an effortless input regimen for creating and changing architecture using high-level editing operations that automatically fill in the necessary details. Procedural modeling and synthesis is a powerful paradigm that yields high data amplification and can be coupled with fast-rendering techniques to quickly generate plausible details of a scene without much or any user interaction. Previously, forward generating procedural methods have been proposed where a procedure is explicitly created to generate particular content. In this paper, we present our work in inverse procedural modeling of buildings and describe how to use an extracted repertoire of building grammars to facilitate the visualization and quick modification of architectural structures and buildings. We demonstrate an interactive application where the user draws simple building blocks and, using our system, can automatically complete the building "in the style of" other buildings using view-dependent texture mapping or nonphotorealistic rendering techniques. Our system supports an arbitrary number of building grammars created from user-subdivided building models and captured photographs. Using only edit, copy, and paste metaphors, entire building styles can be altered and transferred from one building to another in a few operations, enhancing the ability to modify an existing architectural structure or to visualize a novel building in the style of the others.
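
    The forward-generating procedural approach that the paper contrasts with its inverse method can be sketched as a toy split grammar. The rule set, symbol names, and `expand` helper below are illustrative assumptions, not the authors' system:

    ```python
    # Hypothetical forward-generating split grammar: rewrite a facade
    # into floors, then each floor into walls and windows (terminals).
    rules = {
        "facade": ["floor", "floor", "floor"],
        "floor": ["wall", "window", "wall", "window", "wall"],
    }

    def expand(symbol, rules):
        """Recursively rewrite a symbol until only terminals remain."""
        if symbol not in rules:
            return [symbol]  # terminal: no rule applies
        out = []
        for child in rules[symbol]:
            out.extend(expand(child, rules))
        return out

    facade = expand("facade", rules)  # 3 floors x 5 elements = 15 terminals
    ```

    Inverse procedural modeling, as described in the abstract, works in the opposite direction: it recovers a rule set like `rules` from existing building models and photographs so that new structures can then be generated "in the style of" the originals.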

  8. The horizontal brain slice preparation: a novel approach for visualizing and recording from all layers of the tadpole tectum.

    PubMed

    Hamodi, Ali S; Pratt, Kara G

    2015-01-01

    The Xenopus tadpole optic tectum is a multisensory processing center that receives direct visual input as well as nonvisual mechanosensory input. The tectal neurons that comprise the optic tectum are organized into layers. These neurons project their dendrites laterally into the neuropil where visual inputs target the distal region of the dendrite and nonvisual inputs target the proximal region of the same dendrite. The Xenopus tadpole tectum is a popular model to study the development of sensory circuits. However, whole cell patch-clamp electrophysiological studies of the tadpole tectum (using the whole brain or in vivo preparations) have focused solely on the deep-layer tectal neurons because only neurons of the deep layer are visible and accessible for whole cell electrophysiological recordings. As a result, whereas the development and plasticity of these deep-layer neurons has been well-studied, essentially nothing has been reported about the electrophysiology of neurons residing beyond this layer. Hence, there exists a large gap in our understanding about the functional development of the amphibian tectum as a whole. To remedy this, we developed a novel isolated brain preparation that allows visualizing and recording from all layers of the tectum. We refer to this preparation as the "horizontal brain slice preparation." Here, we describe the preparation method and illustrate how it can be used to characterize the electrophysiology of neurons across all of the layers of the tectum as well as the spatial pattern of synaptic input from the different sensory modalities. Copyright © 2015 the American Physiological Society.

  9. Neurochemical changes in the pericalcarine cortex in congenital blindness attributable to bilateral anophthalmia.

    PubMed

    Coullon, Gaelle S L; Emir, Uzay E; Fine, Ione; Watkins, Kate E; Bridge, Holly

    2015-09-01

    Congenital blindness leads to large-scale functional and structural reorganization in the occipital cortex, but relatively little is known about the neurochemical changes underlying this cross-modal plasticity. To investigate the effect of complete and early visual deafferentation on the concentration of metabolites in the pericalcarine cortex, (1)H magnetic resonance spectroscopy was performed in 14 sighted subjects and 5 subjects with bilateral anophthalmia, a condition in which both eyes fail to develop. In the pericalcarine cortex, where primary visual cortex is normally located, the proportion of gray matter was significantly greater, and levels of choline, glutamate, glutamine, myo-inositol, and total creatine were elevated in anophthalmic relative to sighted subjects. Anophthalmia had no effect on the structure or neurochemistry of a sensorimotor cortex control region. More gray matter, combined with high levels of choline and myo-inositol, resembles the profile of the cortex at birth and suggests that the lack of visual input from the eyes might have delayed or arrested the maturation of this cortical region. High levels of choline and glutamate/glutamine are consistent with enhanced excitatory circuits in the anophthalmic occipital cortex, which could reflect a shift toward enhanced plasticity or sensitivity that could in turn mediate or unmask cross-modal responses. Finally, it is possible that the change in function of the occipital cortex results in biochemical profiles that resemble those of auditory, language, or somatosensory cortex. Copyright © 2015 the American Physiological Society.

  10. Unconscious integration of multisensory bodily inputs in the peripersonal space shapes bodily self-consciousness.

    PubMed

    Salomon, Roy; Noel, Jean-Paul; Łukowska, Marta; Faivre, Nathan; Metzinger, Thomas; Serino, Andrea; Blanke, Olaf

    2017-09-01

    Recent studies have highlighted the role of multisensory integration as a key mechanism of self-consciousness. In particular, integration of bodily signals within the peripersonal space (PPS) underlies the experience of the self in a body we own (self-identification) and that is experienced as occupying a specific location in space (self-location), two main components of bodily self-consciousness (BSC). Experiments investigating the effects of multisensory integration on BSC have typically employed supra-threshold sensory stimuli, neglecting the role of unconscious sensory signals in BSC, as tested in other consciousness research. Here, we used psychophysical techniques to test whether multisensory integration of bodily stimuli underlying BSC also occurs for multisensory inputs presented below the threshold of conscious perception. Our results indicate that visual stimuli rendered invisible through continuous flash suppression boost processing of tactile stimuli on the body (Exp. 1), and enhance the perception of near-threshold tactile stimuli (Exp. 2), only once they entered PPS. We then employed unconscious multisensory stimulation to manipulate BSC. Participants were presented with tactile stimulation on their body and with visual stimuli on a virtual body, seen at a distance, which were either visible or rendered invisible. We found that participants reported higher self-identification with the virtual body in the synchronous visuo-tactile stimulation (as compared to asynchronous stimulation; Exp. 3), and shifted their self-location toward the virtual body (Exp. 4), even if stimuli were fully invisible. Our results indicate that multisensory inputs, even outside of awareness, are integrated and affect the phenomenological content of self-consciousness, grounding BSC firmly in the field of psychophysical consciousness studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Does Input Enhancement Work for Learning Politeness Strategies?

    ERIC Educational Resources Information Center

    Khatib, Mohammad; Safari, Mahmood

    2013-01-01

    The present study investigated the effect of input enhancement on the acquisition of English politeness strategies by intermediate EFL learners. Two groups of freshman English majors were randomly assigned to the experimental (enhanced input) group and the control (mere exposure) group. Initially, a TOEFL test and a discourse completion test (DCT)…

  12. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten

    2016-06-08

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.

  13. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    NASA Astrophysics Data System (ADS)

    Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang

    2016-06-01

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
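
    The variance-decomposition step described in this abstract can be sketched with a pick-freeze Monte Carlo estimator of first-order Sobol indices. The toy linear model, sample size, and function name below are illustrative assumptions, not the authors' metamodel pipeline:

    ```python
    import random

    def sobol_first_order(model, dim, n=20000, seed=1):
        """Estimate first-order Sobol indices for `model` with `dim`
        independent U(0,1) inputs, using a Saltelli pick-freeze scheme."""
        rng = random.Random(seed)
        A = [[rng.random() for _ in range(dim)] for _ in range(n)]
        B = [[rng.random() for _ in range(dim)] for _ in range(n)]
        fA = [model(x) for x in A]
        fB = [model(x) for x in B]
        mean = sum(fA) / n
        var = sum((y - mean) ** 2 for y in fA) / n
        indices = []
        for i in range(dim):
            # AB_i: rows of A with column i replaced by the value from B
            fABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
            # Saltelli (2010) estimator of V_i = Var(E[f | x_i])
            vi = sum(fb * (fab - fa) for fa, fb, fab in zip(fA, fB, fABi)) / n
            indices.append(vi / var)
        return indices

    # Toy linear model 2*x1 + x2: the exact first-order indices are 0.8 and 0.2,
    # so the Monte Carlo estimates should land close to those values.
    S = sobol_first_order(lambda x: 2 * x[0] + x[1], dim=2)
    ```

    For a purely additive model like this one the first-order indices sum to one; interaction effects, which the abstract also mentions, would show up as a gap between that sum and one.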

  14. Context-dependent spatially periodic activity in the human entorhinal cortex

    PubMed Central

    Nguyen, T. Peter; Török, Ágoston; Shen, Jason Y.; Briggs, Deborah E.; Modur, Pradeep N.; Buchanan, Robert J.

    2017-01-01

    The spatially periodic activity of grid cells in the entorhinal cortex (EC) of the rodent, primate, and human provides a coordinate system that, together with the hippocampus, informs an individual of its location relative to the environment and encodes the memory of that location. Among the most defining features of grid-cell activity are the 60° rotational symmetry of grids and preservation of grid scale across environments. Grid cells, however, do display a limited degree of adaptation to environments. It remains unclear if this level of environment invariance generalizes to human grid-cell analogs, where the relative contribution of visual input to the multimodal sensory input of the EC is significantly larger than in rodents. Patients diagnosed with intractable epilepsy who were implanted with entorhinal cortical electrodes and performed virtual navigation tasks to memorized locations enabled us to investigate associations between grid-like patterns and environment. Here, we report that the activity of human entorhinal cortical neurons exhibits adaptive scaling in grid period, grid orientation, and rotational symmetry in close association with changes in environment size, shape, and visual cues, suggesting scale invariance of the frequency, rather than the wavelength, of spatially periodic activity. Our results demonstrate that neurons in the human EC represent space with an enhanced flexibility relative to neurons in rodents because they are endowed with adaptive scalability and context dependency. PMID:28396399

  15. Natural asynchronies in audiovisual communication signals regulate neuronal multisensory interactions in voice-sensitive cortex.

    PubMed

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K; Petkov, Christopher I

    2015-01-06

    When social animals communicate, the onset of informative content in one modality varies considerably relative to the other, such as when visual orofacial movements precede a vocalization. These naturally occurring asynchronies do not disrupt intelligibility or perceptual coherence. However, they occur on time scales where they likely affect integrative neuronal activity in ways that have remained unclear, especially for hierarchically downstream regions in which neurons exhibit temporally imprecise but highly selective responses to communication signals. To address this, we exploited naturally occurring face- and voice-onset asynchronies in primate vocalizations. Using these as stimuli we recorded cortical oscillations and neuronal spiking responses from functional MRI (fMRI)-localized voice-sensitive cortex in the anterior temporal lobe of macaques. We show that the onset of the visual face stimulus resets the phase of low-frequency oscillations, and that the face-voice asynchrony affects the prominence of two key types of neuronal multisensory responses: enhancement or suppression. Our findings show a three-way association between temporal delays in audiovisual communication signals, phase-resetting of ongoing oscillations, and the sign of multisensory responses. The results reveal how natural onset asynchronies in cross-sensory inputs regulate network oscillations and neuronal excitability in the voice-sensitive cortex of macaques, a suggested animal model for human voice areas. These findings also advance predictions on the impact of multisensory input on neuronal processes in face areas and other brain regions.

  16. Input Manipulation, Enhancement and Processing: Theoretical Views and Empirical Research

    ERIC Educational Resources Information Center

    Benati, Alessandro

    2016-01-01

    Researchers in the field of instructed second language acquisition have been examining the issue of how learners interact with input by conducting research measuring particular kinds of instructional interventions (input-oriented and meaning-based). These interventions include such things as input flood, textual enhancement and processing…

  17. Emergence of Orientation Selectivity in the Mammalian Visual Pathway

    PubMed Central

    Scholl, Benjamin; Tan, Andrew Y. Y.; Corey, Joseph

    2013-01-01

    Orientation selectivity is a property of mammalian primary visual cortex (V1) neurons, yet its emergence along the visual pathway varies across species. In carnivores and primates, elongated receptive fields first appear in V1, whereas in lagomorphs such receptive fields emerge earlier, in the retina. Here we examine the mouse visual pathway and reveal the existence of orientation selectivity in lateral geniculate nucleus (LGN) relay cells. Cortical inactivation does not reduce this orientation selectivity, indicating that cortical feedback is not its source. Orientation selectivity is similar for LGN relay cells spiking and subthreshold input to V1 neurons, suggesting that cortical orientation selectivity is inherited from the LGN in mouse. In contrast, orientation selectivity of cat LGN relay cells is small relative to subthreshold inputs onto V1 simple cells. Together, these differences show that although orientation selectivity exists in visual neurons of both rodents and carnivores, its emergence along the visual pathway, and thus its underlying neuronal circuitry, is fundamentally different. PMID:23804085

  18. Age-Related Differences in Cortical and Subcortical Activities during Observation and Motor Imagery of Dynamic Postural Tasks: An fMRI Study.

    PubMed

    Mouthon, A; Ruffieux, J; Mouthon, M; Hoogewoud, H-M; Annoni, J-M; Taube, W

    2018-01-01

    Age-related changes in brain activation other than in the primary motor cortex are not well known with respect to dynamic balance control. Therefore, the current study aimed to explore age-related differences in the control of static and dynamic postural tasks using fMRI during mental simulation of balance tasks. For this purpose, 16 elderly (72 ± 5 years) and 16 young adults (27 ± 5 years) were asked to mentally simulate a static and a dynamic balance task by motor imagery (MI), action observation (AO), or the combination of AO and MI (AO + MI). Age-related differences were detected in the form of larger brain activations in elderly compared to young participants, especially in the challenging dynamic task when applying AO + MI. Interestingly, when MI (no visual input) was contrasted to AO (visual input), elderly participants revealed deactivation of subcortical areas. The finding that the elderly demonstrated overactivation in mostly cortical areas in challenging postural conditions with visual input (AO + MI and AO) but deactivation in subcortical areas during MI (no vision) may indicate that elderly individuals allocate more cortical resources to the internal representation of dynamic postural tasks. Furthermore, it might be assumed that they depend more strongly on visual input to activate subcortical internal representations.

  19. Age-Related Differences in Cortical and Subcortical Activities during Observation and Motor Imagery of Dynamic Postural Tasks: An fMRI Study

    PubMed Central

    Ruffieux, J.; Mouthon, M.; Hoogewoud, H.-M.; Taube, W.

    2018-01-01

    Age-related changes in brain activation other than in the primary motor cortex are not well known with respect to dynamic balance control. Therefore, the current study aimed to explore age-related differences in the control of static and dynamic postural tasks using fMRI during mental simulation of balance tasks. For this purpose, 16 elderly (72 ± 5 years) and 16 young adults (27 ± 5 years) were asked to mentally simulate a static and a dynamic balance task by motor imagery (MI), action observation (AO), or the combination of AO and MI (AO + MI). Age-related differences were detected in the form of larger brain activations in elderly compared to young participants, especially in the challenging dynamic task when applying AO + MI. Interestingly, when MI (no visual input) was contrasted to AO (visual input), elderly participants revealed deactivation of subcortical areas. The finding that the elderly demonstrated overactivation in mostly cortical areas in challenging postural conditions with visual input (AO + MI and AO) but deactivation in subcortical areas during MI (no vision) may indicate that elderly individuals allocate more cortical resources to the internal representation of dynamic postural tasks. Furthermore, it might be assumed that they depend more strongly on visual input to activate subcortical internal representations. PMID:29675037

  20. A model of color vision with a robot system

    NASA Astrophysics Data System (ADS)

    Wang, Haihui

    2006-01-01

    In this paper, we propose to generalize the saccade target method and state that perceptual stability in general arises by learning the effects one's actions have on sensor responses. The apparent visual stability of color percepts across saccadic eye movements can be explained by positing that perception involves observing how sensory input changes in response to motor activities. The changes related to self-motion can be learned and, once learned, used to form stable percepts. The variation of sensor data in response to a motor act is therefore a requirement for stable perception rather than something that has to be compensated for in order to perceive a stable world. We provide a simple implementation of this sensory-motor contingency view of perceptual stability and show how a straightforward application of the temporal difference learning technique yields color percepts that are stable across saccadic eye movements, even though the raw sensor input may change radically.
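
    The temporal-difference technique this abstract refers to can be illustrated with a minimal TD(0) prediction sketch. The random-walk task, reward scheme, and parameters below are our illustrative assumptions, not the paper's robot and color-sensor setup:

    ```python
    import random

    def td0_chain(episodes=2000, alpha=0.1, gamma=0.9, seed=0):
        """TD(0) value prediction on a 5-state random walk (states 0..4):
        each step moves left or right with equal probability, and a reward
        of 1 is received only on exiting past the right end."""
        rng = random.Random(seed)
        V = [0.0] * 5  # value estimates, initialized to zero
        for _ in range(episodes):
            s = 2  # start each episode in the middle of the chain
            while True:
                s2 = s + (1 if rng.random() < 0.5 else -1)
                done = s2 < 0 or s2 > 4
                r = 1.0 if s2 > 4 else 0.0
                # TD(0) update: move V[s] toward the bootstrapped target
                target = r if done else r + gamma * V[s2]
                V[s] += alpha * (target - V[s])
                if done:
                    break
                s = s2
        return V
    ```

    After training, the learned values increase monotonically from the left end toward the rewarded right end, which is the sense in which the prediction becomes stable: the agent learns how its actions change what it will sense next.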

  1. Simulation of talking faces in the human brain improves auditory speech recognition

    PubMed Central

    von Kriegstein, Katharina; Dogan, Özgür; Grüter, Martina; Giraud, Anne-Lise; Kell, Christian A.; Grüter, Thomas; Kleinschmidt, Andreas; Kiebel, Stefan J.

    2008-01-01

    Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and controls, behavioral improvement in auditory-only speech recognition was based on an area typically involved in face-movement processing. Improvement in speaker recognition was only present in controls and was based on an area involved in face-identity processing. These findings challenge current unisensory models of speech processing, because they show that, in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication. We suggest that this optimization is based on speaker-specific audiovisual internal models, which are used to simulate a talking face. PMID:18436648

  2. The Effect of Visual Variability on the Learning of Academic Concepts.

    PubMed

    Bourgoyne, Ashley; Alt, Mary

    2017-06-10

    The purpose of this study was to identify effects of variability of visual input on development of conceptual representations of academic concepts for college-age students with normal language (NL) and those with language-learning disabilities (LLD). Students with NL (n = 11) and LLD (n = 11) participated in a computer-based training for introductory biology course concepts. Participants were trained on half the concepts under a low-variability condition and half under a high-variability condition. Participants completed a posttest in which they were asked to identify and rate the accuracy of novel and trained visual representations of the concepts. We performed separate repeated measures analyses of variance to examine the accuracy of identification and ratings. Participants were equally accurate on trained and novel items in the high-variability condition, but were less accurate on novel items only in the low-variability condition. The LLD group showed the same pattern as the NL group; they were just less accurate. Results indicated that high-variability visual input may facilitate the acquisition of academic concepts in college students with NL and LLD. High-variability visual input may be especially beneficial for generalization to novel representations of concepts. Implicit learning methods may be harnessed by college courses to provide students with basic conceptual knowledge when they are entering courses or beginning new units.

  3. Mechanisms of inhibition in cat visual cortex.

    PubMed Central

    Berman, N J; Douglas, R J; Martin, K A; Whitteridge, D

    1991-01-01

    1. Neurones from layers 2-6 of the cat primary visual cortex were studied using extracellular and intracellular recordings made in vivo. The aim was to identify inhibitory events and determine whether they were associated with small or large (shunting) changes in the input conductance of the neurones. 2. Visual stimulation of subfields of simple receptive fields produced depolarizing or hyperpolarizing potentials that were associated with increased or decreased firing rates respectively. Hyperpolarizing potentials were small, 5 mV or less. In the same neurones, brief electrical stimulation of cortical afferents produced a characteristic sequence of a brief depolarization followed by a long-lasting (200-400 ms) hyperpolarization. 3. During the response to a stationary flashed bar, the synaptic activation increased the input conductance of the neurone by about 5-20%. Conductance changes of similar magnitude were obtained by electrically stimulating the neurone. Neurones stimulated with non-optimal orientations or directions of motion showed little change in input conductance. 4. These data indicate that while visually or electrically induced inhibition can be readily demonstrated in visual cortex, the inhibition is not associated with large sustained conductance changes. Thus a shunting or multiplicative inhibitory mechanism is not the principal mechanism of inhibition. PMID:1804983

  4. Selective transfer of visual working memory training on Chinese character learning.

    PubMed

    Opitz, Bertram; Schneiders, Julia A; Krick, Christoph M; Mecklinger, Axel

    2014-01-01

    Previous research has shown a systematic relationship between phonological working memory capacity and second language proficiency for alphabetic languages. However, little is known about the impact of working memory processes on second language learning in a non-alphabetic language such as Mandarin Chinese. Due to the greater complexity of the Chinese writing system we expect that visual working memory rather than phonological working memory exerts a unique influence on learning Chinese characters. This issue was explored in the present experiment by comparing visual working memory training with an active (auditory working memory training) control condition and a passive, no training control condition. Training induced modulations in language-related brain networks were additionally examined using functional magnetic resonance imaging in a pretest-training-posttest design. As revealed by pre- to posttest comparisons and analyses of individual differences in working memory training gains, visual working memory training led to positive transfer effects on visual Chinese vocabulary learning compared to both control conditions. In addition, we found sustained activation after visual working memory training in the (predominantly visual) left infero-temporal cortex that was associated with behavioral transfer. In the control conditions, activation either increased (active control condition) or decreased (passive control condition) without reliable behavioral transfer effects. This suggests that visual working memory training leads to more efficient processing and more refined responses in brain regions involved in visual processing. Furthermore, visual working memory training boosted additional activation in the precuneus, presumably reflecting mental image generation of the learned characters. We, therefore, suggest that the conjoint activity of the mid-fusiform gyrus and the precuneus after visual working memory training reflects an interaction of working memory and imagery processes with complex visual stimuli that fosters the coherent synthesis of a percept from a complex visual input in service of enhanced Chinese character learning. © 2013 Published by Elsevier Ltd.

  5. Influence of moving visual environment on sit-to-stand kinematics in children and adults.

    PubMed

    Slaboda, Jill C; Barton, Joseph E; Keshner, Emily A

    2009-08-01

    The effect of visual field motion on the sit-to-stand kinematics of adults and children was investigated. Children (8 to 12 years of age) and adults (21 to 49 years of age) were seated in a virtual environment that rotated in the pitch and roll directions. Participants stood up either (1) concurrent with onset of visual motion or (2) after an immersion period in the moving visual environment, and (3) without visual input. Angular velocities of the head with respect to the trunk, and trunk with respect to the environment, were calculated, as was head and trunk center of mass. Both adults and children reduced head and trunk angular velocity after immersion in the moving visual environment. Unlike adults, children demonstrated significant differences in displacement of the head center of mass during the immersion and concurrent trials when compared to trials without visual input. Results suggest a time-dependent effect of vision on sit-to-stand kinematics in adults, whereas children are influenced by the immediate presence or absence of vision.

  6. The intralaminar thalamus—an expressway linking visual stimuli to circuits determining agency and action selection

    PubMed Central

    Fisher, Simon D.; Reynolds, John N. J.

    2014-01-01

    Anatomical investigations have revealed connections between the intralaminar thalamic nuclei and areas such as the superior colliculus (SC) that receive short latency input from visual and auditory primary sensory areas. The intralaminar nuclei in turn project to the major input nucleus of the basal ganglia, the striatum, providing this nucleus with a source of subcortical excitatory input. Together with a converging input from the cerebral cortex, and a neuromodulatory dopaminergic input from the midbrain, the components previously found necessary for reinforcement learning in the basal ganglia are present. With this intralaminar sensory input, the basal ganglia are thought to play a primary role in determining what aspect of an organism’s own behavior has caused salient environmental changes. Additionally, subcortical loops through thalamic and basal ganglia nuclei are proposed to play a critical role in action selection. In this mini review we will consider the anatomical and physiological evidence underlying the existence of these circuits. We will propose how the circuits interact to modulate basal ganglia output and solve common behavioral learning problems of agency determination and action selection. PMID:24765070

  7. The Effects of Input-Enhanced Instruction on Iranian EFL Learners' Production of Appropriate and Accurate Suggestions

    ERIC Educational Resources Information Center

    Ghavamnia, M.; Eslami-Rasekh, A.; Vahid Dastjerdi, H.

    2018-01-01

    This study investigates the relative effectiveness of four types of input-enhanced instruction on the development of Iranian EFL learners' production of pragmatically appropriate and grammatically accurate suggestions. Over a 16-week course, input delivered through video clips was enhanced differently in four intact classes: (1) metapragmatic…

  8. Matrix Metalloproteinase-9 regulates neuronal circuit development and excitability

    PubMed Central

    Murase, Sachiko; Lantz, Crystal; Kim, Eunyoung; Gupta, Nitin; Higgins, Richard; Stopfer, Mark; Hoffman, Dax A.; Quinlan, Elizabeth M.

    2015-01-01

    In early postnatal development, naturally occurring cell death, dendritic outgrowth and synaptogenesis sculpt neuronal ensembles into functional neuronal circuits. Here we demonstrate that deletion of the extracellular proteinase MMP-9 affects each of these processes, resulting in maladapted neuronal circuitry. MMP-9 deletion increases the number of CA1 pyramidal neurons but decreases dendritic length and complexity, while dendritic spine density is unchanged. Parallel changes in neuronal morphology are observed in primary visual cortex and persist into adulthood. Individual CA1 neurons in MMP-9−/− mice have enhanced input resistance and a significant increase in the frequency, but not amplitude, of miniature excitatory postsynaptic currents (mEPSCs). Additionally, deletion of MMP-9 significantly increases spontaneous neuronal activity in awake MMP-9−/− mice and enhances the response to acute challenge by the excitotoxin kainate. Thus, MMP-9-dependent proteolysis regulates several aspects of circuit maturation to constrain excitability throughout life. PMID:26093382

  9. A thermosyphon heat pipe cooler for high power LEDs cooling

    NASA Astrophysics Data System (ADS)

    Li, Ji; Tian, Wenkai; Lv, Lucang

    2016-08-01

    Light emitting diode (LED) cooling faces an increasingly serious high-heat-flux challenge as input power and diode density rise. The proposed unique thermosyphon heat pipe heat sink is particularly suitable for cooling high-power-density LED chips and other electronics, with a heat dissipation potential of up to 280 W over an area of 20 mm × 22 mm (>60 W/cm2) under natural air convection. Meanwhile, a thorough visualization investigation was carried out to explore the two-phase flow characteristics in the proposed thermosyphon heat pipe. Implementing this novel thermosyphon heat pipe heat sink in the cooling of a commercial 100 W LED integrated chip, a very low apparent thermal resistance of 0.34 K/W was obtained under natural air convection, aided by enhanced boiling heat transfer on the evaporation side and enhanced natural air convection on the condensation side.

  10. Gravity dependence of the effect of optokinetic stimulation on the subjective visual vertical.

    PubMed

    Ward, Bryan K; Bockisch, Christopher J; Caramia, Nicoletta; Bertolini, Giovanni; Tarnutzer, Alexander Andrea

    2017-05-01

    Accurate and precise estimates of direction of gravity are essential for spatial orientation. According to Bayesian theory, multisensory vestibular, visual, and proprioceptive input is centrally integrated in a weighted fashion based on the reliability of the component sensory signals. For otolithic input, a decreasing signal-to-noise ratio was demonstrated with increasing roll angle. We hypothesized that the weights of vestibular (otolithic) and extravestibular (visual/proprioceptive) sensors are roll-angle dependent and predicted an increased weight of extravestibular cues with increasing roll angle, potentially following the Bayesian hypothesis. To probe this concept, the subjective visual vertical (SVV) was assessed in different roll positions (≤ ±120°, steps = 30°, n = 10) with/without presenting an optokinetic stimulus (velocity = ±60°/s). The optokinetic stimulus biased the SVV toward the direction of stimulus rotation for roll angles ≥ ±30° (P < 0.005). Offsets grew from 3.9 ± 1.8° (upright) to 22.1 ± 11.8° (±120° roll tilt, P < 0.001). Trial-to-trial variability increased with roll angle, demonstrating a nonsignificant increase when providing optokinetic stimulation. Variability and optokinetic bias were correlated (R² = 0.71, slope = 0.71, 95% confidence interval = 0.57-0.86). An optimal-observer model combining an optokinetic bias with vestibular input reproduced measured errors closely. These findings support the hypothesis of a weighted multisensory integration when estimating direction of gravity with optokinetic stimulation. Visual input was weighted more when vestibular input became less reliable, i.e., at larger roll-tilt angles. However, according to Bayesian theory, the variability of combined cues is always lower than the variability of each source cue.
If the observed increase in variability, although nonsignificant, is true, either it must depend on an additional source of variability, added after SVV computation, or it would conflict with the Bayesian hypothesis. NEW & NOTEWORTHY Applying a rotating optokinetic stimulus while recording the subjective visual vertical at different whole-body roll angles, we noted that the optokinetic-induced bias correlated with the roll angle. These findings support the hypothesis that the established optimal, reliability-based weighting of single-sensory cues for estimating direction of gravity could be extended to a bias caused by visual self-motion stimuli. Copyright © 2017 the American Physiological Society.
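The reliability-weighted integration described in this record can be sketched as inverse-variance cue fusion; the function name and toy values below are illustrative, not taken from the study.

```python
import numpy as np

def fuse_cues(estimates, variances):
    """Reliability-weighted fusion of sensory estimates of gravity direction.

    Each cue is weighted by its inverse variance, so a noisier cue
    (e.g., otolith input at large roll-tilt angles) contributes less.
    Returns the fused estimate and its combined (always smaller) variance.
    """
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / (1.0 / v).sum()            # normalized inverse-variance weights
    fused = float(np.dot(w, np.asarray(estimates, dtype=float)))
    fused_var = 1.0 / (1.0 / v).sum()          # Bayesian combined variance
    return fused, fused_var

# Example: a reliable visual cue (variance 1) pulls the fused estimate
# toward itself relative to a noisy vestibular cue (variance 4).
est, var = fuse_cues([10.0, 0.0], [1.0, 4.0])  # est = 8.0, var = 0.8
```

Note that the combined variance is below that of the best single cue, which is exactly the Bayesian prediction the record's nonsignificant variability increase appears to strain.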

  11. Flexible Coding of Visual Working Memory Representations during Distraction.

    PubMed

    Lorenc, Elizabeth S; Sreenivasan, Kartik K; Nee, Derek E; Vandenbroucke, Annelinde R E; D'Esposito, Mark

    2018-06-06

    Visual working memory (VWM) recruits a broad network of brain regions, including prefrontal, parietal, and visual cortices. Recent evidence supports a "sensory recruitment" model of VWM, whereby precise visual details are maintained in the same stimulus-selective regions responsible for perception. A key question in evaluating the sensory recruitment model is how VWM representations persist through distracting visual input, given that the early visual areas that putatively represent VWM content are susceptible to interference from visual stimulation. To address this question, we used a functional magnetic resonance imaging inverted encoding model approach to quantitatively assess the effect of distractors on VWM representations in early visual cortex and the intraparietal sulcus (IPS), another region previously implicated in the storage of VWM information. This approach allowed us to reconstruct VWM representations for orientation, both before and after visual interference, and to examine whether oriented distractors systematically biased these representations. In our human participants (both male and female), we found that orientation information was maintained simultaneously in early visual areas and IPS in anticipation of possible distraction, and these representations persisted in the absence of distraction. Importantly, early visual representations were susceptible to interference; VWM orientations reconstructed from visual cortex were significantly biased toward distractors, corresponding to a small attractive bias in behavior. In contrast, IPS representations did not show such a bias. These results provide quantitative insight into the effect of interference on VWM representations, and they suggest a dynamic tradeoff between visual and parietal regions that allows flexible adaptation to task demands in service of VWM. 
SIGNIFICANCE STATEMENT Despite considerable evidence that stimulus-selective visual regions maintain precise visual information in working memory, it remains unclear how these representations persist through subsequent input. Here, we used quantitative model-based fMRI analyses to reconstruct the contents of working memory and examine the effects of distracting input. Although representations in the early visual areas were systematically biased by distractors, those in the intraparietal sulcus appeared distractor-resistant. In contrast, early visual representations were most reliable in the absence of distraction. These results demonstrate the dynamic, adaptive nature of visual working memory processes, and provide quantitative insight into the ways in which representations can be affected by interference. Further, they suggest that current models of working memory should be revised to incorporate this flexibility. Copyright © 2018 the authors 0270-6474/18/385267-10$15.00/0.

  12. Intelligent Visual Input: A Graphical Method for Rapid Entry of Patient-Specific Data

    PubMed Central

    Bergeron, Bryan P.; Greenes, Robert A.

    1987-01-01

    Intelligent Visual Input (IVI) provides a rapid, graphical method of data entry for both expert system interaction and medical record keeping purposes. Key components of IVI include: a high-resolution graphic display; an interface supportive of rapid selection, i.e., one utilizing a mouse or light pen; algorithm simplification modules; and intelligent graphic algorithm expansion modules. A prototype IVI system, designed to facilitate entry of physical exam findings, is used to illustrate the potential advantages of this approach.

  13. Layer-specific input to distinct cell types in layer 6 of monkey primary visual cortex.

    PubMed

    Briggs, F; Callaway, E M

    2001-05-15

    Layer 6 of monkey V1 contains a physiologically and anatomically diverse population of excitatory pyramidal neurons. Distinctive arborization patterns of axons and dendrites within the functionally specialized cortical layers define eight types of layer 6 pyramidal neurons and suggest unique information processing roles for each cell type. To address how input sources contribute to cellular function, we examined the laminar sources of functional excitatory input onto individual layer 6 pyramidal neurons using scanning laser photostimulation. We find that excitatory input sources correlate with cell type. Class I neurons with axonal arbors selectively targeting magnocellular (M) recipient layer 4Calpha receive input from M-dominated layer 4B, whereas class I neurons whose axonal arbors target parvocellular (P) recipient layer 4Cbeta receive input from P-dominated layer 2/3. Surprisingly, these neuronal types do not differ significantly in the inputs they receive directly from layers 4Calpha or 4Cbeta. Class II cells, which lack dense axonal arbors within layer 4C, receive excitatory input from layers targeted by their local axons. Specifically, type IIA cells project axons to and receive input from the deep but not superficial layers. Type IIB neurons project to and receive input from the deepest and most superficial, but not middle layers. Type IIC neurons arborize throughout the cortical layers and tend to receive inputs from all cortical layers. These observations have implications for the functional roles of different layer 6 cell types in visual information processing.

  14. Effects of chronic iTBS-rTMS and enriched environment on visual cortex early critical period and visual pattern discrimination in dark-reared rats.

    PubMed

    Castillo-Padilla, Diana V; Funke, Klaus

    2016-01-01

    Early cortical critical period resembles a state of enhanced neuronal plasticity enabling the establishment of specific neuronal connections during first sensory experience. Visual performance with regard to pattern discrimination is impaired if the cortex is deprived from visual input during the critical period. We wondered how unspecific activation of the visual cortex before closure of the critical period using repetitive transcranial magnetic stimulation (rTMS) could affect the critical period and the visual performance of the experimental animals. Would it cause premature closure of the plastic state and thus worsen experience-dependent visual performance, or would it be able to preserve plasticity? Effects of intermittent theta-burst stimulation (iTBS) were compared with those of an enriched environment (EE) during dark-rearing (DR) from birth. Rats dark-reared in a standard cage showed poor improvement in a visual pattern discrimination task, while rats housed in EE or treated with iTBS showed a performance indistinguishable from rats reared in normal light/dark cycle. The behavioral effects were accompanied by correlated changes in the expression of brain-derived neurotrophic factor (BDNF) and atypical PKC (PKCζ/PKMζ), two factors controlling stabilization of synaptic potentiation. It appears that not only nonvisual sensory activity and exercise but also cortical activation induced by rTMS has the potential to alleviate the effects of DR on cortical development, most likely due to stimulation of BDNF synthesis and release. As we showed previously, iTBS reduced the expression of parvalbumin in inhibitory cortical interneurons, indicating that modulation of the activity of fast-spiking interneurons contributes to the observed effects of iTBS. © 2015 Wiley Periodicals, Inc.

  15. Sleep Disturbances among Persons Who Are Visually Impaired: Survey of Dog Guide Users.

    ERIC Educational Resources Information Center

    Fouladi, Massoud K.; Moseley, Merrick J.; Jones, Helen S.; Tobin, Michael J.

    1998-01-01

    A survey completed by 1237 adults with severe visual impairments found that 20% described the quality of their sleep as poor or very poor. Exercise was associated with better sleep and depression with poorer sleep. However, visual acuity did not predict sleep quality, casting doubt on the idea that restricted visual input (light) causes sleep…

  16. Locomotor Sensory Organization Test: How Sensory Conflict Affects the Temporal Structure of Sway Variability During Gait.

    PubMed

    Chien, Jung Hung; Mukherjee, Mukul; Siu, Ka-Chun; Stergiou, Nicholas

    2016-05-01

    When maintaining postural stability under increased sensory conflict, a more rigid response is used in which the available degrees of freedom are essentially frozen. The current study investigated whether such a strategy is also utilized during more dynamic situations of postural control, as is the case with walking. This study attempted to answer this question by using the Locomotor Sensory Organization Test (LSOT). This apparatus incorporates SOT-inspired perturbations of the visual and the somatosensory system. Ten healthy young adults performed the six conditions of the traditional SOT and the corresponding six conditions on the LSOT. The temporal structure of sway variability was evaluated from all conditions. The results showed that in the anterior-posterior direction somatosensory input is crucial for postural control during both walking and standing; visual input also had an effect but was not as prominent as the somatosensory input. In the medial-lateral direction and with respect to walking, visual input has a much larger effect than somatosensory input, possibly due to the added contributions of peripheral vision during walking; in standing, such contributions may not be as significant for postural control. In sum, as sensory conflict increases, more rigid and regular sway patterns are found during standing, confirming previous results in the literature; the opposite was the case with walking, where more exploratory and adaptive movement patterns are present.

  17. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    NASA Technical Reports Server (NTRS)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general-purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to discern, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables that appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
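One common way to realize the two-dimensional mapping described in this record is principal component analysis; the sketch below (plain NumPy, illustrative names) projects a dataset onto its top two components for plotting.

```python
import numpy as np

def project_2d(X):
    """Project an (n_samples, n_features) array onto its first two
    principal components, a simple linear dimension reduction for
    visualizing which inputs dominate the variance in a dataset."""
    Xc = X - X.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                           # top-2 PC coordinates

rng = np.random.default_rng(0)
coords = project_2d(rng.normal(size=(100, 10)))    # 100 points, now 2-D
```

By the ordering of singular values, the first output column always carries at least as much variance as the second, so a scatter plot of `coords` shows the dominant structure of the data.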

  18. Spatial attention improves the quality of population codes in human visual cortex.

    PubMed

    Saproo, Sameer; Serences, John T

    2010-08-01

    Selective attention enables sensory input from behaviorally relevant stimuli to be processed in greater detail, so that these stimuli can more accurately influence thoughts, actions, and future goals. Attention has been shown to modulate the spiking activity of single feature-selective neurons that encode basic stimulus properties (color, orientation, etc.). However, the combined output from many such neurons is required to form stable representations of relevant objects and little empirical work has formally investigated the relationship between attentional modulations on population responses and improvements in encoding precision. Here, we used functional MRI and voxel-based feature tuning functions to show that spatial attention induces a multiplicative scaling in orientation-selective population response profiles in early visual cortex. In turn, this multiplicative scaling correlates with an improvement in encoding precision, as evidenced by a concurrent increase in the mutual information between population responses and the orientation of attended stimuli. These data therefore demonstrate how multiplicative scaling of neural responses provides at least one mechanism by which spatial attention may improve the encoding precision of population codes. Increased encoding precision in early visual areas may then enhance the speed and accuracy of perceptual decisions computed by higher-order neural mechanisms.
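The multiplicative scaling reported in this record can be illustrated with a toy orientation tuning curve; the Gaussian form, width, and gain values are assumptions for illustration only.

```python
import numpy as np

def tuning(theta, pref, gain=1.0, width=20.0):
    """Orientation tuning (180°-periodic Gaussian, angles in degrees).
    Attention is modeled as a multiplicative gain: the entire response
    profile is scaled up without changing its shape or preferred angle."""
    d = (theta - pref + 90.0) % 180.0 - 90.0       # wrapped orientation difference
    return gain * np.exp(-d**2 / (2.0 * width**2))

orients = np.arange(0.0, 180.0, 1.0)
unattended = tuning(orients, pref=45.0, gain=1.0)
attended = tuning(orients, pref=45.0, gain=1.5)    # same profile, scaled response
```

The scaled profile carries more mutual information about the stimulus simply because the signal grows relative to any fixed noise level, which is the intuition behind the reported link between gain and encoding precision.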

  19. Bimanual Coordination Learning with Different Augmented Feedback Modalities and Information Types

    PubMed Central

    Chiou, Shiau-Chuen; Chang, Erik Chihhung

    2016-01-01

    Previous studies have shown that bimanual coordination learning is more resistant to the removal of augmented feedback when acquired with auditory than with visual channel. However, it is unclear whether this differential “guidance effect” between feedback modalities is due to enhanced sensorimotor integration via the non-dominant auditory channel or strengthened linkage to kinesthetic information under rhythmic input. The current study aimed to examine how modalities (visual vs. auditory) and information types (continuous visuospatial vs. discrete rhythmic) of concurrent augmented feedback influence bimanual coordination learning. Participants either learned a 90°-out-of-phase pattern for three consecutive days with Lissajous feedback indicating the integrated position of both arms, or with visual or auditory rhythmic feedback reflecting the relative timing of the movement. The results showed diverse performance change after practice when the feedback was removed between Lissajous and the other two rhythmic groups, indicating that the guidance effect may be modulated by the type of information provided during practice. Moreover, significant performance improvement in the dual-task condition where the irregular rhythm counting task was applied as a secondary task also suggested that lower involvement of conscious control may result in better performance in bimanual coordination. PMID:26895286

  1. Rapid interactions between lexical semantic and word form analysis during word recognition in context: evidence from ERPs.

    PubMed

    Kim, Albert; Lai, Vicky

    2012-05-01

    We used ERPs to investigate the time course of interactions between lexical semantic and sublexical visual word form processing during word recognition. Participants read sentence-embedded pseudowords that orthographically resembled a contextually supported real word (e.g., "She measured the flour so she could bake a ceke…") or did not (e.g., "She measured the flour so she could bake a tont…") along with nonword consonant strings (e.g., "She measured the flour so she could bake a srdt…"). Pseudowords that resembled a contextually supported real word ("ceke") elicited an enhanced positivity at 130 msec (P130), relative to real words (e.g., "She measured the flour so she could bake a cake…"). Pseudowords that did not resemble a plausible real word ("tont") enhanced the N170 component, as did nonword consonant strings ("srdt"). The effect pattern shows that the visual word recognition system is, perhaps counterintuitively, more rapidly sensitive to minor than to flagrant deviations from contextually predicted inputs. The findings are consistent with rapid interactions between lexical and sublexical representations during word recognition, in which rapid lexical access of a contextually supported word (CAKE) provides top-down excitation of form features ("cake"), highlighting the anomaly of an unexpected word "ceke."

  2. Digital implementation of a neural network for imaging

    NASA Astrophysics Data System (ADS)

    Wood, Richard; McGlashan, Alex; Yatulis, Jay; Mascher, Peter; Bruce, Ian

    2012-10-01

    This paper outlines the design and testing of a digital imaging system that utilizes an artificial neural network with unsupervised and supervised learning to convert streaming input (real time) image space into parameter space. The primary objective of this work is to investigate the effectiveness of using a neural network to significantly reduce the information density of streaming images so that objects can be readily identified by a limited set of primary parameters and act as an enhanced human machine interface (HMI). Many applications are envisioned including use in biomedical imaging, anomaly detection and as an assistive device for the visually impaired. A digital circuit was designed and tested using a Field Programmable Gate Array (FPGA) and an off the shelf digital camera. Our results indicate that the networks can be readily trained when subject to limited sets of objects such as the alphabet. We can also separate limited object sets with rotational and positional invariance. The results also show that limited visual fields form with only local connectivity.

  3. Fourier domain image fusion for differential X-ray phase-contrast breast imaging.

    PubMed

    Coello, Eduardo; Sperl, Jonathan I; Bequé, Dirk; Benz, Tobias; Scherer, Kai; Herzen, Julia; Sztrókay-Gaul, Anikó; Hellerhoff, Karin; Pfeiffer, Franz; Cozzini, Cristina; Grandl, Susanne

    2017-04-01

    X-Ray Phase-Contrast (XPC) imaging is a novel technology with a great potential for applications in clinical practice, with breast imaging being of special interest. This work introduces an intuitive methodology to combine and visualize relevant diagnostic features, present in the X-ray attenuation, phase shift and scattering information retrieved in XPC imaging, using a Fourier domain fusion algorithm. The method allows complementary information from the three acquired signals to be presented in a single image, minimizing the noise component and maintaining visual similarity to a conventional X-ray image, but with noticeable enhancement in diagnostic features, details and resolution. Radiologists experienced in mammography applied the image fusion method to XPC measurements of mastectomy samples and evaluated the feature content of each input and the fused image. This assessment confirmed that all the relevant diagnostic features contained in the XPC images were present in the fused image as well. Copyright © 2017 Elsevier B.V. All rights reserved.
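As a rough illustration of frequency-domain fusion (not the authors' algorithm, whose signal-specific weighting and noise handling are beyond this sketch), two same-sized images can be combined by keeping, at each spatial frequency, the coefficient with greater magnitude:

```python
import numpy as np

def fourier_fuse(a, b):
    """Toy Fourier-domain fusion of two same-shaped grayscale images:
    per frequency, keep the FFT coefficient with the larger magnitude,
    then invert back to the image domain."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    fused = np.where(np.abs(Fa) >= np.abs(Fb), Fa, Fb)
    return np.real(np.fft.ifft2(fused))
```

Fusing an image with itself (or with a blank image) returns the original, a useful sanity check that the round trip through the Fourier domain preserves content.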

  4. Learning receptive fields using predictive feedback.

    PubMed

    Jehee, Janneke F M; Rothkopf, Constantin; Beck, Jeffrey M; Ballard, Dana H

    2006-01-01

    Previously, it was suggested that feedback connections from higher- to lower-level areas carry predictions of lower-level neural activities, whereas feedforward connections carry the residual error between the predictions and the actual lower-level activities [Rao, R.P.N., Ballard, D.H., 1999. Nature Neuroscience 2, 79-87.]. A computational model implementing the hypothesis learned simple cell receptive fields when exposed to natural images. Here, we use predictive feedback to explain tuning properties in medial superior temporal area (MST). We implement the hypothesis using a new, biologically plausible, algorithm based on matching pursuit, which retains all the features of the previous implementation, including its ability to efficiently encode input. When presented with natural images, the model developed receptive field properties as found in primary visual cortex. In addition, when exposed to visual motion input resulting from movements through space, the model learned receptive field properties resembling those in MST. These results corroborate the idea that predictive feedback is a general principle used by the visual system to efficiently encode natural input.

  5. The utility of modeling word identification from visual input within models of eye movements in reading

    PubMed Central

    Bicknell, Klinton; Levy, Roger

    2012-01-01

    Decades of empirical work have shown that a range of eye movement phenomena in reading are sensitive to the details of the process of word identification. Despite this, major models of eye movement control in reading do not explicitly model word identification from visual input. This paper presents an argument for developing models of eye movements that do include detailed models of word identification. Specifically, we argue that insights into eye movement behavior can be gained by understanding which phenomena naturally arise from an account in which the eyes move for efficient word identification, and that one important use of such models is to test which eye movement phenomena can be understood this way. As an extended case study, we present evidence from an extension of a previous model of eye movement control in reading that does explicitly model word identification from visual input, Mr. Chips (Legge, Klitz, & Tjan, 1997), to test two proposals for the effect of using linguistic context on reading efficiency. PMID:23074362

  6. Thermal input control and enhancement for laser based residual stress measurements using liquid temperature indicating coatings

    DOEpatents

    Pechersky, M.J.

    1999-07-06

    An improved method for measuring residual stress in a material is disclosed, comprising the steps of applying a spot of temperature indicating coating to the surface to be studied, establishing a speckle pattern surrounding the spot of coating with a first laser, and then heating the spot of coating with a far infrared laser until the surface plastically deforms. Comparing the speckle patterns before and after deformation by subtracting one pattern from the other produces a fringe pattern that serves as a visual and quantitative indication of the degree to which the plasticized surface responded to the stress during heating and enables calculation of the stress. 3 figs.

  7. Thermal input control and enhancement for laser based residual stress measurements using liquid temperature indicating coatings

    DOEpatents

    Pechersky, Martin J.

    1999-01-01

    An improved method for measuring residual stress in a material comprising the steps of applying a spot of temperature indicating coating to the surface to be studied, establishing a speckle pattern surrounding the spot of coating with a first laser, and then heating the spot of coating with a far infrared laser until the surface plastically deforms. Comparing the speckle patterns before and after deformation by subtracting one pattern from the other produces a fringe pattern that serves as a visual and quantitative indication of the degree to which the plasticized surface responded to the stress during heating and enables calculation of the stress.

  8. Enhanced image fusion using directional contrast rules in fuzzy transform domain.

    PubMed

    Nandal, Amita; Rosales, Hamurabi Gamboa

    2016-01-01

    In this paper a novel image fusion algorithm based on directional contrast in fuzzy transform (FTR) domain is proposed. Input images to be fused are first divided into several non-overlapping blocks. The components of these sub-blocks are fused using directional contrast based fuzzy fusion rule in FTR domain. The fused sub-blocks are then transformed into original size blocks using inverse-FTR. Further, these inverse transformed blocks are fused according to select maximum based fusion rule for reconstructing the final fused image. The proposed fusion algorithm is both visually and quantitatively compared with other standard and recent fusion algorithms. Experimental results demonstrate that the proposed method generates better results than the other methods.
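The block-wise select-maximum step can be sketched as follows; the fuzzy-transform stage and directional-contrast rule of the paper are omitted, and block variance stands in as a simple activity measure.

```python
import numpy as np

def fuse_blocks(a, b, block=8):
    """For each non-overlapping block, keep the input block with greater
    activity (variance here, as a crude contrast proxy). A simplified
    stand-in for a select-maximum fusion rule."""
    out = np.empty_like(a)
    h, w = a.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            pa = a[i:i + block, j:j + block]
            pb = b[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = pa if pa.var() >= pb.var() else pb
    return out
```

With two inputs that each carry detail in a different region, the fused result inherits the detailed block from whichever input has it, which is the basic behavior a select-maximum rule is meant to deliver.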

  9. Modeling the impact of common noise inputs on the network activity of retinal ganglion cells

    PubMed Central

    Ahmadian, Yashar; Shlens, Jonathon; Pillow, Jonathan W.; Kulkarni, Jayant; Litke, Alan M.; Chichilnisky, E. J.; Simoncelli, Eero; Paninski, Liam

    2013-01-01

    Synchronized spontaneous firing among retinal ganglion cells (RGCs), on timescales faster than visual responses, has been reported in many studies. Two candidate mechanisms of synchronized firing include direct coupling and shared noisy inputs. In neighboring parasol cells of primate retina, which exhibit rapid synchronized firing that has been studied extensively, recent experimental work indicates that direct electrical or synaptic coupling is weak, but shared synaptic input in the absence of modulated stimuli is strong. However, previous modeling efforts have not accounted for this aspect of firing in the parasol cell population. Here we develop a new model that incorporates the effects of common noise, and apply it to analyze the light responses and synchronized firing of a large, densely-sampled network of over 250 simultaneously recorded parasol cells. We use a generalized linear model in which the spike rate in each cell is determined by the linear combination of the spatio-temporally filtered visual input, the temporally filtered prior spikes of that cell, and unobserved sources representing common noise. The model accurately captures the statistical structure of the spike trains and the encoding of the visual stimulus, without the direct coupling assumption present in previous modeling work. Finally, we examined the problem of decoding the visual stimulus from the spike train given the estimated parameters. The common-noise model produces Bayesian decoding performance as accurate as that of a model with direct coupling, but with significantly more robustness to spike timing perturbations. PMID:22203465
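
    The generative model described in the abstract can be sketched compactly. This toy version uses scalar weights and a one-bin spike history in place of the paper's spatio-temporal filters, and treats the common noise as a known Gaussian trace rather than an unobserved source inferred from data.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
stim = rng.standard_normal(T)          # filtered visual input (toy, 1-D)
k = 0.8                                 # stimulus filter weight (scalar)
h = -2.0                                # post-spike self-suppression weight
common = 0.3 * rng.standard_normal(T)   # shared noise driving the cell

spikes = np.zeros(T, dtype=int)
for t in range(T):
    history = h * (spikes[t - 1] if t > 0 else 0)
    # Conditional intensity: exponential of the linear combination of
    # stimulus drive, spike history, and common noise.
    rate = np.exp(k * stim[t] + history + common[t])
    spikes[t] = rng.poisson(rate * 0.1)  # 0.1 = bin width (arbitrary units)
```

    Correlating the `common` trace across several simulated cells would reproduce synchronized firing without any direct coupling term, which is the point of the model.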

  10. Forecasting hotspots using predictive visual analytics approach

    DOEpatents

    Maciejewski, Ross; Hafen, Ryan; Rudolph, Stephen; Cleveland, William; Ebert, David

    2014-12-30

    A method for forecasting hotspots is provided. The method may include the steps of receiving input data at an input of the computational device, generating a temporal prediction based on the input data, generating a geospatial prediction based on the input data, and generating output data based on the temporal and geospatial predictions. The output data may be configured to display at least one user interface at an output of the computational device.
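
    The claimed steps can be sketched with placeholder estimators. The moving-average temporal model and mean-location geospatial model below are illustrative assumptions; the patent does not fix the prediction methods.

```python
import numpy as np

def forecast_hotspots(counts: np.ndarray, coords: np.ndarray):
    # Temporal prediction: 3-point moving average of recent event counts.
    temporal = counts[-3:].mean()
    # Geospatial prediction: estimated hotspot center and spatial spread.
    center = coords.mean(axis=0)
    spread = coords.std(axis=0)
    return temporal, center, spread

counts = np.array([4, 6, 8, 10, 12])                  # events per period
coords = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 3.0]])  # event locations
t, c, s = forecast_hotspots(counts, coords)
```

    The combined output (expected count plus a spatial density estimate) is what a user interface would render as a hotspot forecast.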

  11. Neuron analysis of visual perception

    NASA Technical Reports Server (NTRS)

    Chow, K. L.

    1980-01-01

    The receptive fields of single cells in the visual system of the cat and squirrel monkey were studied, investigating the vestibular input affecting the cells and the cells' responses during the visual discrimination learning process. The receptive field characteristics of the rabbit visual system, its normal development, its abnormal development following visual deprivation, and the structural and functional re-organization of the visual system following neonatal and prenatal surgery were also studied. The results of each individual part of each investigation are detailed.

  12. New false color mapping for image fusion

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Walraven, Jan

    1996-03-01

    A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image-processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
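
    The four steps translate directly into array operations. In the sketch below the common component is taken as the pixel-wise minimum, which is an assumption (the abstract does not specify the operator), and out-of-range values are clipped for display.

```python
import numpy as np

def fuse_false_color(va: np.ndarray, ir: np.ndarray) -> np.ndarray:
    common = np.minimum(va, ir)            # 1. common component (assumed: min)
    unique_va = va - common                # 2. unique component of each image
    unique_ir = ir - common
    red = np.clip(va - unique_ir, 0, 1)    # 3. subtract the other's unique part
    green = np.clip(ir - unique_va, 0, 1)
    blue = np.zeros_like(va)
    return np.stack([red, green, blue], axis=-1)  # 4. red/green display

visual = np.array([[0.8, 0.2]])   # toy visible-band image
thermal = np.array([[0.3, 0.6]])  # toy thermal image
rgb = fuse_false_color(visual, thermal)
```

    Pixels dominated by the visible band come out red and thermally dominated pixels green, which is how sensor-specific detail is kept visible in the fused rendering.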

  13. Modality-Driven Classification and Visualization of Ensemble Variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald

    Paper for the IEEE Visualization Conference. Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space.

  14. Natural image sequences constrain dynamic receptive fields and imply a sparse code.

    PubMed

    Häusler, Chris; Susemihl, Alex; Nawrot, Martin P

    2013-11-06

    In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  15. Link between orientation and retinotopic maps in primary visual cortex

    PubMed Central

    Paik, Se-Bum; Ringach, Dario L.

    2012-01-01

    Maps representing the preference of neurons for the location and orientation of a stimulus on the visual field are a hallmark of primary visual cortex. It is not yet known how these maps develop and what function they play in visual processing. One hypothesis postulates that orientation maps are initially seeded by the spatial interference of ON- and OFF-center retinal receptive field mosaics. Here we show that such a mechanism predicts a link between the layout of orientation preferences around singularities of different signs and the cardinal axes of the retinotopic map. Moreover, we confirm the predicted relationship holds in tree shrew primary visual cortex. These findings provide additional support for the notion that spatially structured input from the retina may provide a blueprint for the early development of cortical maps and receptive fields. More broadly, it raises the possibility that spatially structured input from the periphery may shape the organization of primary sensory cortex of other modalities as well. PMID:22509015

  16. A 25-Gbps high-sensitivity optical receiver with 10-Gbps photodiode using inductive input coupling for optical interconnects

    NASA Astrophysics Data System (ADS)

    Oku, Hideki; Narita, Kiyomi; Shiraishi, Takashi; Ide, Satoshi; Tanaka, Kazuhiro

    2012-01-01

    A 25-Gbps high-sensitivity optical receiver with a 10-Gbps photodiode (PD) using inductive input coupling has been demonstrated for optical interconnects. We introduced the inductive input coupling technique to achieve the 25-Gbps optical receiver using a 10-Gbps PD. We implemented an input inductor (Lin) between the PD and trans-impedance amplifier (TIA), and optimized the inductance to enhance the bandwidth and reduce the input-referred noise current through simulation with the RF PD model. Near the resonance frequency of the tank circuit formed by the PD capacitance, Lin, and the TIA input capacitance, the PD photo-current through Lin into the TIA is enhanced. This resonance has the effects of enhancing the bandwidth at the TIA input and reducing the input equivalent value of the noise current from the TIA. We fabricated the 25-Gbps optical receiver with the 10-Gbps PD using the inductive input coupling technique. Due to the application of the inductor, the receiver bandwidth is enhanced from 10 GHz to 14.2 GHz. Thanks to this wide-band and low-noise performance, we were able to improve the sensitivity at an error rate of 1E-12 from non-error-free operation to -6.5 dBm. These results indicate that our technique is promising for cost-effective optical interconnects.
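
    The tank resonance can be estimated from a simplified lumped model in which the input inductor resonates with the series combination of the PD and TIA input capacitances. The component values below are hypothetical, not taken from the paper.

```python
import math

def resonance_hz(l_in: float, c_pd: float, c_tia: float) -> float:
    # Series combination of the two capacitances resonating with the
    # input inductor: f = 1 / (2 * pi * sqrt(L * C_series)).
    c_series = c_pd * c_tia / (c_pd + c_tia)
    return 1.0 / (2.0 * math.pi * math.sqrt(l_in * c_series))

# Hypothetical values: 1 nH inductor, 150 fF PD, 100 fF TIA input.
f_res = resonance_hz(1e-9, 150e-15, 100e-15)  # ~20.5 GHz
```

    Placing this resonance somewhat above the PD's native bandwidth is what peaks the photo-current into the TIA and extends the receiver bandwidth.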

  17. Anticipation in Real-world Scenes: The Role of Visual Context and Visual Memory

    ERIC Educational Resources Information Center

    Coco, Moreno I.; Keller, Frank; Malcolm, George L.

    2016-01-01

    The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically…

  18. Infant Face Preferences after Binocular Visual Deprivation

    ERIC Educational Resources Information Center

    Mondloch, Catherine J.; Lewis, Terri L.; Levin, Alex V.; Maurer, Daphne

    2013-01-01

    Early visual deprivation impairs some, but not all, aspects of face perception. We investigated the possible developmental roots of later abnormalities by using a face detection task to test infants treated for bilateral congenital cataract within 1 hour of their first focused visual input. The seven patients were between 5 and 12 weeks old…

  19. Adding sound to theory of mind: Comparing children's development of mental-state understanding in the auditory and visual realms.

    PubMed

    Hasni, Anita A; Adamson, Lauren B; Williamson, Rebecca A; Robins, Diana L

    2017-12-01

    Theory of mind (ToM) gradually develops during the preschool years. Measures of ToM usually target visual experience, but auditory experiences also provide valuable social information. Given differences between the visual and auditory modalities (e.g., sights persist, sounds fade) and the important role environmental input plays in social-cognitive development, we asked whether modality might influence the progression of ToM development. The current study expands Wellman and Liu's ToM scale (2004) by testing 66 preschoolers using five standard visual ToM tasks and five newly crafted auditory ToM tasks. Age and gender effects were found, with 4- and 5-year-olds demonstrating greater ToM abilities than 3-year-olds and girls passing more tasks than boys; there was no significant effect of modality. Both visual and auditory tasks formed a scalable set. These results indicate that there is considerable consistency in when children are able to use visual and auditory inputs to reason about various aspects of others' mental states. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. How cortical neurons help us see: visual recognition in the human brain

    PubMed Central

    Blumberg, Julie; Kreiman, Gabriel

    2010-01-01

    Through a series of complex transformations, the pixel-like input to the retina is converted into rich visual perceptions that constitute an integral part of visual recognition. Multiple visual problems arise due to damage or developmental abnormalities in the cortex of the brain. Here, we provide an overview of how visual information is processed along the ventral visual cortex in the human brain. We discuss how neurophysiological recordings in macaque monkeys and in humans can help us understand the computations performed by visual cortex. PMID:20811161

  1. Gating-signal propagation by a feed-forward neural motif

    NASA Astrophysics Data System (ADS)

    Liang, Xiaoming; Yanchuk, Serhiy; Zhao, Liang

    2013-07-01

    We study the signal propagation in a feed-forward motif consisting of three bistable neurons: Two input neurons receive input signals and the third output neuron generates the output. We find that a weak input signal can be propagated from the input neurons to the output neuron without amplitude attenuation. We further reveal that the initial states of the input neurons and the coupling strength act as signal gates and determine whether the propagation is enhanced or not. We also investigate the effect of the input signal frequency on enhanced signal propagation.
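
    The motif can be illustrated with the canonical double-well (bistable) unit dx/dt = x - x^3 + drive. The coupling scheme and parameter values below are illustrative assumptions, not the paper's equations.

```python
def step(x: float, drive: float, dt: float = 0.01) -> float:
    # Euler step of the double-well unit dx/dt = x - x**3 + drive;
    # without drive, the stable states sit near x = -1 and x = +1.
    return x + dt * (x - x**3 + drive)

# Two "up"-state input neurons relay a weak constant signal to an
# output neuron that starts in the "down" state.
x1 = x2 = 1.0
x_out = -1.0
signal, coupling = 0.3, 0.5
for _ in range(5000):
    x1 = step(x1, signal)
    x2 = step(x2, signal)
    x_out = step(x_out, coupling * (x1 + x2))
# The combined drive exceeds the escape threshold of the double well,
# so the weak signal propagates: the output switches to its upper state.
```

    With both input neurons started in the lower state (x1 = x2 = -1.0), the same weak signal leaves the output in its lower well, illustrating how the initial states act as a gate.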

  2. Distinct roles of the cortical layers of area V1 in figure-ground segregation.

    PubMed

    Self, Matthew W; van Kerkoerle, Timo; Supèr, Hans; Roelfsema, Pieter R

    2013-11-04

    What roles do the different cortical layers play in visual processing? We recorded simultaneously from all layers of the primary visual cortex while monkeys performed a figure-ground segregation task. This task can be divided into different subprocesses that are thought to engage feedforward, horizontal, and feedback processes at different time points. These different connection types have different patterns of laminar terminations in V1 and can therefore be distinguished with laminar recordings. We found that the visual response started 40 ms after stimulus presentation in layers 4 and 6, which are targets of feedforward connections from the lateral geniculate nucleus and distribute activity to the other layers. Boundary detection started shortly after the visual response. In this phase, boundaries of the figure induced synaptic currents and stronger neuronal responses in upper layer 4 and the superficial layers ~70 ms after stimulus onset, consistent with the hypothesis that they are detected by horizontal connections. In the next phase, ~30 ms later, synaptic inputs arrived in layers 1, 2, and 5 that receive feedback from higher visual areas, which caused the filling in of the representation of the entire figure with enhanced neuronal activity. The present results reveal unique contributions of the different cortical layers to the formation of a visual percept. This new blueprint of laminar processing may generalize to other tasks and to other areas of the cerebral cortex, where the layers are likely to have roles similar to those in area V1. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Cross-modal reorganization in cochlear implant users: Auditory cortex contributes to visual face processing.

    PubMed

    Stropahl, Maren; Plotz, Karsten; Schönfeld, Rüdiger; Lenarz, Thomas; Sandmann, Pascale; Yovel, Galit; De Vos, Maarten; Debener, Stefan

    2015-11-01

    There is converging evidence that the auditory cortex takes over visual functions during a period of auditory deprivation. A residual pattern of cross-modal take-over may prevent the auditory cortex from adapting to restored sensory input as delivered by a cochlear implant (CI) and limit speech intelligibility with a CI. The aim of the present study was to investigate whether visual face processing in CI users activates auditory cortex and whether this has adaptive or maladaptive consequences. High-density electroencephalogram data were recorded from CI users (n=21) and age-matched normal hearing controls (n=21) performing a face versus house discrimination task. Lip reading and face recognition abilities were measured as well as speech intelligibility. Evaluation of event-related potential (ERP) topographies revealed significant group differences over occipito-temporal scalp regions. Distributed source analysis identified significantly higher activation in the right auditory cortex for CI users compared to NH controls, confirming visual take-over. Lip reading skills were significantly enhanced in the CI group and appeared to be particularly better after a longer duration of deafness, while face recognition was not significantly different between groups. However, auditory cortex activation in CI users was positively related to face recognition abilities. Our results confirm a cross-modal reorganization for ecologically valid visual stimuli in CI users. Furthermore, they suggest that residual take-over, which can persist even after adaptation to a CI, is not necessarily maladaptive. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Scientific Visualization and Computational Science: Natural Partners

    NASA Technical Reports Server (NTRS)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization is developing rapidly, stimulated by computational science, which is gaining acceptance as a third alternative to theory and experiment. Computational science is based on numerical simulations of mathematical models derived from theory. But each individual simulation is like a hypothetical experiment; initial conditions are specified, and the result is a record of the observed conditions. Experiments can be simulated for situations that cannot really be created or controlled. Results impossible to measure can be computed. Even for observable values, computed samples are typically much denser. Numerical simulations also extend scientific exploration where the mathematics is analytically intractable. Numerical simulations are used to study phenomena from subatomic to intergalactic scales and from abstract mathematical structures to pragmatic engineering of everyday objects. But computational science methods would be almost useless without visualization. The obvious reason is that the huge amounts of data produced require the high bandwidth of the human visual system, and interactivity adds to the power. Visualization systems also provide a single context for all the activities involved, from debugging the simulations, to exploring the data, to communicating the results. Most of the presentations today have their roots in image processing, where the fundamental task is: Given an image, extract information about the scene. Visualization has developed from computer graphics, and the inverse task: Given a scene description, make an image. Visualization extends the graphics paradigm by expanding the possible input. The goal is still to produce images; the difficulty is that the input is not a scene description displayable by standard graphics methods. Visualization techniques must either transform the data into a scene description or extend graphics techniques to display this odd input. Computational science is a fertile field for visualization research because the results vary so widely and include things that have no known appearance. The amount of data creates additional challenges for both hardware and software systems. Evaluations of visualization should ultimately reflect the insight gained into the scientific phenomena. So making good visualizations requires consideration of characteristics of the user and the purpose of the visualization. Knowledge about human perception and graphic design is also relevant. It is this breadth of knowledge that stimulates proposals for multidisciplinary visualization teams and intelligent visualization assistant software. Visualization is an immature field, but computational science is stimulating research on a broad front.

  5. WATERSHEED NUTRIENT INPUTS, PHYTOPLANKTON ACCUMULATION, AND C STOCKS IN CHESAPEAKE BAY

    EPA Science Inventory

    Inputs of N and P to Chesapeake Bay have been enhanced by anthropogenic activities. Fertilizers, developed areas, N emissions, and industrial effluents contribute to point and diffuse sources currently 2-20X higher than those from undisturbed watersheds. Enhanced nutrient inputs ...

  6. Neuroplasticity and amblyopia: vision at the balance point.

    PubMed

    Tailor, Vijay K; Schwarzkopf, D Samuel; Dahlmann-Noor, Annegret H

    2017-02-01

    New insights into triggers and brakes of plasticity in the visual system are being translated into new treatment approaches which may improve outcomes not only in children, but also in adults. Visual experience-driven plasticity is greatest in early childhood, triggered by maturation of inhibitory interneurons which facilitate strengthening of synchronous synaptic connections, and inactivation of others. Normal binocular development leads to progressive refinement of monocular visual acuity, stereoacuity and fusion of images from both eyes. At the end of the 'critical period', structural and functional brakes such as dampening of acetylcholine receptor signalling and formation of perineuronal nets limit further synaptic remodelling. Imbalanced visual input from the two eyes can lead to imbalanced neural processing and permanent visual deficits, the commonest of which is amblyopia. The efficacy of new behavioural, physical and pharmacological interventions aiming to balance visual input and visual processing have been described in humans, and some are currently under evaluation in randomised controlled trials. Outcomes may change amblyopia treatment for children and adults, but the safety of new approaches will need careful monitoring, as permanent adverse events may occur when plasticity is re-induced after the end of the critical period. Video abstract: http://links.lww.com/CONR/A42.

  7. Experience-dependent plasticity from eye opening enables lasting, visual cortex-dependent enhancement of motion vision.

    PubMed

    Prusky, Glen T; Silver, Byron D; Tschetter, Wayne W; Alam, Nazia M; Douglas, Robert M

    2008-09-24

    Developmentally regulated plasticity of vision has generally been associated with "sensitive" or "critical" periods in juvenile life, wherein visual deprivation leads to loss of visual function. Here we report an enabling form of visual plasticity that commences in infant rats from eye opening, in which daily threshold testing of optokinetic tracking, amid otherwise normal visual experience, stimulates enduring, visual cortex-dependent enhancement (>60%) of the spatial frequency threshold for tracking. The perceptual ability to use spatial frequency in discriminating between moving visual stimuli is also improved by the testing experience. The capacity for inducing enhancement is transitory and effectively limited to infancy; however, enhanced responses are not consolidated and maintained unless in-kind testing experience continues uninterrupted into juvenile life. The data show that selective visual experience from infancy can alone enable visual function. They also indicate that plasticity associated with visual deprivation may not be the only cause of developmental visual dysfunction, because we found that experientially inducing enhancement in late infancy, without subsequent reinforcement of the experience in early juvenile life, can lead to enduring loss of function.

  8. Comparing Auditory-Only and Audiovisual Word Learning for Children with Hearing Loss.

    PubMed

    McDaniel, Jena; Camarata, Stephen; Yoder, Paul

    2018-05-15

    Although reducing visual input to emphasize auditory cues is a common practice in pediatric auditory (re)habilitation, the extant literature offers minimal empirical evidence for whether unisensory auditory-only (AO) or multisensory audiovisual (AV) input is more beneficial to children with hearing loss for developing spoken language skills. Using an adapted alternating treatments single case research design, we evaluated the effectiveness and efficiency of a receptive word learning intervention with and without access to visual speechreading cues. Four preschool children with prelingual hearing loss participated. Based on probes without visual cues, three participants demonstrated strong evidence for learning in the AO and AV conditions relative to a control (no-teaching) condition. No participants demonstrated a differential rate of learning between AO and AV conditions. Neither an inhibitory effect predicted by a unisensory theory nor a beneficial effect predicted by a multisensory theory for providing visual cues was identified. Clinical implications are discussed.

  9. Virtual Earth System Laboratory (VESL): Effective Visualization of Earth System Data and Process Simulations

    NASA Astrophysics Data System (ADS)

    Quinn, J. D.; Larour, E. Y.; Cheng, D. L. C.; Halkides, D. J.

    2016-12-01

    The Virtual Earth System Laboratory (VESL) is a Web-based tool, under development at the Jet Propulsion Laboratory and UC Irvine, for the visualization of Earth System data and process simulations. It contains features geared toward a range of applications, spanning research and outreach. It offers an intuitive user interface, in which model inputs are changed using sliders and other interactive components. Current capabilities include simulation of polar ice sheet responses to climate forcing, based on NASA's Ice Sheet System Model (ISSM). We believe that the visualization of data is most effective when tailored to the target audience, and that many of the best practices for modern Web design/development can be applied directly to the visualization of data: use of negative space, color schemes, typography, accessibility standards, tooltips, et cetera. We present our prototype website, and invite input from potential users, including researchers, educators, and students.

  10. A neural mechanism of dynamic gating of task-relevant information by top-down influence in primary visual cortex.

    PubMed

    Kamiyama, Akikazu; Fujita, Kazuhisa; Kashimori, Yoshiki

    2016-12-01

    Visual recognition involves bidirectional information flow, which consists of bottom-up information coding from the retina and top-down information coding from higher visual areas. Recent studies have demonstrated the involvement of early visual areas such as the primary visual area (V1) in recognition and memory formation. V1 neurons are not passive transformers of sensory inputs but work as adaptive processors, changing their function according to behavioral context. Top-down signals affect the tuning properties of V1 neurons and contribute to the gating of sensory information relevant to behavior. However, little is known about the neuronal mechanism underlying the gating of task-relevant information in V1. To address this issue, we focus on task-dependent tuning modulations of V1 neurons in two tasks of perceptual learning. We develop a model of V1, which receives feedforward input from the lateral geniculate nucleus and top-down input from a higher visual area. We show here that a change in the balance between excitation and inhibition in V1 connectivity is necessary for gating task-relevant information in V1. The balance change accounts well for the modulations of the tuning characteristics and temporal properties of V1 neuronal responses. We also show that the balance change of V1 connectivity is shaped by top-down signals with temporal correlations reflecting the perceptual strategies of the two tasks. We propose a learning mechanism by which the synaptic balance is modulated. To conclude, the top-down signal changes the synaptic balance between excitation and inhibition in V1 connectivity, enabling early visual areas such as V1 to gate context-dependent information under multiple task performances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Visual-somatosensory integration and balance: evidence for psychophysical integrative differences in aging.

    PubMed

    Mahoney, Jeannette R; Holtzer, Roee; Verghese, Joe

    2014-01-01

    Research detailing multisensory integration (MSI) processes in aging and their association with clinically relevant outcomes is virtually non-existent. To our knowledge, the relationship between MSI and balance has not been well-established in aging. Given known alterations in unisensory processing with increasing age, the aims of the current study were to determine differential behavioral patterns of MSI in aging and investigate whether MSI was significantly associated with balance and fall-risk. Seventy healthy older adults (M = 75 years; 58% female) participated in the current study. Participants were instructed to make speeded responses to visual, somatosensory, and visual-somatosensory (VS) stimuli. Based on reaction times (RTs) to all stimuli, participants were classified into one of two groups (MSI or NO MSI), depending on their MSI RT benefit. Static balance was assessed using mean unipedal stance time. Overall, results revealed that RTs to VS stimuli were significantly shorter than those elicited to constituent unisensory conditions. Further, the current experimental design afforded differential patterns of multisensory processing, with 75% of the elderly sample demonstrating multisensory enhancements. Interestingly, 25% of older adults did not demonstrate multisensory RT facilitation; a finding that was attributed to extremely fast RTs overall and specifically in response to somatosensory inputs. Individuals in the NO MSI group maintained significantly better unipedal stance times and reported fewer falls, compared to elders in the MSI group. This study reveals the existence of differential patterns of multisensory processing in aging, while describing the clinical translational value of MSI enhancements in predicting balance and falls risk.

  12. Evaluating the operations underlying multisensory integration in the cat superior colliculus.

    PubMed

    Stanford, Terrence R; Quessy, Stephan; Stein, Barry E

    2005-07-13

    It is well established that superior colliculus (SC) multisensory neurons integrate cues from different senses; however, the mechanisms responsible for producing multisensory responses are poorly understood. Previous studies have shown that spatially congruent cues from different modalities (e.g., auditory and visual) yield enhanced responses and that the greatest relative enhancements occur for combinations of the least effective modality-specific stimuli. Although these phenomena are well documented, little is known about the mechanisms that underlie them, because no study has systematically examined the operation that multisensory neurons perform on their modality-specific inputs. The goal of this study was to evaluate the computations that multisensory neurons perform in combining the influences of stimuli from two modalities. The extracellular activities of single neurons in the SC of the cat were recorded in response to visual, auditory, and bimodal visual-auditory stimulation. Each neuron was tested across a range of stimulus intensities and multisensory responses evaluated against the null hypothesis of simple summation of unisensory influences. We found that the multisensory response could be superadditive, additive, or subadditive but that the computation was strongly dictated by the efficacies of the modality-specific stimulus components. Superadditivity was most common within a restricted range of near-threshold stimulus efficacies, whereas for the majority of stimuli, response magnitudes were consistent with the linear summation of modality-specific influences. In addition to providing a constraint for developing models of multisensory integration, the relationship between response mode and stimulus efficacy emphasizes the importance of considering stimulus parameters when inducing or interpreting multisensory phenomena.
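
    Two quantities standard in this literature can be computed directly: the multisensory enhancement relative to the best unisensory response, and a classification of the combined response against simple summation of the unisensory influences. The 10% tolerance band below is an illustrative choice, not the paper's statistical criterion.

```python
def enhancement_pct(multi: float, best_uni: float) -> float:
    # Multisensory enhancement relative to the best unisensory response
    # (responses in, e.g., impulses per trial).
    return 100.0 * (multi - best_uni) / best_uni

def additivity(multi: float, v: float, a: float) -> str:
    # Classify the combined response against the sum of the unisensory
    # responses, with an illustrative 10% tolerance band.
    s = v + a
    if multi > 1.1 * s:
        return "superadditive"
    if multi < 0.9 * s:
        return "subadditive"
    return "additive"

# Weakly effective unisensory stimuli often combine superadditively:
label = additivity(9.0, v=2.0, a=3.0)      # 9 > 1.1 * (2 + 3)
gain = enhancement_pct(9.0, best_uni=3.0)  # 200% enhancement
```

    The abstract's central observation corresponds to the pattern that `label` shifts from "superadditive" toward "additive" as the unisensory responses `v` and `a` grow.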

  13. Visual-Somatosensory Integration and Balance: Evidence for Psychophysical Integrative Differences in Aging

    PubMed Central

    Mahoney, Jeannette R.; Holtzer, Roee; Verghese, Joe

    2014-01-01

Research detailing multisensory integration (MSI) processes in aging and their association with clinically relevant outcomes is virtually non-existent. To our knowledge, the relationship between MSI and balance has not been well-established in aging. Given known alterations in unisensory processing with increasing age, the aims of the current study were to determine differential behavioral patterns of MSI in aging and investigate whether MSI was significantly associated with balance and fall-risk. Seventy healthy older adults (M = 75 years; 58% female) participated in the current study. Participants were instructed to make speeded responses to visual, somatosensory, and visual-somatosensory (VS) stimuli. Based on reaction times (RTs) to all stimuli, participants were classified into one of two groups (MSI or NO MSI), depending on their MSI RT benefit. Static balance was assessed using mean unipedal stance time. Overall, results revealed that RTs to VS stimuli were significantly shorter than those elicited to constituent unisensory conditions. Further, the current experimental design afforded differential patterns of multisensory processing, with 75% of the elderly sample demonstrating multisensory enhancements. Interestingly, 25% of older adults did not demonstrate multisensory RT facilitation, a finding that was attributed to extremely fast RTs overall and specifically in response to somatosensory inputs. Individuals in the NO MSI group maintained significantly better unipedal stance times and reported fewer falls, compared to elders in the MSI group. This study reveals the existence of differential patterns of multisensory processing in aging, while describing the clinical translational value of MSI enhancements in predicting balance and falls risk. PMID:25102664
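The grouping criterion in this study — whether bimodal (VS) reaction times beat the faster of the two unisensory conditions — can be sketched roughly as follows. The RT values and the zero-benefit cutoff are illustrative assumptions, not the study's exact analysis.

```python
import statistics

def msi_group(visual_rts, somato_rts, bimodal_rts):
    """Assign a participant to the MSI or NO MSI group based on whether
    mean visual-somatosensory (VS) RTs are faster than the faster of the
    two unisensory conditions (an illustrative RT-benefit criterion).

    RTs are in milliseconds; returns (group label, benefit in ms).
    """
    fastest_unisensory = min(statistics.mean(visual_rts),
                             statistics.mean(somato_rts))
    benefit_ms = fastest_unisensory - statistics.mean(bimodal_rts)
    return ("MSI", benefit_ms) if benefit_ms > 0 else ("NO MSI", benefit_ms)

# A participant whose bimodal RTs undercut the faster (somatosensory) condition:
group, benefit = msi_group([420, 435], [390, 400], [360, 370])
print(group, round(benefit, 1))  # MSI 30.0
```

Note how very fast somatosensory RTs shrink the available benefit, which is the explanation the abstract offers for the 25% of participants who showed no facilitation.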

  14. Auditory and visual interactions between the superior and inferior colliculi in the ferret.

    PubMed

    Stitt, Iain; Galindo-Leon, Edgar; Pieper, Florian; Hollensteiner, Karl J; Engler, Gerhard; Engel, Andreas K

    2015-05-01

    The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band-limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6-10 and 15-30 Hz. These visual LFP responses co-localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume-conducted signals from the neighboring SC. Visual responses in the IC occurred later than retinally driven superficial SC layers and earlier than deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  15. Inhibition to excitation ratio regulates visual system responses and behavior in vivo.

    PubMed

    Shen, Wanhua; McKeown, Caroline R; Demas, James A; Cline, Hollis T

    2011-11-01

    The balance of inhibitory to excitatory (I/E) synaptic inputs is thought to control information processing and behavioral output of the central nervous system. We sought to test the effects of the decreased or increased I/E ratio on visual circuit function and visually guided behavior in Xenopus tadpoles. We selectively decreased inhibitory synaptic transmission in optic tectal neurons by knocking down the γ2 subunit of the GABA(A) receptors (GABA(A)R) using antisense morpholino oligonucleotides or by expressing a peptide corresponding to an intracellular loop of the γ2 subunit, called ICL, which interferes with anchoring GABA(A)R at synapses. Recordings of miniature inhibitory postsynaptic currents (mIPSCs) and miniature excitatory PSCs (mEPSCs) showed that these treatments decreased the frequency of mIPSCs compared with control tectal neurons without affecting mEPSC frequency, resulting in an ∼50% decrease in the ratio of I/E synaptic input. ICL expression and γ2-subunit knockdown also decreased the ratio of optic nerve-evoked synaptic I/E responses. We recorded visually evoked responses from optic tectal neurons, in which the synaptic I/E ratio was decreased. Decreasing the synaptic I/E ratio in tectal neurons increased the variance of first spike latency in response to full-field visual stimulation, increased recurrent activity in the tectal circuit, enlarged spatial receptive fields, and lengthened the temporal integration window. We used the benzodiazepine, diazepam (DZ), to increase inhibitory synaptic activity. DZ increased optic nerve-evoked inhibitory transmission but did not affect evoked excitatory currents, resulting in an increase in the I/E ratio of ∼30%. Increasing the I/E ratio with DZ decreased the variance of first spike latency, decreased spatial receptive field size, and lengthened temporal receptive fields. 
Sequential recordings of spikes and excitatory and inhibitory synaptic inputs to the same visual stimuli demonstrated that decreasing or increasing the I/E ratio disrupted input/output relations. We assessed the effect of an altered I/E ratio on a visually guided behavior that requires the optic tectum. Increasing and decreasing I/E in tectal neurons blocked the tectally mediated visual avoidance behavior. Because ICL expression, γ2-subunit knockdown, and DZ did not directly affect excitatory synaptic transmission, we interpret the results of our study as evidence that partially decreasing or increasing the ratio of I/E disrupts several measures of visual system information processing and visually guided behavior in an intact vertebrate.

  16. Visible Social Interactions Do Not Support the Development of False Belief Understanding in the Absence of Linguistic Input: Evidence from Deaf Adult Homesigners.

    PubMed

    Gagne, Deanna L; Coppola, Marie

    2017-01-01

    Congenitally deaf individuals exhibit enhanced visuospatial abilities relative to normally hearing individuals. An early example is the increased sensitivity of deaf signers to stimuli in the visual periphery (Neville and Lawson, 1987a). While these enhancements are robust and extend across a number of visual and spatial skills, they seem not to extend to other domains which could potentially build on these enhancements. For example, congenitally deaf children, in the absence of adequate language exposure and acquisition, do not develop typical social cognition skills as measured by traditional Theory of Mind tasks. These delays/deficits occur despite their presumed lifetime use of visuo-perceptual abilities to infer the intentions and behaviors of others (e.g., Pyers and Senghas, 2009; O'Reilly et al., 2014). In a series of studies, we explore the limits on the plasticity of visually based socio-cognitive abilities, from perspective taking to Theory of Mind/False Belief, in rarely studied individuals: deaf adults who have not acquired a conventional language (Homesigners). We compared Homesigners' performance to that of two other understudied groups in the same culture: Deaf signers of an emerging language (Cohort 1 of Nicaraguan Sign Language), and hearing speakers of Spanish with minimal schooling. We found that homesigners performed equivalently to both comparison groups with respect to several visual socio-cognitive abilities: Perspective Taking (Levels 1 and 2), adapted from Masangkay et al. (1974), and the False Photograph task, adapted from Leslie and Thaiss (1992). However, a lifetime of visuo-perceptual experiences (observing the behavior and interactions of others) did not support success on False Belief tasks, even when linguistic demands were minimized. Participants in the comparison groups outperformed the Homesigners, but did not universally pass the False Belief tasks. 
Our results suggest that while some of the social development achievements of young typically developing children may be dissociable from their linguistic experiences, language and/or educational experiences clearly scaffold the transition into False Belief understanding. The lack of experience using a shared language cannot be overcome, even with the benefit of many years of observing others' behaviors and the potential neural reorganization and visuospatial enhancements resulting from deafness.

  17. Visible Social Interactions Do Not Support the Development of False Belief Understanding in the Absence of Linguistic Input: Evidence from Deaf Adult Homesigners

    PubMed Central

    Gagne, Deanna L.; Coppola, Marie

    2017-01-01

    Congenitally deaf individuals exhibit enhanced visuospatial abilities relative to normally hearing individuals. An early example is the increased sensitivity of deaf signers to stimuli in the visual periphery (Neville and Lawson, 1987a). While these enhancements are robust and extend across a number of visual and spatial skills, they seem not to extend to other domains which could potentially build on these enhancements. For example, congenitally deaf children, in the absence of adequate language exposure and acquisition, do not develop typical social cognition skills as measured by traditional Theory of Mind tasks. These delays/deficits occur despite their presumed lifetime use of visuo-perceptual abilities to infer the intentions and behaviors of others (e.g., Pyers and Senghas, 2009; O’Reilly et al., 2014). In a series of studies, we explore the limits on the plasticity of visually based socio-cognitive abilities, from perspective taking to Theory of Mind/False Belief, in rarely studied individuals: deaf adults who have not acquired a conventional language (Homesigners). We compared Homesigners’ performance to that of two other understudied groups in the same culture: Deaf signers of an emerging language (Cohort 1 of Nicaraguan Sign Language), and hearing speakers of Spanish with minimal schooling. We found that homesigners performed equivalently to both comparison groups with respect to several visual socio-cognitive abilities: Perspective Taking (Levels 1 and 2), adapted from Masangkay et al. (1974), and the False Photograph task, adapted from Leslie and Thaiss (1992). However, a lifetime of visuo-perceptual experiences (observing the behavior and interactions of others) did not support success on False Belief tasks, even when linguistic demands were minimized. Participants in the comparison groups outperformed the Homesigners, but did not universally pass the False Belief tasks. 
Our results suggest that while some of the social development achievements of young typically developing children may be dissociable from their linguistic experiences, language and/or educational experiences clearly scaffold the transition into False Belief understanding. The lack of experience using a shared language cannot be overcome, even with the benefit of many years of observing others’ behaviors and the potential neural reorganization and visuospatial enhancements resulting from deafness. PMID:28626432

  18. Facial recognition using enhanced pixelized image for simulated visual prosthesis.

    PubMed

Li, Ruonan; Zhang, Xudong; Zhang, Hui; Hu, Guanshu

    2005-01-01

A simulated face-recognition experiment using enhanced pixelized images was designed and performed for an artificial visual prosthesis. The results of the simulation reveal new characteristics of visual performance under enhanced pixelization and motivate new suggestions for the future design of visual prostheses.

  19. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    PubMed Central

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558

  20. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    PubMed

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
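The paper's Gaussian mixture models learn cue weights from distributional statistics. As a much smaller stand-in for that idea (not the authors' model), the sketch below fuses an auditory and a visual cue estimate by inverse-variance (reliability) weighting, so the less variable cue dominates the combined percept. All sample values are invented for illustration.

```python
import statistics

def combine_cues(auditory_samples, visual_samples):
    """Fuse two noisy cue estimates by weighting each with its reliability
    (inverse variance); the fused estimate is more precise than either cue."""
    mu_a, var_a = statistics.mean(auditory_samples), statistics.variance(auditory_samples)
    mu_v, var_v = statistics.mean(visual_samples), statistics.variance(visual_samples)
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # auditory weight tracks its reliability
    fused_mean = w_a * mu_a + (1 - w_a) * mu_v
    fused_var = 1 / (1 / var_a + 1 / var_v)
    return fused_mean, fused_var

# A noisy auditory cue and a tighter visual cue: the fused estimate lands
# closer to the visual mean and has lower variance than either cue alone.
mean, var = combine_cues([0.8, 1.0, 1.2], [0.4, 0.5, 0.6])
print(round(mean, 2), round(var, 4))  # 0.6 0.008
```

Learning such weights from the input distribution, rather than fixing them, is the developmental question the simulations address.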

  1. Summation of visual motion across eye movements reflects a nonspatial decision mechanism.

    PubMed

    Morris, Adam P; Liu, Charles C; Cropper, Simon J; Forte, Jason D; Krekelberg, Bart; Mattingley, Jason B

    2010-07-21

    Human vision remains perceptually stable even though retinal inputs change rapidly with each eye movement. Although the neural basis of visual stability remains unknown, a recent psychophysical study pointed to the existence of visual feature-representations anchored in environmental rather than retinal coordinates (e.g., "spatiotopic" receptive fields; Melcher and Morrone, 2003). In that study, sensitivity to a moving stimulus presented after a saccadic eye movement was enhanced when preceded by another moving stimulus at the same spatial location before the saccade. The finding is consistent with spatiotopic sensory integration, but it could also have arisen from a probabilistic improvement in performance due to the presence of more than one motion signal for the perceptual decision. Here we show that this statistical advantage accounts completely for summation effects in this task. We first demonstrate that measurements of summation are confounded by noise related to an observer's uncertainty about motion onset times. When this uncertainty is minimized, comparable summation is observed regardless of whether two motion signals occupy the same or different locations in space, and whether they contain the same or opposite directions of motion. These results are incompatible with the tuning properties of motion-sensitive sensory neurons and provide no evidence for a spatiotopic representation of visual motion. Instead, summation in this context reflects a decision mechanism that uses abstract representations of sensory events to optimize choice behavior.
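The "statistical advantage" of having two independent motion signals available for one decision can be illustrated with the textbook probability-summation calculation (a standard formula, not the authors' specific analysis).

```python
def probability_summation(p_signals):
    """Probability of a correct detection when the observer can base the
    decision on any of several independent signals, each detected with
    the given probability."""
    miss_all = 1.0
    for p in p_signals:
        miss_all *= (1.0 - p)  # miss only if every signal is missed
    return 1.0 - miss_all

# Two signals detected at 60% each already push performance to 84%,
# with no sensory integration required.
print(round(probability_summation([0.6, 0.6]), 2))  # 0.84
```

This is why the study's control conditions matter: a sensitivity gain of this size appears for any pair of signals, regardless of whether they share a location or a motion direction.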

  2. Infrared dim and small target detecting and tracking method inspired by Human Visual System

    NASA Astrophysics Data System (ADS)

    Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Shen, Lurong; Bai, Shengjian

    2014-01-01

Detecting and tracking dim and small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and infrared imaging precise guidance. Recently, more and more algorithms based on the Human Visual System (HVS) have been proposed to detect and track infrared dim and small targets. In general, HVS involves at least three mechanisms: the contrast mechanism, visual attention, and eye movement. However, most existing algorithms simulate only one of these mechanisms, which leads to various drawbacks. A novel method combining all three HVS mechanisms is proposed in this paper. First, a group of Difference of Gaussians (DOG) filters, which simulate the contrast mechanism, are used to filter the input image. Second, visual attention, simulated by a Gaussian window, is applied at a point near the target, named the attention point, to further enhance the dim small target. Finally, a Proportional-Integral-Derivative (PID) algorithm is introduced to predict the attention point in the next frame, simulating human eye movement. Experimental results on infrared images with different types of backgrounds demonstrate the efficiency and accuracy of the proposed method in detecting and tracking dim and small targets.
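A rough NumPy sketch of the first two mechanisms (the DOG contrast filter and the Gaussian attention window) is given below; the PID-based attention-point prediction is omitted. The image, kernel sizes, and sigmas are all illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def dog_kernel(size, sigma_c, sigma_s):
    """Difference-of-Gaussians: a narrow excitatory centre minus a broader
    inhibitory surround, a standard model of the retinal contrast mechanism."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    center = np.exp(-(xx**2 + yy**2) / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround

def attention_window(shape, point, sigma):
    """Gaussian weighting centred on the current attention point."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((xx - point[0])**2 + (yy - point[1])**2) / (2 * sigma**2))

# Dim point target on a flat background.
img = np.full((32, 32), 0.2)
img[16, 16] = 0.6

# Contrast mechanism: correlate the image with the DOG kernel
# (the kernel is symmetric, so correlation equals convolution).
kern = dog_kernel(9, 1.0, 2.5)
resp = np.einsum('ijkl,kl->ij', sliding_window_view(img, kern.shape), kern)

# Visual attention: weight the response map by a Gaussian window placed
# near the expected target position.
resp *= attention_window(resp.shape, point=(12, 12), sigma=6.0)

ty, tx = np.unravel_index(np.argmax(resp), resp.shape)
print(ty, tx)  # the peak of the weighted response sits on the target
```

In the full method, the peak location found here would feed the PID predictor, which places the attention window for the next frame.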

  3. Visual Receptive Field Structure of Cortical Inhibitory Neurons Revealed by Two-Photon Imaging Guided Recording

    PubMed Central

    Liu, Bao-hua; Li, Pingyang; Li, Ya-tang; Sun, Yujiao J.; Yanagawa, Yuchio; Obata, Kunihiko; Zhang, Li I.; Tao, Huizhong W.

    2009-01-01

    Synaptic inhibition plays an important role in shaping receptive field (RF) properties in the visual cortex. However, the underlying mechanisms remain not well understood, partly due to difficulties in systematically studying functional properties of cortical inhibitory neurons in vivo. Here, we established two-photon imaging guided cell-attached recordings from genetically labelled inhibitory neurons and nearby “shadowed” excitatory neurons in the primary visual cortex of adult mice. Our results revealed that in layer 2/3, the majority of excitatory neurons exhibited both On and Off spike subfields, with their spatial arrangement varying from being completely segregated to overlapped. On the other hand, most layer 4 excitatory neurons exhibited only one discernable subfield. Interestingly, no RF structure with significantly segregated On and Off subfields was observed for layer 2/3 inhibitory neurons of either the fast-spike or regular-spike type. They predominantly possessed overlapped On and Off subfields with a significantly larger size than the excitatory neurons, and exhibited much weaker orientation tuning. These results from the mouse visual cortex suggest that different from the push-pull model proposed for simple cells, layer 2/3 simple-type neurons with segregated spike On and Off subfields likely receive spatially overlapped inhibitory On and Off inputs. We propose that the phase-insensitive inhibition can enhance the spatial distinctiveness of On and Off subfields through a gain control mechanism. PMID:19710305

  4. Reorganization of Visual Callosal Connections Following Alterations of Retinal Input and Brain Damage

    PubMed Central

    Restani, Laura; Caleo, Matteo

    2016-01-01

    Vision is a very important sensory modality in humans. Visual disorders are numerous and arising from diverse and complex causes. Deficits in visual function are highly disabling from a social point of view and in addition cause a considerable economic burden. For all these reasons there is an intense effort by the scientific community to gather knowledge on visual deficit mechanisms and to find possible new strategies for recovery and treatment. In this review, we focus on an important and sometimes neglected player of the visual function, the corpus callosum (CC). The CC is the major white matter structure in the brain and is involved in information processing between the two hemispheres. In particular, visual callosal connections interconnect homologous areas of visual cortices, binding together the two halves of the visual field. This interhemispheric communication plays a significant role in visual cortical output. Here, we will first review the essential literature on the physiology of the callosal connections in normal vision. The available data support the view that the callosum contributes to both excitation and inhibition to the target hemisphere, with a dynamic adaptation to the strength of the incoming visual input. Next, we will focus on data showing how callosal connections may sense visual alterations and respond to the classical paradigm for the study of visual plasticity, i.e., monocular deprivation (MD). This is a prototypical example of a model for the study of callosal plasticity in pathological conditions (e.g., strabismus and amblyopia) characterized by unbalanced input from the two eyes. We will also discuss the findings of callosal alterations in blind subjects. Noteworthy, we will discuss data showing that inter-hemispheric transfer mediates recovery of visual responsiveness following cortical damage. 
Finally, we will provide an overview of how callosal projections dysfunction could contribute to pathologies such as neglect and occipital epilepsy. A particular focus will be on reviewing noninvasive brain stimulation techniques and optogenetic approaches that allow to selectively manipulate callosal function and to probe its involvement in cortical processing and plasticity. Overall, the data indicate that experience can potently impact on transcallosal connectivity, and that the callosum itself is crucial for plasticity and recovery in various disorders of the visual pathway. PMID:27895559

  5. Vision drives accurate approach behavior during prey capture in laboratory mice

    PubMed Central

    Hoy, Jennifer L.; Yavorska, Iryna; Wehr, Michael; Niell, Cristopher M.

    2016-01-01

The ability to genetically identify and manipulate neural circuits in the mouse is rapidly advancing our understanding of visual processing in the mammalian brain [1,2]. However, studies investigating the circuitry that underlies complex ethologically-relevant visual behaviors in the mouse have been primarily restricted to fear responses [3–5]. Here, we show that a laboratory strain of mouse (Mus musculus, C57BL/6J) robustly pursues, captures and consumes live insect prey, and that vision is necessary for mice to perform the accurate orienting and approach behaviors leading to capture. Specifically, we differentially perturbed visual or auditory input in mice and determined that visual input is required for accurate approach, allowing maintenance of bearing to within 11 degrees of the target on average during pursuit. While mice were able to capture prey without vision, the accuracy of their approaches and capture rate dramatically declined. To better explore the contribution of vision to this behavior, we developed a simple assay that isolated visual cues and simplified analysis of the visually guided approach. Together, our results demonstrate that laboratory mice are capable of exhibiting dynamic and accurate visually-guided approach behaviors, and provide a means to estimate the visual features that drive behavior within an ethological context. PMID:27773567

  6. Early Binocular Input Is Critical for Development of Audiovisual but Not Visuotactile Simultaneity Perception.

    PubMed

    Chen, Yi-Chuan; Lewis, Terri L; Shore, David I; Maurer, Daphne

    2017-02-20

    Temporal simultaneity provides an essential cue for integrating multisensory signals into a unified perception. Early visual deprivation, in both animals and humans, leads to abnormal neural responses to audiovisual signals in subcortical and cortical areas [1-5]. Behavioral deficits in integrating complex audiovisual stimuli in humans are also observed [6, 7]. It remains unclear whether early visual deprivation affects visuotactile perception similarly to audiovisual perception and whether the consequences for either pairing differ after monocular versus binocular deprivation [8-11]. Here, we evaluated the impact of early visual deprivation on the perception of simultaneity for audiovisual and visuotactile stimuli in humans. We tested patients born with dense cataracts in one or both eyes that blocked all patterned visual input until the cataractous lenses were removed and the affected eyes fitted with compensatory contact lenses (mean duration of deprivation = 4.4 months; range = 0.3-28.8 months). Both monocularly and binocularly deprived patients demonstrated lower precision in judging audiovisual simultaneity. However, qualitatively different outcomes were observed for the two patient groups: the performance of monocularly deprived patients matched that of young children at immature stages, whereas that of binocularly deprived patients did not match any stage in typical development. Surprisingly, patients performed normally in judging visuotactile simultaneity after either monocular or binocular deprivation. Therefore, early binocular input is necessary to develop normal neural substrates for simultaneity perception of visual and auditory events but not visual and tactile events. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Receptive Field Vectors of Genetically-Identified Retinal Ganglion Cells Reveal Cell-Type-Dependent Visual Functions

    PubMed Central

    Katz, Matthew L.; Viney, Tim J.; Nikolic, Konstantin

    2016-01-01

    Sensory stimuli are encoded by diverse kinds of neurons but the identities of the recorded neurons that are studied are often unknown. We explored in detail the firing patterns of eight previously defined genetically-identified retinal ganglion cell (RGC) types from a single transgenic mouse line. We first introduce a new technique of deriving receptive field vectors (RFVs) which utilises a modified form of mutual information (“Quadratic Mutual Information”). We analysed the firing patterns of RGCs during presentation of short duration (~10 second) complex visual scenes (natural movies). We probed the high dimensional space formed by the visual input for a much smaller dimensional subspace of RFVs that give the most information about the response of each cell. The new technique is very efficient and fast and the derivation of novel types of RFVs formed by the natural scene visual input was possible even with limited numbers of spikes per cell. This approach enabled us to estimate the 'visual memory' of each cell type and the corresponding receptive field area by calculating Mutual Information as a function of the number of frames and radius. Finally, we made predictions of biologically relevant functions based on the RFVs of each cell type. RGC class analysis was complemented with results for the cells’ response to simple visual input in the form of black and white spot stimulation, and their classification on several key physiological metrics. Thus RFVs lead to predictions of biological roles based on limited data and facilitate analysis of sensory-evoked spiking data from defined cell types. PMID:26845435

  8. Lateral interactions in the outer retina

    PubMed Central

    Thoreson, Wallace B.; Mangel, Stuart C.

    2012-01-01

    Lateral interactions in the outer retina, particularly negative feedback from horizontal cells to cones and direct feed-forward input from horizontal cells to bipolar cells, play a number of important roles in early visual processing, such as generating center-surround receptive fields that enhance spatial discrimination. These circuits may also contribute to post-receptoral light adaptation and the generation of color opponency. In this review, we examine the contributions of horizontal cell feedback and feed-forward pathways to early visual processing. We begin by reviewing the properties of bipolar cell receptive fields, especially with respect to modulation of the bipolar receptive field surround by the ambient light level and to the contribution of horizontal cells to the surround. We then review evidence for and against three proposed mechanisms for negative feedback from horizontal cells to cones: 1) GABA release by horizontal cells, 2) ephaptic modulation of the cone pedicle membrane potential generated by currents flowing through hemigap junctions in horizontal cell dendrites, and 3) modulation of cone calcium currents (ICa) by changes in synaptic cleft proton levels. We also consider evidence for the presence of direct horizontal cell feed-forward input to bipolar cells and discuss a possible role for GABA at this synapse. We summarize proposed functions of horizontal cell feedback and feed-forward pathways. Finally, we examine the mechanisms and functions of two other forms of lateral interaction in the outer retina: negative feedback from horizontal cells to rods and positive feedback from horizontal cells to cones. PMID:22580106

  9. Natural asynchronies in audiovisual communication signals regulate neuronal multisensory interactions in voice-sensitive cortex

    PubMed Central

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K.; Petkov, Christopher I.

    2015-01-01

    When social animals communicate, the onset of informative content in one modality varies considerably relative to the other, such as when visual orofacial movements precede a vocalization. These naturally occurring asynchronies do not disrupt intelligibility or perceptual coherence. However, they occur on time scales where they likely affect integrative neuronal activity in ways that have remained unclear, especially for hierarchically downstream regions in which neurons exhibit temporally imprecise but highly selective responses to communication signals. To address this, we exploited naturally occurring face- and voice-onset asynchronies in primate vocalizations. Using these as stimuli we recorded cortical oscillations and neuronal spiking responses from functional MRI (fMRI)-localized voice-sensitive cortex in the anterior temporal lobe of macaques. We show that the onset of the visual face stimulus resets the phase of low-frequency oscillations, and that the face–voice asynchrony affects the prominence of two key types of neuronal multisensory responses: enhancement or suppression. Our findings show a three-way association between temporal delays in audiovisual communication signals, phase-resetting of ongoing oscillations, and the sign of multisensory responses. The results reveal how natural onset asynchronies in cross-sensory inputs regulate network oscillations and neuronal excitability in the voice-sensitive cortex of macaques, a suggested animal model for human voice areas. These findings also advance predictions on the impact of multisensory input on neuronal processes in face areas and other brain regions. PMID:25535356

  10. Last but not least.

    PubMed

    Shapiro, Arthur G; Hamburger, Kai

    2007-01-01

    A central tenet of Gestalt psychology is that the visual scene can be separated into figure and ground. The two illusions we present demonstrate that Gestalt processes can group spatial contrast information that cuts across the figure/ground separation. This finding suggests that visual processes that organise the visual scene do not necessarily require structural segmentation as their primary input.

  11. Online Impact Prioritization of Essential Climate Variables on Climate Change

    NASA Astrophysics Data System (ADS)

    Forsythe-Newell, S. P.; Barkstrom, B. B.; Roberts, K. P.

    2007-12-01

The National Oceanic & Atmospheric Administration (NOAA)'s NCDC Scientific Data Stewardship (SDS) Team has developed an online prototype capable of displaying the "big picture" perspective of all Essential Climate Variable (ECV) impacts on society and their value to the IPCC. This prototype ECV-Model provides the ability to visualize global ECV information, with options to drill down in great detail. It offers a quantifiable prioritization of ECV impacts that may significantly enhance collaboration on dealing effectively with climate change. The ECV-Model prototype assures anonymity and provides an online input mechanism for subject matter experts and decision makers to access, review and submit: (1) rankings of ECVs, (2) new ECVs and associated impact categories and (3) feedback about ECVs, satellites, etc. Input and feedback are vetted by experts before changes or additions are implemented online. The SDS prototype also provides an intuitive one-stop web site that displays past, current and planned satellite launches, along with general and detailed information in conjunction with imagery. NCDC's version 1.0 release will be available to the public and provide an easy "at-a-glance" interface to rapidly identify gaps and overlaps among the satellites and associated instruments monitoring climate change ECVs. SDS version 1.1 will enhance the depiction of gaps and overlaps for in-situ and satellite instruments related to ECVs. NOAA's SDS model empowers decision makers and the scientific community to rapidly identify weaknesses and strengths in the monitoring of climate change ECVs and may significantly enhance collaboration.

  12. ERGONOMICS ABSTRACTS 48347-48982.

    ERIC Educational Resources Information Center

    Ministry of Technology, London (England). Warren Spring Lab.

In this collection of ergonomics abstracts and annotations the following areas of concern are represented: general references; methods, facilities, and equipment relating to ergonomics; systems of man and machines; visual, auditory, and other sensory inputs and processes (including speech and intelligibility); input channels; body measurements,…

  13. Right-Brained Kids in Left-Brained Schools

    ERIC Educational Resources Information Center

    Hunter, Madeline

    1976-01-01

    Students who learn well through left hemisphere brain input (oral and written) have minimal practice in using the right hemisphere, while those who are more proficient in right hemisphere (visual) input processing are handicapped by having to use primarily their left brains. (MB)

  14. State of the art in nuclear telerobotics: focus on the man/machine connection

    NASA Astrophysics Data System (ADS)

    Greaves, Amna E.

    1995-12-01

The interface between the human controller and the remotely operated device is at the crux of telerobotic investigation today. This human-to-machine connection is the means by which we communicate our commands to the device, as well as the medium for decision-critical feedback to the operator. The amount of information transferred through the user interface is growing, a direct result of the need to support added complexities as well as a rapidly expanding domain of applications. A user interface, or UI, is therefore subject to increasing demands to present information to the user in a meaningful manner. Virtual reality and multi-degree-of-freedom input devices lend us the ability to augment the man/machine interface and handle burgeoning amounts of data in a more intuitive and anthropomorphically correct manner. Along with the aid of 3-D input and output devices, several visual tools can be employed as part of a graphical UI to enhance and accelerate our comprehension of the data being presented. An advanced UI featuring these improvements would thus reduce teleoperator fatigue, increase safety, facilitate learning, augment control, and potentially reduce task time. This paper investigates the cutting-edge concepts and enhancements that lead to the next generation of telerobotic interface systems.

  15. Input-dependent modulation of MEG gamma oscillations reflects gain control in the visual cortex.

    PubMed

    Orekhova, Elena V; Sysoeva, Olga V; Schneiderman, Justin F; Lundström, Sebastian; Galuta, Ilia A; Goiaeva, Dzerasa E; Prokofyev, Andrey O; Riaz, Bushra; Keeler, Courtney; Hadjikhani, Nouchine; Gillberg, Christopher; Stroganova, Tatiana A

    2018-05-31

Gamma-band oscillations arise from the interplay between neural excitation (E) and inhibition (I) and may provide a non-invasive window into the state of cortical circuitry. A bell-shaped modulation of gamma response power with increasing intensity of sensory input has been observed in animals and is thought to reflect neural gain control. Here we sought a similar input-output relationship in humans with MEG by modulating the intensity of visual stimulation through changes in the velocity/temporal frequency of visual motion. In the first experiment, adult participants observed static and moving gratings. The frequency of the MEG gamma response increased monotonically with motion velocity, whereas power followed a bell shape. In the second experiment, in a large group of children and adults, we found that despite drastic developmental changes in the frequency and power of gamma oscillations, the relative suppression at high motion velocities was scaled to the same range of values across the life-span. In light of animal and modeling studies, the modulation of gamma power and frequency at high stimulation intensities characterizes the capacity of inhibitory neurons to counterbalance increasing excitation in visual networks. Gamma suppression may thus provide a non-invasive measure of inhibition-based gain control in the healthy and diseased brain.

  16. From attentional gating in macaque primary visual cortex to dyslexia in humans.

    PubMed

    Vidyasagar, T R

    2001-01-01

Selective attention is an important aspect of brain function that we need in coping with the immense and constant barrage of sensory information. One model of attention (Feature Integration Theory), which suggests an early selection of the spatial locations of objects via an attentional spotlight, would also solve the 'binding problem' (that is, how do the different attributes of each object get correctly bound together?). Our experiments have demonstrated modulation at specific locations of interest at the level of the primary visual cortex in both visual discrimination and memory tasks, where the actual locations of the targets were also important for performing the task. It is suggested that the feedback mediating the modulation arises from the posterior parietal cortex, which would also be consistent with its known role in attentional control. In primates, the magnocellular (M) and parvocellular (P) pathways are the two major streams of inputs from the retina, carrying distinctly different types of information, and they remain fairly segregated in their projections to the primary visual cortex and further into the extra-striate regions. The P inputs go mainly into the ventral (temporal) stream, while the dorsal (parietal) stream is dominated by M inputs. A theory of attentional gating is proposed here in which the M-dominated dorsal stream gates the P inputs into the ventral stream. This framework is used to provide a neural explanation of the processes involved in reading and in learning to read. This scheme also explains how a magnocellular deficit could cause the common reading impairment, dyslexia.

  17. The role of visuohaptic experience in visually perceived depth.

    PubMed

    Ho, Yun-Xian; Serwe, Sascha; Trommershäuser, Julia; Maloney, Laurence T; Landy, Michael S

    2009-06-01

    Berkeley suggested that "touch educates vision," that is, haptic input may be used to calibrate visual cues to improve visual estimation of properties of the world. Here, we test whether haptic input may be used to "miseducate" vision, causing observers to rely more heavily on misleading visual cues. Human subjects compared the depth of two cylindrical bumps illuminated by light sources located at different positions relative to the surface. As in previous work using judgments of surface roughness, we find that observers judge bumps to have greater depth when the light source is located eccentric to the surface normal (i.e., when shadows are more salient). Following several sessions of visual judgments of depth, subjects then underwent visuohaptic training in which haptic feedback was artificially correlated with the "pseudocue" of shadow size and artificially decorrelated with disparity and texture. Although there were large individual differences, almost all observers demonstrated integration of haptic cues during visuohaptic training. For some observers, subsequent visual judgments of bump depth were unaffected by the training. However, for 5 of 12 observers, training significantly increased the weight given to pseudocues, causing subsequent visual estimates of shape to be less veridical. We conclude that haptic information can be used to reweight visual cues, putting more weight on misleading pseudocues, even when more trustworthy visual cues are available in the scene.

  18. Effect of rehabilitation worker input on visual function outcomes in individuals with low vision: study protocol for a randomised controlled trial.

    PubMed

    Acton, Jennifer H; Molik, Bablin; Binns, Alison; Court, Helen; Margrain, Tom H

    2016-02-24

    Visual Rehabilitation Officers help people with a visual impairment maintain their independence. This intervention adopts a flexible, goal-centred approach, which may include training in mobility, use of optical and non-optical aids, and performance of activities of daily living. Although Visual Rehabilitation Officers are an integral part of the low vision service in the United Kingdom, evidence that they are effective is lacking. The purpose of this exploratory trial is to estimate the impact of a Visual Rehabilitation Officer on self-reported visual function, psychosocial and quality-of-life outcomes in individuals with low vision. In this exploratory, assessor-masked, parallel group, randomised controlled trial, participants will be allocated either to receive home visits from a Visual Rehabilitation Officer (n = 30) or to a waiting list control group (n = 30) in a 1:1 ratio. Adult volunteers with a visual impairment, who have been identified as needing rehabilitation officer input by a social worker, will take part. Those with an urgent need for a Visual Rehabilitation Officer or who have a cognitive impairment will be excluded. The primary outcome measure will be self-reported visual function (48-item Veterans Affairs Low Vision Visual Functioning Questionnaire). Secondary outcome measures will include psychological and quality-of-life metrics: the Patient Health Questionnaire (PHQ-9), the Warwick-Edinburgh Mental Well-being Scale (WEMWBS), the Adjustment to Age-related Visual Loss Scale (AVL-12), the Standardised Health-related Quality of Life Questionnaire (EQ-5D) and the UCLA Loneliness Scale. The interviewer collecting the outcomes will be masked to the group allocations. The analysis will be undertaken on a complete case and intention-to-treat basis. Analysis of covariance (ANCOVA) will be applied to follow-up questionnaire scores, with the baseline score as a covariate. 
This trial is expected to provide robust estimates of the intervention effect size. The data will be used to design a large-scale randomised controlled trial to fully evaluate the Visual Rehabilitation Officer intervention. A rigorous evaluation of Rehabilitation Officer input is vital to direct future low vision rehabilitation strategy and to help direct government resources. The trial was registered with the ISRCTN registry (ISRCTN44807874) on 9 March 2015.
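The planned analysis (ANCOVA applied to follow-up questionnaire scores with the baseline score as a covariate) amounts to an ordinary least-squares regression with a group indicator. The sketch below uses invented synthetic scores purely to illustrate the baseline adjustment; it is not trial data:

```python
import numpy as np

def ancova_group_effect(baseline, followup, group):
    """Adjusted group effect from an ANCOVA-style model:
    followup ~ intercept + baseline (covariate) + group (0/1 indicator).
    Returns the coefficient on the group indicator, i.e. the estimated
    intervention effect adjusted for baseline scores."""
    X = np.column_stack([np.ones(len(baseline)), baseline, group])
    beta, *_ = np.linalg.lstsq(X, followup, rcond=None)
    return beta[2]

# Synthetic example with a true adjusted group effect of +3 points.
rng = np.random.default_rng(42)
baseline = rng.normal(50.0, 10.0, size=200)
group = np.repeat([0.0, 1.0], 100)
followup = 5.0 + 0.8 * baseline + 3.0 * group + rng.normal(0.0, 1.0, size=200)
print(round(ancova_group_effect(baseline, followup, group), 2))
```

The recovered coefficient lands near +3, the simulated intervention effect, because conditioning on the baseline score removes the variance the two measurements share.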

  19. A Biophysical Neural Model To Describe Spatial Visual Attention

    NASA Astrophysics Data System (ADS)

    Hugues, Etienne; José, Jorge V.

    2008-02-01

Visual scenes contain enormous spatial and temporal information that is transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of neural activity in the visual area known as V4 when the animal pays attention directly to a particular stimulus location. The nature of the attentional input to V4, however, remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations without attention. To reproduce the known neuronal response variability, we found that the neurons should receive approximately equal, or balanced, levels of excitatory and inhibitory input, at levels as high as in in vivo conditions. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations.

  20. A Biophysical Neural Model To Describe Spatial Visual Attention

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hugues, Etienne; Jose, Jorge V.

    2008-02-14

Visual scenes contain enormous spatial and temporal information that is transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of neural activity in the visual area known as V4 when the animal pays attention directly to a particular stimulus location. The nature of the attentional input to V4, however, remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations without attention. To reproduce the known neuronal response variability, we found that the neurons should receive approximately equal, or balanced, levels of excitatory and inhibitory input, at levels as high as in in vivo conditions. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations.

  1. Exploring the potential of analysing visual search behaviour data using FROC (free-response receiver operating characteristic) method: an initial study

    NASA Astrophysics Data System (ADS)

    Dong, Leng; Chen, Yan; Dias, Sarah; Stone, William; Dias, Joseph; Rout, John; Gale, Alastair G.

    2017-03-01

Visual search techniques and FROC analysis have been widely used in radiology to understand perceptual behaviour and diagnostic performance in medical imaging. The potential of exploiting the advantages of both methodologies is of great interest to medical researchers. In this study, eye tracking data from eight dental practitioners were investigated; the visual search measures and their analyses are considered here. Each participant interpreted 20 dental radiographs chosen by an expert dental radiologist. Various eye movement measurements were obtained based on image area-of-interest (AOI) information. FROC analysis was then carried out using these eye movement measurements as a direct input source, and the performance of FROC methods with different input parameters was tested. The results showed significant differences in eye-movement-based FROC measures between groups with different experience levels: the area under the curve (AUC) was higher for the experienced group on the fixation and dwell-time measures. Positive correlations were also found between AUC scores from eye-movement-based FROC and from rating-based FROC. FROC analysis using eye movement measurements as input variables can thus act as a performance indicator for assessment in medical image interpretation and for evaluating training procedures. Such analyses point to new ways of combining eye movement data and FROC methods, providing an alternative dimension for assessing performance and visual search behaviour in medical imaging perceptual tasks.
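The study's core move, feeding eye movement measurements such as dwell time into an ROC-style analysis as the "rating" input, can be illustrated with a much simpler two-class sketch. Everything below is hypothetical: the dwell-time numbers are invented, and the full FROC method also credits lesion localisation, which this toy AUC does not:

```python
def auc_from_ratings(positives, negatives):
    """Area under the ROC curve for a rating-type score, computed as the
    normalised Mann-Whitney statistic: P(pos > neg) + 0.5 * P(tie)."""
    wins = ties = 0
    for p in positives:
        for n in negatives:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(positives) * len(negatives))

# Hypothetical dwell times (ms) on lesion-present vs lesion-free AOIs.
dwell_lesion = [820, 640, 910, 705]
dwell_normal = [310, 700, 250, 430]
print(auc_from_ratings(dwell_lesion, dwell_normal))  # → 0.9375
```

An AUC of 0.5 means the dwell times carry no diagnostic signal; values near 1.0 mean readers dwell systematically longer on lesion-bearing regions, the pattern the study reports for experienced readers.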

  2. Slow Feature Analysis on Retinal Waves Leads to V1 Complex Cells

    PubMed Central

    Dähne, Sven; Wilbert, Niko; Wiskott, Laurenz

    2014-01-01

    The developing visual system of many mammalian species is partially structured and organized even before the onset of vision. Spontaneous neural activity, which spreads in waves across the retina, has been suggested to play a major role in these prenatal structuring processes. Recently, it has been shown that when employing an efficient coding strategy, such as sparse coding, these retinal activity patterns lead to basis functions that resemble optimal stimuli of simple cells in primary visual cortex (V1). Here we present the results of applying a coding strategy that optimizes for temporal slowness, namely Slow Feature Analysis (SFA), to a biologically plausible model of retinal waves. Previously, SFA has been successfully applied to model parts of the visual system, most notably in reproducing a rich set of complex-cell features by training SFA with quasi-natural image sequences. In the present work, we obtain SFA units that share a number of properties with cortical complex-cells by training on simulated retinal waves. The emergence of two distinct properties of the SFA units (phase invariance and orientation tuning) is thoroughly investigated via control experiments and mathematical analysis of the input-output functions found by SFA. The results support the idea that retinal waves share relevant temporal and spatial properties with natural visual input. Hence, retinal waves seem suitable training stimuli to learn invariances and thereby shape the developing early visual system such that it is best prepared for coding input from the natural world. PMID:24810948
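The SFA step itself can be illustrated with a minimal linear version. This sketch is not the paper's pipeline (which trains on simulated retinal waves and uses a nonlinear expanded feature space); it only demonstrates the slowness objective, recovering a slow sinusoid from a toy linear mixture:

```python
import numpy as np

def linear_sfa(x, n_components=1):
    """Minimal linear Slow Feature Analysis.
    x: (T, D) array of input signals over time.
    Returns the n_components slowest output signals (T, n_components)
    and the derivative variances (slowness values) of those outputs."""
    x = x - x.mean(axis=0)
    # Step 1: whiten the input (unit variance, decorrelated).
    evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
    z = x @ (evecs / np.sqrt(evals))
    # Step 2: among unit-variance directions, find those whose temporal
    # derivative has the least variance, i.e. the slowest features.
    devals, devecs = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
    return z @ devecs[:, :n_components], devals[:n_components]

# Toy input: a linear mixture of a slow and a fast sinusoid.
t = np.linspace(0.0, 8.0 * np.pi, 2000)
sources = np.column_stack([np.sin(t), np.sin(25.0 * t)])
mixed = sources @ np.array([[1.0, 0.5], [0.6, 1.0]])
slow_feature, slowness = linear_sfa(mixed, n_components=1)
# The slowest feature recovers the slow sinusoid (up to sign and scale).
print(abs(np.corrcoef(slow_feature[:, 0], sources[:, 0])[0, 1]))
```

The printed correlation is close to 1: optimising for temporal slowness pulls the slowly varying source out of the mixture, which is the same objective the paper applies to retinal-wave sequences.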

  3. Proceedings of the Lake Wilderness Attention Conference Held at Seattle Washington, 22-24 September 1980.

    DTIC Science & Technology

    1981-07-10

Pohlmann, L. D. Some models of observer behavior in two-channel auditory signal detection. Perception and Psychophysics, 1973, 14, 101-109. Spelke…spatial), and processing modalities (auditory versus visual input, vocal versus manual response). If validated, this configuration has both theoretical…conclusion that auditory and visual processes will compete, as will spatial and verbal (albeit to a lesser extent than auditory-auditory, visual-visual

  4. The effect of linguistic and visual salience in visual world studies.

    PubMed

    Cavicchio, Federica; Melcher, David; Poesio, Massimo

    2014-01-01

Research using the visual world paradigm has demonstrated that visual input has a rapid effect on language interpretation tasks such as reference resolution and, conversely, that linguistic material (including verbs, prepositions and adjectives) can influence fixations to potential referents. More recent research has started to explore how this effect of linguistic input on fixations is mediated by properties of the visual stimulus, in particular by visual salience. In the present study we further explored the role of salience in the visual world paradigm by manipulating language-driven salience and visual salience. Specifically, we tested how linguistic salience (i.e., the greater accessibility of linguistically introduced entities) and visual salience (bottom-up, attention-grabbing visual aspects) interact. We recorded participants' eye movements during a MapTask, asking them to look from landmark to landmark displayed upon a map while hearing direction-giving instructions. The landmarks were of comparable size and color, except in the Visual Salience condition, in which one landmark had been made more visually salient. In the Linguistic Salience conditions, the instructions included references to an object not on the map. Response times and fixations were recorded. Visual Salience influenced the time course of fixations at both the beginning and the end of the trial but did not show a significant effect on response times. Linguistic Salience reduced response times and increased fixations to landmarks when they were associated with a linguistically salient entity not itself present on the map. When the target landmark was both visually and linguistically salient, it was fixated longer, but fixations were quicker when the target item was linguistically salient only. Our results suggest that the two types of salience work in parallel and that linguistic salience affects fixations even when the entity is not visually present.

  5. Effects of Textual Enhancement and Input Enrichment on L2 Development

    ERIC Educational Resources Information Center

    Rassaei, Ehsan

    2015-01-01

    Research on second language (L2) acquisition has recently sought to include formal instruction into second and foreign language classrooms in a more unobtrusive and implicit manner. Textual enhancement and input enrichment are two techniques which are aimed at drawing learners' attention to specific linguistic features in input and at the same…

  6. Integrating Information from Different Senses in the Auditory Cortex

    PubMed Central

    King, Andrew J.; Walker, Kerry M.M.

    2015-01-01

    Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies. PMID:22798035

  7. Representation and disconnection in imaginal neglect.

    PubMed

    Rode, G; Cotton, F; Revol, P; Jacquin-Courtois, S; Rossetti, Y; Bartolomeo, P

    2010-08-01

Patients with neglect fail to detect, orient, or respond to stimuli from a spatially confined region, usually on their left side. Often, the presence of perceptual input increases left omissions, while sensory deprivation decreases them, possibly by removing attention-catching right-sided stimuli (Bartolomeo, 2007). However, such an influence of visual deprivation on representational neglect was not observed in patients while they were imagining a map of France (Rode et al., 2007). Therefore, these patients with imaginal neglect either failed to generate the left side of mental images (Bisiach & Luzzatti, 1978), or suffered from a co-occurrence of deficits in automatic (bottom-up) and voluntary (top-down) orienting of attention. However, in Rode et al.'s experiment visual input was not directly relevant to the task; moreover, distraction from visual input might primarily manifest itself when representation guides somatomotor actions, beyond those involved in the generation and mental exploration of an internal map (Thomas, 1999). To explore these possibilities, we asked a patient with right hemisphere damage, R.D., to explore visual and imagined versions of a map of France in three conditions: (1) 'imagine the map in your mind' (imaginal); (2) 'describe a real map' (visual); and (3) 'list the names of French towns' (propositional). For the imaginal and visual conditions, verbal and manual pointing responses were collected; the task was also given before and after mental rotation of the map by 180 degrees. R.D. mentioned more towns on the right side of the map in the imaginal and visual conditions, but showed no representational deficit in the propositional condition.
The rightward inner exploration bias in the imaginal and visual conditions was similar in magnitude and was not influenced by mental rotation or response type (verbal responses or manual pointing to locations on a map), thus suggesting that the representational deficit was robust and independent of perceptual input in R.D. Structural and diffusion MRI demonstrated damage to several white matter tracts in the right hemisphere and to the splenium of corpus callosum. A second right-brain damaged patient (P.P.), who showed signs of visual but not imaginal neglect, had damage to the same intra-hemispheric tracts, but the callosal connections were spared. Imaginal neglect in R.D. may result from fronto-parietal dysfunction impairing orientation towards left-sided items and posterior callosal disconnection preventing the symmetrical processing of spatial information from long-term memory. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  8. An egalitarian network model for the emergence of simple and complex cells in visual cortex

    PubMed Central

    Tao, Louis; Shelley, Michael; McLaughlin, David; Shapley, Robert

    2004-01-01

    We explain how simple and complex cells arise in a large-scale neuronal network model of the primary visual cortex of the macaque. Our model consists of ≈4,000 integrate-and-fire, conductance-based point neurons, representing the cells in a small, 1-mm2 patch of an input layer of the primary visual cortex. In the model the local connections are isotropic and nonspecific, and convergent input from the lateral geniculate nucleus confers cortical cells with orientation and spatial phase preference. The balance between lateral connections and lateral geniculate nucleus drive determines whether individual neurons in this recurrent circuit are simple or complex. The model reproduces qualitatively the experimentally observed distributions of both extracellular and intracellular measures of simple and complex response. PMID:14695891
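The building block of such a network, an integrate-and-fire, conductance-based point neuron, can be sketched as follows. The parameter values and the noisy-conductance drive below are generic textbook choices for illustration, not those of the paper's model:

```python
import numpy as np

def simulate_cond_lif(g_exc, g_inh, t_max=1.0, dt=1e-4, seed=0):
    """One conductance-based integrate-and-fire neuron driven by noisy
    excitatory and inhibitory conductances (means g_exc, g_inh, in siemens).
    Returns the number of spikes fired in t_max seconds."""
    rng = np.random.default_rng(seed)
    C, g_leak = 200e-12, 10e-9                    # farads, siemens
    E_leak, E_exc, E_inh = -70e-3, 0.0, -80e-3    # reversal potentials (V)
    V_th, V_reset = -50e-3, -60e-3                # threshold and reset (V)
    V, spikes = E_leak, 0
    for _ in range(int(t_max / dt)):
        # Conductances fluctuate around their means (clipped at zero).
        ge = max(g_exc * (1.0 + 0.5 * rng.standard_normal()), 0.0)
        gi = max(g_inh * (1.0 + 0.5 * rng.standard_normal()), 0.0)
        # Forward-Euler step of C dV/dt = sum of conductance-driven currents.
        dV = (g_leak * (E_leak - V) + ge * (E_exc - V) + gi * (E_inh - V)) / C
        V += dt * dV
        if V >= V_th:            # threshold crossing: emit a spike and reset
            V, spikes = V_reset, spikes + 1
    return spikes

# Spike counts with combined excitatory/inhibitory drive vs inhibition alone.
print(simulate_cond_lif(10e-9, 8e-9), simulate_cond_lif(0.0, 8e-9))
```

With both conductances active, the membrane hovers near threshold and fires irregularly; with inhibition alone it cannot reach threshold at all, which is the qualitative logic behind balancing excitatory and inhibitory drive in such models.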

  9. Effects of Length of Retention Interval on Proactive Interference in Short-Term Visual Memory

    ERIC Educational Resources Information Center

    Meudell, Peter R.

    1977-01-01

    These experiments show two things: (a) In visual memory, long-term interference on a current item from items previously stored only seems to occur when the current item's retention interval is relatively long, and (b) the visual code appears to decay rapidly, reaching asymptote within 3 seconds of input in the presence of an interpolated task.…

  10. Training-Induced Recovery of Low-Level Vision Followed by Mid-Level Perceptual Improvements in Developmental Object and Face Agnosia

    ERIC Educational Resources Information Center

    Lev, Maria; Gilaie-Dotan, Sharon; Gotthilf-Nezri, Dana; Yehezkel, Oren; Brooks, Joseph L.; Perry, Anat; Bentin, Shlomo; Bonneh, Yoram; Polat, Uri

    2015-01-01

    Long-term deprivation of normal visual inputs can cause perceptual impairments at various levels of visual function, from basic visual acuity deficits, through mid-level deficits such as contour integration and motion coherence, to high-level face and object agnosia. Yet it is unclear whether training during adulthood, at a post-developmental…

  11. Visual-perceptual-kinesthetic inputs on influencing writing performances in children with handwriting difficulties.

    PubMed

    Tse, Linda F L; Thanapalan, Kannan C; Chan, Chetwyn C H

    2014-02-01

This study investigated the role of visual-perceptual input in writing Chinese characters among senior school-aged children with handwriting difficulties (CHD). The participants were 27 CHD (9-11 years old) and 61 normally developing controls. There were three writing conditions: copying, and dictation with or without visual feedback. The motor-free subtests of the Developmental Test of Visual Perception (DTVP-2) were administered. The CHD group showed significantly slower mean speeds of character production and lower legibility of produced characters than the control group in all writing conditions (ps < 0.001). Legibility deteriorated significantly from copying to dictation without visual feedback; nevertheless, the Group by Condition interaction effect was not statistically significant. Only the position-in-space subtest of the DTVP-2 was significantly correlated with legibility among CHD (r = -0.62, p = 0.001). Poor legibility seems to be related to a less-intact spatial representation of the characters in working memory, which can be rectified by viewing the characters during writing. Visual feedback regarding one's own actions in writing can also improve legibility of characters among these children. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Design by Dragging: An Interface for Creative Forward and Inverse Design with Simulation Ensembles

    PubMed Central

    Coffey, Dane; Lin, Chi-Lun; Erdman, Arthur G.; Keefe, Daniel F.

    2014-01-01

    We present an interface for exploring large design spaces as encountered in simulation-based engineering, design of visual effects, and other tasks that require tuning parameters of computationally-intensive simulations and visually evaluating results. The goal is to enable a style of design with simulations that feels as-direct-as-possible so users can concentrate on creative design tasks. The approach integrates forward design via direct manipulation of simulation inputs (e.g., geometric properties, applied forces) in the same visual space with inverse design via “tugging” and reshaping simulation outputs (e.g., scalar fields from finite element analysis (FEA) or computational fluid dynamics (CFD)). The interface includes algorithms for interpreting the intent of users’ drag operations relative to parameterized models, morphing arbitrary scalar fields output from FEA and CFD simulations, and in-place interactive ensemble visualization. The inverse design strategy can be extended to use multi-touch input in combination with an as-rigid-as-possible shape manipulation to support rich visual queries. The potential of this new design approach is confirmed via two applications: medical device engineering of a vacuum-assisted biopsy device and visual effects design using a physically based flame simulation. PMID:24051845

  13. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution

    PubMed Central

    Hertz, Uri; Amedi, Amir

    2015-01-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756

  14. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution.

    PubMed

    Hertz, Uri; Amedi, Amir

    2015-08-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. © The Author 2014. Published by Oxford University Press.

  15. GRAPEVINE: Grids about anything by Poisson's equation in a visually interactive networking environment

    NASA Technical Reports Server (NTRS)

    Sorenson, Reese L.; McCann, Karen

    1992-01-01

    A proven 3-D multiple-block elliptic grid generator, designed to run in 'batch mode' on a supercomputer, is improved by the creation of a modern graphical user interface (GUI) running on a workstation. The two parts are connected in real time by a network. The resultant system offers a significant speedup in the process of preparing and formatting input data and the ability to watch the grid solution converge by replotting the grid at each iteration step. The result is a reduction in user time and CPU time required to generate the grid and an enhanced understanding of the elliptic solution process. This software system, called GRAPEVINE, is described, and certain observations are made concerning the creation of such software.
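    The elliptic approach behind such generators can be illustrated with a minimal sketch: interior grid nodes are relaxed toward a solution of Laplace's equation (Poisson's equation with zero forcing), with boundary nodes held fixed. This toy assumes a small 2-D single-block grid; GRAPEVINE itself adds forcing control functions, multiple blocks, and 3-D:

```python
# Minimal 2-D elliptic grid smoothing: interior node coordinates are
# relaxed toward a solution of Laplace's equation (the zero-forcing
# case of the Poisson-based approach used by elliptic grid generators),
# while boundary nodes stay fixed.

def smooth_grid(x, y, iterations=200):
    ni, nj = len(x), len(x[0])
    for _ in range(iterations):
        for i in range(1, ni - 1):
            for j in range(1, nj - 1):
                # Gauss-Seidel average of the four neighbours
                x[i][j] = 0.25 * (x[i-1][j] + x[i+1][j] + x[i][j-1] + x[i][j+1])
                y[i][j] = 0.25 * (y[i-1][j] + y[i+1][j] + y[i][j-1] + y[i][j+1])
    return x, y

# Start from a 5x5 grid on the unit square with one distorted interior node.
n = 5
x = [[j / (n - 1) for j in range(n)] for i in range(n)]
y = [[i / (n - 1) for j in range(n)] for i in range(n)]
x[2][2] += 0.3  # perturb the centre node
x, y = smooth_grid(x, y)
# The perturbed node relaxes back toward the smooth solution near (0.5, 0.5).
```

    In a full generator the right-hand side of Poisson's equation carries control functions that attract grid lines toward boundaries; watching the solution converge, as GRAPEVINE's GUI does, corresponds to replotting x and y after each sweep.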

  16. Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.

    PubMed

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L

    2017-05-01

    Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.

  17. ERGONOMICS ABSTRACTS 48983-49619.

    ERIC Educational Resources Information Center

    Ministry of Technology, London (England). Warren Spring Lab.

    The literature of ergonomics, or biotechnology, is classified into 15 areas: methods, systems of men and machines, visual and auditory and other inputs and processes, input channels, body measurements, design of controls and integration with displays, layout of panels and consoles, design of work space, clothing and personal equipment, special…

  18. Visual Input to the Drosophila Central Complex by Developmentally and Functionally Distinct Neuronal Populations.

    PubMed

    Omoto, Jaison Jiro; Keleş, Mehmet Fatih; Nguyen, Bao-Chau Minh; Bolanos, Cheyenne; Lovick, Jennifer Kelly; Frye, Mark Arthur; Hartenstein, Volker

    2017-04-24

    The Drosophila central brain consists of stereotyped neural lineages, developmental-structural units of macrocircuitry formed by the sibling neurons of single progenitors called neuroblasts. We demonstrate that the lineage principle guides the connectivity and function of neurons, providing input to the central complex, a collection of neuropil compartments important for visually guided behaviors. One of these compartments is the ellipsoid body (EB), a structure formed largely by the axons of ring (R) neurons, all of which are generated by a single lineage, DALv2. Two further lineages, DALcl1 and DALcl2, produce neurons that connect the anterior optic tubercle, a central brain visual center, with R neurons. Finally, DALcl1/2 receive input from visual projection neurons of the optic lobe medulla, completing a three-legged circuit that we call the anterior visual pathway (AVP). The AVP bears a fundamental resemblance to the sky-compass pathway, a visual navigation circuit described in other insects. Neuroanatomical analysis and two-photon calcium imaging demonstrate that DALcl1 and DALcl2 form two parallel channels, establishing connections with R neurons located in the peripheral and central domains of the EB, respectively. Although neurons of both lineages preferentially respond to bright objects, DALcl1 neurons have small ipsilateral, retinotopically ordered receptive fields, whereas DALcl2 neurons share a large excitatory receptive field in the contralateral hemifield. DALcl2 neurons become inhibited when the object enters the ipsilateral hemifield and display an additional excitation after the object leaves the field of view. Thus, the spatial position of a bright feature, such as a celestial body, may be encoded within this pathway. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Modulation of working memory function by motivation through loss-aversion.

    PubMed

    Krawczyk, Daniel C; D'Esposito, Mark

    2013-04-01

    Cognitive performance is affected by motivation. Few studies, however, have investigated the neural mechanisms of the influence of motivation through potential monetary punishment on working memory. We employed functional MRI during a delayed recognition task that manipulated top-down control demands with added monetary incentives to some trials in the form of potential losses of bonus money. Behavioral performance on the task was influenced by loss-threatening incentives in the form of faster and more accurate performance. As shown previously, we found enhancement of activity for relevant stimuli occurs throughout all task periods (e.g., stimulus encoding, maintenance, and response) in both prefrontal and visual association cortex. Further, these activation patterns were enhanced for trials with possible monetary loss relative to nonincentive trials. During the incentive cue, the amygdala and striatum showed significantly greater activation when money was at a possible loss on the trial. We also evaluated patterns of functional connectivity between regions responsive to monetary consequences and prefrontal areas responsive to the task. This analysis revealed greater delay period connectivity between the left insula and prefrontal cortex with possible monetary loss relative to nonincentive trials. Overall, these results reveal that incentive motivation can modulate performance on working memory tasks through top-down signals via amplification of activity within prefrontal and visual association regions selective to processing the perceptual inputs of the stimuli to be remembered. Copyright © 2011 Wiley Periodicals, Inc.

  20. Modulation of working memory function by motivation through loss-aversion

    PubMed Central

    Krawczyk, Daniel C.; D’Esposito, Mark

    2012-01-01

    Cognitive performance is affected by motivation. Few studies, however, have investigated the neural mechanisms of the influence of motivation through potential monetary punishment on working memory. We employed functional MRI during a delayed recognition task that manipulated top-down control demands with added monetary incentives to some trials in the form of potential losses of bonus money. Behavioral performance on the task was influenced by loss-threatening incentives in the form of faster and more accurate performance. As shown previously, we found enhancement of activity for relevant stimuli occurs throughout all task periods (e.g. stimulus encoding, maintenance, and response) in both prefrontal and visual association cortex. Further, these activation patterns were enhanced for trials with possible monetary loss relative to non-incentive trials. During the incentive cue, the amygdala and striatum showed significantly greater activation when money was at a possible loss on the trial. We also evaluated patterns of functional connectivity between regions responsive to monetary consequences and prefrontal areas responsive to the task. This analysis revealed greater delay period connectivity between the left insula and prefrontal cortex with possible monetary loss relative to non-incentive trials. Overall, these results reveal that incentive motivation can modulate performance on working memory tasks through top-down signals via amplification of activity within prefrontal and visual association regions selective to processing the perceptual inputs of the stimuli to be remembered. PMID:22113962

  1. Interactive Visual Analytics Approach for Exploration of Geochemical Model Simulations with Different Parameter Sets

    NASA Astrophysics Data System (ADS)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2015-04-01

    Many geoscience applications can benefit from testing many combinations of input parameters for geochemical simulation models. It is, however, a challenge to screen the input and output data from the model to identify the significant relationships between input parameters and output variables. To address this problem we propose a Visual Analytics approach that has been developed in an ongoing collaboration between computer science and geoscience researchers. Our Visual Analytics approach uses the visualization methods of hierarchical horizontal axes, multi-factor stacked bar charts and interactive semi-automated filtering for input and output data, together with automatic sensitivity analysis. This guides the users towards significant relationships. We implement our approach as an interactive data exploration tool. It is designed with flexibility in mind, so that a diverse set of tasks such as inverse modeling, sensitivity analysis and model parameter refinement can be supported. Here we demonstrate the capabilities of our approach with two examples from gas storage applications. For the first example, our Visual Analytics approach enabled the analyst to observe how the element concentrations change around previously established baselines in response to thousands of different combinations of mineral phases. This supported combinatorial inverse modeling for interpreting observations about the chemical composition of the formation fluids at the Ketzin pilot site for CO2 storage. The results indicate that, within the experimental error range, the formation fluid cannot be considered at local thermodynamic equilibrium with the mineral assemblage of the reservoir rock. This is a valuable insight from the predictive geochemical modeling for the Ketzin site. For the second example, our approach supports sensitivity analysis for a reaction involving the reductive dissolution of pyrite with formation of pyrrhotite in the presence of gaseous hydrogen. We determine that this reaction is thermodynamically favorable under a broad range of conditions, including low temperatures and the absence of microbial catalysts. Our approach has potential for use in other applications that involve exploration of relationships in geochemical simulation model data.
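    The automatic sensitivity analysis such a tool performs can be approximated, in its simplest form, by screening ensemble inputs against outputs. The sketch below is a hypothetical illustration, assuming the ensemble runs are already tabulated, and using a plain Pearson correlation to rank which input parameter drives the output (the actual tool's filtering and visual methods are far richer):

```python
import math
import random

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy ensemble: the output depends strongly on parameter `a`, weakly on `b`.
random.seed(0)
runs = [(random.random(), random.random()) for _ in range(200)]
output = [3.0 * a + 0.1 * b + 0.01 * random.random() for a, b in runs]

r_a = pearson([a for a, _ in runs], output)
r_b = pearson([b for _, b in runs], output)
# Ranking by |r| flags `a` as the dominant input parameter.
```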

  2. Recurrent V1-V2 interaction in early visual boundary processing.

    PubMed

    Neumann, H; Sepp, W

    1999-11-01

    A majority of cortical areas are connected via feedforward and feedback fiber projections. In feedforward pathways we mainly observe stages of feature detection and integration. The computational role of the descending pathways at different stages of processing remains mainly unknown. Based on empirical findings we suggest that the top-down feedback pathways subserve a context-dependent gain control mechanism. We propose a new computational model for recurrent contour processing in which normalized activities of orientation selective contrast cells are fed forward to the next processing stage. There, the arrangement of input activation is matched against local patterns of contour shape. The resulting activities are subsequently fed back to the previous stage to locally enhance those initial measurements that are consistent with the top-down generated responses. In all, we suggest a computational theory for recurrent processing in the visual cortex in which the significance of local measurements is evaluated on the basis of a broader visual context that is represented in terms of contour code patterns. The model serves as a framework to link physiological with perceptual data gathered in psychophysical experiments. It handles a variety of perceptual phenomena, such as the local grouping of fragmented shape outline, texture surround and density effects, and the interpolation of illusory contours.
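    A minimal sketch of the proposed gain-control loop, under strong simplifying assumptions: feedforward responses are normalized, matched against a stored contour template, and fed back as a multiplicative enhancement of consistent activity (the published model uses orientation-selective contrast cells and contour-shape patterns rather than this toy vector form):

```python
def normalize(v):
    # Divisive normalization of an activity vector.
    s = sum(v)
    return [x / s for x in v]

def recurrent_enhance(feedforward, template, gain=1.0, steps=5):
    # Top-down feedback acts as a modulatory gain: activity consistent
    # with the template is enhanced (factor 1 + gain*fb >= 1), so initial
    # measurements are never suppressed below their driving input.
    act = normalize(feedforward)
    for _ in range(steps):
        fb = [a * t for a, t in zip(act, template)]  # match against template
        act = normalize([a * (1.0 + gain * f) for a, f in zip(act, fb)])
    return act

ff = [0.2, 0.2, 0.2, 0.2, 0.2]    # ambiguous feedforward input
tmpl = [1.0, 1.0, 1.0, 0.0, 0.0]  # stored contour-shape pattern
out = recurrent_enhance(ff, tmpl)
# Units matching the contour template end up with relatively higher activity.
```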

  3. Immersive Earth Science: Data Visualization in Virtual Reality

    NASA Astrophysics Data System (ADS)

    Skolnik, S.; Ramirez-Linan, R.

    2017-12-01

    Utilizing next generation technology, Navteca's exploration of 3D and volumetric temporal data in Virtual Reality (VR) takes advantage of immersive user experiences where stakeholders are literally inside the data. No longer restricted by the edges of a screen, VR provides an innovative way of viewing spatially distributed 2D and 3D data that leverages a 360° field of view and positional-tracking input, allowing users to see and experience data differently. These concepts are relevant to many sectors, industries, and fields of study, as real-time collaboration in VR can enhance understanding and support mission goals through VR visualizations that display temporally-aware 3D, meteorological, and other volumetric datasets. The ability to view data that is traditionally "difficult" to visualize, such as subsurface features or air columns, is a particularly compelling use of the technology. Various development iterations have resulted in Navteca's proof of concept that imports and renders volumetric point-cloud data in the virtual reality environment by interfacing PC-based VR hardware to a back-end server and popular GIS software. The integration of the geo-located data in VR and subsequent display of changeable basemaps, overlaid datasets, and the ability to zoom, navigate, and select specific areas show the potential for immersive VR to revolutionize the way Earth data is viewed, analyzed, and communicated.

  4. A preconscious neural mechanism of hypnotically altered colors: a double case study.

    PubMed

    Koivisto, Mika; Kirjanen, Svetlana; Revonsuo, Antti; Kallio, Sakari

    2013-01-01

    Hypnotic suggestions may change the perceived color of objects. Given that chromatic stimulus information is processed rapidly and automatically by the visual system, how can hypnotic suggestions affect perceived colors in a seemingly immediate fashion? We studied the mechanisms of such color alterations by measuring electroencephalography in two highly suggestible participants as they perceived briefly presented visual shapes under posthypnotic color alternation suggestions such as "all the squares are blue". One participant consistently reported seeing the suggested colors. Her reports correlated with enhanced evoked upper beta-band activity (22 Hz) 70-120 ms after stimulus in response to the shapes mentioned in the suggestion. This effect was not observed in a control condition where the participants merely tried to simulate the effects of the suggestion on behavior. The second participant neither reported color alterations nor showed the evoked beta activity, although her subjective experience and event-related potentials were changed by the suggestions. The results indicate a preconscious mechanism that first compares early visual input with a memory representation of the suggestion and consequently triggers the color alteration process in response to the objects specified by the suggestion. Conscious color experience is not purely the result of bottom-up processing but it can be modulated, at least in some individuals, by top-down factors such as hypnotic suggestions.

  5. The Energetics and Physiological Impact of Cohesin Extrusion.

    PubMed

    Vian, Laura; Pękowska, Aleksandra; Rao, Suhas S P; Kieffer-Kwon, Kyong-Rim; Jung, Seolkyoung; Baranello, Laura; Huang, Su-Chen; El Khattabi, Laila; Dose, Marei; Pruett, Nathanael; Sanborn, Adrian L; Canela, Andres; Maman, Yaakov; Oksanen, Anna; Resch, Wolfgang; Li, Xingwang; Lee, Byoungkoo; Kovalchuk, Alexander L; Tang, Zhonghui; Nelson, Steevenson; Di Pierro, Michele; Cheng, Ryan R; Machol, Ido; St Hilaire, Brian Glenn; Durand, Neva C; Shamim, Muhammad S; Stamenova, Elena K; Onuchic, José N; Ruan, Yijun; Nussenzweig, Andre; Levens, David; Aiden, Erez Lieberman; Casellas, Rafael

    2018-05-17

    Cohesin extrusion is thought to play a central role in establishing the architecture of mammalian genomes. However, extrusion has not been visualized in vivo, and thus, its functional impact and energetics are unknown. Using ultra-deep Hi-C, we show that loop domains form by a process that requires cohesin ATPases. Once formed, however, loops and compartments are maintained for hours without energy input. Strikingly, without ATP, we observe the emergence of hundreds of CTCF-independent loops that link regulatory DNA. We also identify architectural "stripes," where a loop anchor interacts with entire domains at high frequency. Stripes often tether super-enhancers to cognate promoters, and in B cells, they facilitate Igh transcription and recombination. Stripe anchors represent major hotspots for topoisomerase-mediated lesions, which promote chromosomal translocations and cancer. In plasmacytomas, stripes can deregulate Igh-translocated oncogenes. We propose that higher organisms have coopted cohesin extrusion to enhance transcription and recombination, with implications for tumor development. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Advanced integrated enhanced vision systems

    NASA Astrophysics Data System (ADS)

    Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha

    2003-09-01

    In anticipation of its ultimate role in transport, business and rotary wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.
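    The "optimal, Bayesian multi-sensor fusion" the abstract refers to can be illustrated, in its simplest Gaussian form, by inverse-variance weighting of two independent sensor estimates; this is a generic textbook sketch, not the EVS processor's actual neural-net implementation:

```python
# Minimal Bayesian fusion of two noisy sensor estimates of the same
# quantity (e.g. range to terrain), assuming independent Gaussian errors:
# the fused estimate is the inverse-variance weighted mean, and the fused
# variance is always smaller than either sensor's alone.

def fuse(x1, var1, x2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# A precise sensor reading dominates a noisy one.
x, v = fuse(100.0, 1.0, 110.0, 9.0)
print(round(x, 2), round(v, 2))  # 101.0 0.9
```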

  7. NBodyLab Simulation Experiments with GRAPE-6a and MD-GRAPE2 Acceleration

    NASA Astrophysics Data System (ADS)

    Johnson, V.; Ates, A.

    2005-12-01

    NBodyLab is an astrophysical N-body simulation testbed for student research. It is accessible via a web interface and runs as a backend framework under Linux. NBodyLab can generate data models or perform star catalog lookups, transform input data sets, perform direct summation gravitational force calculations using a variety of integration schemes, and produce analysis and visualization output products. NEMO (Teuben 1994), a popular stellar dynamics toolbox, is used for some functions. NBodyLab integrators can optionally utilize two types of low-cost desktop supercomputer accelerators, the newly available GRAPE-6a (125 Gflops peak) and the MD-GRAPE2 (64-128 Gflops peak). The initial version of NBodyLab was presented at ADASS 2002. This paper summarizes software enhancements developed subsequently, focusing on GRAPE-6a related enhancements, and gives examples of computational experiments and astrophysical research, including star cluster and solar system studies, that can be conducted with the new testbed functionality.
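    The core of such a testbed's force calculation is direct summation over all pairs of bodies, the O(N²) kernel that GRAPE hardware accelerates. Below is a minimal pure-Python sketch, assuming units with G = 1 and a softening length, both standard conventions in toy N-body codes:

```python
import math

# Direct-summation gravitational accelerations: every pair of bodies
# contributes, giving O(N^2) work per step. Units are chosen so G = 1;
# `eps` is a softening length that avoids the singularity at zero
# separation between close bodies.

def accelerations(pos, mass, eps=1e-3):
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + eps * eps
            inv_r3 = 1.0 / (r2 * math.sqrt(r2))
            for k in range(3):
                acc[i][k] += mass[j] * dx[k] * inv_r3
    return acc

# Two equal unit masses one unit apart attract each other equally.
pos = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
mass = [1.0, 1.0]
a = accelerations(pos, mass)
# a[0][0] is ~ +1 and a[1][0] is ~ -1 (equal and opposite).
```

    A GRAPE board evaluates exactly this pairwise sum in hardware, leaving the host to do the time integration.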

  8. Flexible Environmental Modeling with Python and Open-GIS

    NASA Astrophysics Data System (ADS)

    Pryet, Alexandre; Atteia, Olivier; Delottier, Hugo; Cousquer, Yohann

    2015-04-01

    Numerical modeling now represents a prominent task of environmental studies. During the last decades, numerous commercial programs have been made available to environmental modelers. These software applications offer user-friendly graphical user interfaces that allow efficient management of many case studies. However, they suffer from a lack of flexibility, and closed-source policies impede source code reviewing and enhancement for original studies. Advanced modeling studies require flexible tools capable of managing thousands of model runs for parameter optimization, uncertainty and sensitivity analysis. In addition, there is a growing need for the coupling of various numerical models associating, for instance, groundwater flow modeling to multi-species geochemical reactions. Researchers have produced hundreds of powerful open-source command line programs. However, there is a need for a flexible graphical user interface allowing efficient processing of the geospatial data that accompanies any environmental study. Here, we present the advantages of using the free and open-source QGIS platform and the Python scripting language for conducting environmental modeling studies. The interactive graphical user interface is first used for the visualization and pre-processing of input geospatial datasets. The Python scripting language is then employed for further input data processing, calls to one or several models, and post-processing of model outputs. Model results are eventually sent back to the GIS program, processed and visualized. This approach combines the advantages of interactive graphical interfaces and the flexibility of the Python scripting language for data processing and model calls. The numerous Python modules available facilitate geospatial data processing and numerical analysis of model outputs. Once input data has been prepared with the graphical user interface, models may be run thousands of times from the command line with sequential or parallel calls.
We illustrate this approach with several case studies in groundwater hydrology and geochemistry and provide links to several python libraries that facilitate pre- and post-processing operations.
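    The batch-calling pattern the abstract describes can be sketched in a few lines of Python; the toy model function below is a hypothetical stand-in for an external solver that would in practice be invoked, e.g., via subprocess:

```python
import itertools
from concurrent.futures import ThreadPoolExecutor

def run_model(params):
    # Stand-in for an external model call (in practice: subprocess.run
    # on the solver executable, then parse its output files).
    k, n = params
    return k * n  # toy output

# Cartesian product of parameter values -> one model run per combination.
k_values = [0.1, 0.2, 0.3]
n_values = [1, 2]
param_sets = list(itertools.product(k_values, n_values))

# Threads suffice when each run is an external process; a
# ProcessPoolExecutor would suit CPU-bound in-process models instead.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_model, param_sets))

print(len(results))  # one output per parameter combination
```

    `Executor.map` preserves input order, so each result can be joined back to its parameter set for the post-processing and GIS visualization step.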

  9. Drawing enhances cross-modal memory plasticity in the human brain: a case study in a totally blind adult

    PubMed Central

    Likova, Lora T.

    2012-01-01

    In a memory-guided drawing task under blindfolded conditions, we have recently used functional Magnetic Resonance Imaging (fMRI) to demonstrate that the primary visual cortex (V1) may operate as the visuo-spatial buffer, or “sketchpad,” for working memory. The results implied, however, a modality-independent or amodal form of its operation. In the present study, to validate the role of V1 in non-visual memory, we eliminated not only the visual input but all levels of visual processing by replicating the paradigm in a congenitally blind individual. Our novel Cognitive-Kinesthetic method was used to train this totally blind subject to draw complex images guided solely by tactile memory. Control tasks of tactile exploration and memorization of the image to be drawn, and memory-free scribbling were also included. FMRI was run before training and after training. Remarkably, V1 of this congenitally blind individual, which before training exhibited noisy, immature, and non-specific responses, after training produced full-fledged response time-courses specific to the tactile-memory drawing task. The results reveal the operation of a rapid training-based plasticity mechanism that recruits the resources of V1 in the process of learning to draw. The learning paradigm allowed us to investigate for the first time the evolution of plastic re-assignment in V1 in a congenitally blind subject. These findings are consistent with a non-visual memory involvement of V1, and specifically imply that the observed cortical reorganization can be empowered by the process of learning to draw. PMID:22593738

  10. Drawing enhances cross-modal memory plasticity in the human brain: a case study in a totally blind adult.

    PubMed

    Likova, Lora T

    2012-01-01

    In a memory-guided drawing task under blindfolded conditions, we have recently used functional Magnetic Resonance Imaging (fMRI) to demonstrate that the primary visual cortex (V1) may operate as the visuo-spatial buffer, or "sketchpad," for working memory. The results implied, however, a modality-independent or amodal form of its operation. In the present study, to validate the role of V1 in non-visual memory, we eliminated not only the visual input but all levels of visual processing by replicating the paradigm in a congenitally blind individual. Our novel Cognitive-Kinesthetic method was used to train this totally blind subject to draw complex images guided solely by tactile memory. Control tasks of tactile exploration and memorization of the image to be drawn, and memory-free scribbling were also included. FMRI was run before training and after training. Remarkably, V1 of this congenitally blind individual, which before training exhibited noisy, immature, and non-specific responses, after training produced full-fledged response time-courses specific to the tactile-memory drawing task. The results reveal the operation of a rapid training-based plasticity mechanism that recruits the resources of V1 in the process of learning to draw. The learning paradigm allowed us to investigate for the first time the evolution of plastic re-assignment in V1 in a congenitally blind subject. These findings are consistent with a non-visual memory involvement of V1, and specifically imply that the observed cortical reorganization can be empowered by the process of learning to draw.

  11. Integrating Patient-Reported Outcomes into Spine Surgical Care through Visual Dashboards: Lessons Learned from Human-Centered Design.

    PubMed

    Hartzler, Andrea L; Chaudhuri, Shomir; Fey, Brett C; Flum, David R; Lavallee, Danielle

    2015-01-01

    The collection of patient-reported outcomes (PROs) draws attention to issues of importance to patients: physical function and quality of life. The integration of PRO data into clinical decisions and discussions with patients requires thoughtful design of user-friendly interfaces that consider user experience and present data in personalized ways to enhance patient care. Whereas most prior work on PROs focuses on capturing data from patients, little research details how to design effective user interfaces that facilitate use of this data in clinical practice. We share lessons learned from engaging health care professionals to inform design of visual dashboards, an emerging type of health information technology (HIT). We employed human-centered design (HCD) methods to create visual displays of PROs to support patient care and quality improvement. HCD aims to optimize the design of interactive systems through iterative input from representative users who are likely to use the system in the future. Through three major steps, we engaged health care professionals in targeted, iterative design activities to inform the development of a PRO Dashboard that visually displays patient-reported pain and disability outcomes following spine surgery. Design activities to engage health care administrators, providers, and staff guided our work from design concept to specifications for dashboard implementation. Stakeholder feedback from these health care professionals shaped user interface design features, including predefined overviews that illustrate at-a-glance trends and quarterly snapshots, granular data filters that enable users to dive into detailed PRO analytics, and user-defined views to share and reuse. Feedback also revealed important considerations for quality indicators and privacy-preserving sharing and use of PROs. 
Our work illustrates a range of engagement methods guided by human-centered principles and design recommendations for optimizing PRO Dashboards for patient care and quality improvement. Engaging health care professionals as stakeholders is a critical step toward the design of user-friendly HIT that is accepted, usable, and has the potential to enhance quality of care and patient outcomes.

  12. Does a Flatter General Gradient of Visual Attention Explain Peripheral Advantages and Central Deficits in Deaf Adults?

    PubMed Central

    Samar, Vincent J.; Berger, Lauren

    2017-01-01

Individuals deaf from an early age often outperform hearing individuals in the visual periphery on attention-dependent dorsal stream tasks (e.g., spatial localization or movement detection), but sometimes show central visual attention deficits, usually on ventral stream object identification tasks. It has been proposed that early deafness adaptively redirects attentional resources from central to peripheral vision to monitor extrapersonal space in the absence of auditory cues, producing a more evenly distributed attention gradient across visual space. However, little direct evidence exists that peripheral advantages are functionally tied to central deficits, rather than determined by independent mechanisms, and previous studies using several attention tasks typically report peripheral advantages or central deficits, not both. To test the general altered attentional gradient proposal, we employed a novel divided attention paradigm that measured target localization performance along a gradient from parafoveal to peripheral locations, independent of concurrent central object identification performance in prelingually deaf and hearing groups who differed in access to auditory input. Deaf participants without cochlear implants (No-CI), with cochlear implants (CI), and hearing participants identified vehicles presented centrally, and concurrently reported the location of parafoveal (1.4°) and peripheral (13.3°) targets among distractors. No-CI participants but not CI participants showed a central identification accuracy deficit. However, all groups displayed equivalent target localization accuracy at peripheral and parafoveal locations and nearly parallel parafoveal-peripheral gradients. Furthermore, the No-CI group’s central identification deficit remained after statistically controlling for peripheral performance; conversely, the parafoveal and peripheral group performance equivalencies remained after controlling for central identification accuracy. 
These results suggest that, in the absence of auditory input, reduced central attentional capacity is not necessarily associated with enhanced peripheral attentional capacity or with flattening of a general attention gradient. Our findings converge with earlier studies suggesting that a general graded trade-off of attentional resources across the visual field does not adequately explain the complex task-dependent spatial distribution of deaf-hearing performance differences reported in the literature. Rather, growing evidence suggests that the spatial distribution of attention-mediated performance in deaf people is determined by sophisticated cross-modal plasticity mechanisms that recruit specific sensory and polymodal cortex to achieve specific compensatory processing goals. PMID:28559861

  13. [Visual input affects the expression of the early genes c-Fos and ZENK in auditory telencephalic centers of pied flycatcher nestlings during the acoustically-guided freezing].

    PubMed

    Korneeva, E V; Tiunova, A A; Aleksandrov, L I; Golubeva, T B; Anokhin, K V

    2014-01-01

    The present study analyzed expression of the transcription factors c-Fos and ZENK in the telencephalic auditory centers (field L, caudomedial nidopallium, and caudomedial mesopallium) of 9-day-old pied flycatcher nestlings (Ficedula hypoleuca), centers involved in acoustically-guided defense behavior. A species-typical alarm call was presented to the young in three groups: (1) an intact group (sighted control); (2) nestlings visually deprived for a short time just before the experiment (unsighted control); and (3) nestlings visually deprived right after hatching (experimental deprivation). Induction of both c-Fos and ZENK in nestlings from the experimental deprivation group was decreased in both hemispheres as compared with the intact group. In the unsighted control group, only a decrease of c-Fos induction was observed, and exclusively in the right hemisphere. These findings suggest that limitation of visual input changes the population of neurons involved in acoustically-guided behavior, the effect depending on the duration of deprivation.

  14. Dissociation and Convergence of the Dorsal and Ventral Visual Streams in the Human Prefrontal Cortex

    PubMed Central

    Takahashi, Emi; Ohki, Kenichi; Kim, Dae-Shik

    2012-01-01

    Visual information is largely processed through two pathways in the primate brain: an object pathway from the primary visual cortex to the temporal cortex (ventral stream) and a spatial pathway to the parietal cortex (dorsal stream). Whether and to what extent such a dissociation exists in the human prefrontal cortex (PFC) has long been debated. We examined anatomical connections from functionally defined areas in the temporal and parietal cortices to the PFC, using noninvasive functional and diffusion-weighted magnetic resonance imaging. The right inferior frontal gyrus (IFG) received converging input from both streams, while the right superior frontal gyrus received input only from the dorsal stream. Interstream functional connectivity to the IFG was dynamically recruited only when both object and spatial information were processed. These results suggest that the human PFC receives dissociated and converging visual pathways, and that the right IFG region serves as an integrator of the two types of information. PMID:23063444

  15. Direct visuomotor mapping for fast visually-evoked arm movements.

    PubMed

    Reynolds, Raymond F; Day, Brian L

    2012-12-01

    In contrast to conventional reaction time (RT) tasks, saccadic RTs to visual targets are very fast and unaffected by the number of possible targets. This can be explained by the sub-cortical circuitry underlying eye movements, which involves direct mapping between retinal input and motor output in the superior colliculus. Here we asked if the choice-invariance established for the eyes also applies to a special class of fast visuomotor responses of the upper limb. Using a target-pointing paradigm we observed very fast reaction times (<150 ms) that were completely unaffected as the number of possible target choices was increased from 1 to 4. When we introduced a condition of altered stimulus-response mapping, RT went up and a cost of choice was observed. These results can be explained by direct mapping between visual input and motor output, compatible with a sub-cortical pathway for visual control of the upper limb.

  16. Looking for ideas: Eye behavior during goal-directed internally focused cognition

    PubMed Central

    Walcher, Sonja; Körner, Christof; Benedek, Mathias

    2017-01-01

    Humans have a highly developed visual system, yet we spend a high proportion of our time awake ignoring the visual world and attending to our own thoughts. The present study examined eye movement characteristics of goal-directed internally focused cognition. Deliberate internally focused cognition was induced by an idea generation task. A letter-by-letter reading task served as the external task. Idea generation (vs. reading) was associated with more and longer blinks and fewer microsaccades, indicating an attenuation of visual input. Idea generation was further associated with more and shorter fixations, more saccades and saccades with higher amplitudes, as well as heightened stimulus-independent variation of eye vergence. The latter results suggest a coupling of eye behavior to internally generated information and associated cognitive processes, i.e., searching for ideas. Our results support eye behavior patterns as indicators of goal-directed internally focused cognition through mechanisms of attenuation of visual input and coupling of eye behavior to internally generated information. PMID:28689088

  17. A Multifactor Approach to Research in Instructional Technology.

    ERIC Educational Resources Information Center

    Ragan, Tillman J.

    In a field such as instructional design, explanations of educational outcomes must necessarily consider multiple input variables. To adequately understand the contribution made by the independent variables, it is helpful to have a visual conception of how the input variables interrelate. Two variable models are adequately represented by a two…

  18. Response Modality Variations Affect Determinations of Children's Learning Styles.

    ERIC Educational Resources Information Center

    Janowitz, Jeffrey M.

    The Swassing-Barbe Modality Index (SBMI) uses visual, auditory, and tactile inputs, but only reconstructed output, to measure children's modality strengths. In this experiment, the SBMI's three input modalities were crossed with two output modalities (spoken and drawn) in addition to the reconstructed standard to result in nine treatment…

  19. The Influence of Visual Feedback and Register Changes on Sign Language Production: A Kinematic Study with Deaf Signers

    ERIC Educational Resources Information Center

    Emmorey, Karen; Gertsberg, Nelly; Korpics, Franco; Wright, Charles E.

    2009-01-01

    Speakers monitor their speech output by listening to their own voice. However, signers do not look directly at their hands and cannot see their own face. We investigated the importance of a visual perceptual loop for sign language monitoring by examining whether changes in visual input alter sign production. Deaf signers produced American Sign…

  20. The role of pulvinar in the transmission of information in the visual hierarchy.

    PubMed

    Cortes, Nelson; van Vreeswijk, Carl

    2012-01-01

    Visual receptive field (RF) attributes in the visual cortex of primates have been explained mainly from cortical connections: visual RFs progress from simple to complex through cortico-cortical pathways from lower to higher levels in the visual hierarchy. This feedforward flow of information is paired with top-down processes through the feedback pathway. Although the hierarchical organization explains the spatial properties of RFs, it is unclear how a non-linear transmission of activity through the visual hierarchy can yield smooth contrast response functions at all levels of the hierarchy. Depending on the gain, non-linear transfer functions create either a bimodal response to contrast, or no contrast dependence of the response, in the highest level of the hierarchy. One possible mechanism to regulate this transmission of visual contrast information from low to high levels involves an external component that shortcuts the flow of information through the hierarchy. A candidate for this shortcut is the pulvinar nucleus of the thalamus. To investigate the representation of stimulus contrast, a hierarchical model network of ten cortical areas is examined. In each level of the network, the activity from the previous layer is integrated and then non-linearly transmitted to the next level. The arrangement of interactions creates a gradient from simple to complex RFs of increasing size as one moves from lower to higher cortical levels. The visual input is modeled as a Gaussian random input whose width codes for the contrast. This input is applied to the first area. The ratio of output activity among different contrast values is analyzed at the last level to assess sensitivity to contrast and contrast-invariant tuning. For a purely cortical system, the output of the last area can be approximately contrast invariant, but the sensitivity to contrast is poor. 
To account for an alternative visual processing pathway, a parallel, pulvinar-like structure of nine areas is coupled to the system through non-reciprocal connections. Compared with the pure feedforward model, the cortico-pulvino-cortical output is much more sensitive to contrast and shows a similar level of contrast invariance of the tuning.

  1. The Role of Pulvinar in the Transmission of Information in the Visual Hierarchy

    PubMed Central

    Cortes, Nelson; van Vreeswijk, Carl

    2012-01-01

    Visual receptive field (RF) attributes in the visual cortex of primates have been explained mainly from cortical connections: visual RFs progress from simple to complex through cortico-cortical pathways from lower to higher levels in the visual hierarchy. This feedforward flow of information is paired with top-down processes through the feedback pathway. Although the hierarchical organization explains the spatial properties of RFs, it is unclear how a non-linear transmission of activity through the visual hierarchy can yield smooth contrast response functions at all levels of the hierarchy. Depending on the gain, non-linear transfer functions create either a bimodal response to contrast, or no contrast dependence of the response, in the highest level of the hierarchy. One possible mechanism to regulate this transmission of visual contrast information from low to high levels involves an external component that shortcuts the flow of information through the hierarchy. A candidate for this shortcut is the pulvinar nucleus of the thalamus. To investigate the representation of stimulus contrast, a hierarchical model network of ten cortical areas is examined. In each level of the network, the activity from the previous layer is integrated and then non-linearly transmitted to the next level. The arrangement of interactions creates a gradient from simple to complex RFs of increasing size as one moves from lower to higher cortical levels. The visual input is modeled as a Gaussian random input whose width codes for the contrast. This input is applied to the first area. The ratio of output activity among different contrast values is analyzed at the last level to assess sensitivity to contrast and contrast-invariant tuning. For a purely cortical system, the output of the last area can be approximately contrast invariant, but the sensitivity to contrast is poor. 
To account for an alternative visual processing pathway, a parallel, pulvinar-like structure of nine areas is coupled to the system through non-reciprocal connections. Compared with the pure feedforward model, the cortico-pulvino-cortical output is much more sensitive to contrast and shows a similar level of contrast invariance of the tuning. PMID:22654750
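    The problem the abstract describes, and the effect of a shortcut, can be illustrated with a toy scalar cascade. This is a minimal sketch, not the authors' model: the sigmoid transfer function, the gain of 8, the mid-point of 0.5, and the 50/50 mixing weight are all assumed for illustration. Repeated application of a steep non-linearity collapses mid-range contrasts toward one of two extremes (a bimodal response); mixing in a first-level "shortcut" signal at every stage, crudely standing in for a pulvinar-like pathway, preserves a graded response to contrast:

    ```python
    import math

    def sigmoid(x, gain=8.0):
        """Steep non-linear transfer function centered on 0.5 (assumed form)."""
        return 1.0 / (1.0 + math.exp(-gain * (x - 0.5)))

    def cascade(contrast, levels=10, gain=8.0, shortcut=0.0):
        """Propagate a scalar activity through a chain of non-linear stages.
        shortcut=0 is the pure feedforward hierarchy; shortcut>0 mixes the
        first-level response back in at every stage."""
        a = contrast
        first = sigmoid(contrast, gain)
        for _ in range(levels):
            a = (1 - shortcut) * sigmoid(a, gain) + shortcut * first
        return a

    # Pure feedforward: mid-range contrasts saturate toward 0 or 1 (bimodal).
    ff = [cascade(c) for c in (0.3, 0.45, 0.55, 0.7)]
    # With a shortcut: the output varies smoothly with contrast.
    sc = [cascade(c, shortcut=0.5) for c in (0.3, 0.45, 0.55, 0.7)]
    ```

    In this caricature the shortcut improves contrast sensitivity for the same reason the abstract gives: it re-injects graded low-level information that the repeated non-linearity would otherwise destroy.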

  2. Locomotor sensory organization test: a novel paradigm for the assessment of sensory contributions in gait.

    PubMed

    Chien, Jung Hung; Eikema, Diderik-Jan Anthony; Mukherjee, Mukul; Stergiou, Nicholas

    2014-12-01

    Feedback-based balance control requires the integration of visual, proprioceptive and vestibular input to detect the body's movement within the environment. When the accuracy of sensory signals is compromised, the system reorganizes the relative contributions through a process of sensory recalibration, for upright postural stability to be maintained. Whereas this process has been studied extensively in standing using the Sensory Organization Test (SOT), less is known about these processes in more dynamic tasks such as locomotion. In the present study, ten healthy young adults performed the six conditions of the traditional SOT to quantify standing postural control when exposed to sensory conflict. The same subjects performed these six conditions using a novel experimental paradigm, the Locomotor SOT (LSOT), to study dynamic postural control during walking under similar types of sensory conflict. To quantify postural control during walking, the net center-of-pressure sway variability was used. This corresponds to the Performance Index of the center of pressure trajectory, which is used to quantify postural control during standing. Our results indicate that dynamic balance control during locomotion in healthy individuals is affected by the systematic manipulation of multisensory inputs. The sway variability patterns observed during locomotion reflect similar balance performance with standing posture, indicating that similar feedback processes may be involved. However, the contribution of visual input is significantly increased during locomotion, compared to standing in similar sensory conflict conditions. The increased visual gain in the LSOT conditions reflects the importance of visual input for the control of locomotion. Since balance perturbations tend to occur in dynamic tasks and in response to environmental constraints not present during the SOT, the LSOT may provide additional information for clinical evaluation of healthy and deficient sensory processing.

  3. The dark side of the alpha rhythm: fMRI evidence for induced alpha modulation during complete darkness.

    PubMed

    Ben-Simon, Eti; Podlipsky, Ilana; Okon-Singer, Hadas; Gruberger, Michal; Cvetkovic, Dean; Intrator, Nathan; Hendler, Talma

    2013-03-01

    The unique role of the EEG alpha rhythm in different states of cortical activity is still debated. The main theories regarding alpha function posit either sensory processing or attention allocation as the main processes governing its modulation. Closing and opening the eyes, a well-known manipulation of the alpha rhythm, can be regarded as a shift of attention between inward and outward focus, although in the light it is also accompanied by a change in visual input. To disentangle the effects of attention allocation and sensory visual input on alpha modulation, 14 healthy subjects were asked to open and close their eyes during conditions of light and of complete darkness while simultaneous recordings of EEG and fMRI were acquired. Thus, during complete darkness the eyes-open condition is not related to visual input but only to attention allocation, allowing direct examination of its role in alpha modulation. A data-driven ridge regression classifier was applied to the EEG data in order to ascertain the contribution of the alpha rhythm to eyes-open/eyes-closed inference in both lighting conditions. Classifier results revealed significant alpha contribution during both light and dark conditions, suggesting that alpha rhythm modulation is closely linked to the change in the direction of attention regardless of the presence of visual sensory input. Furthermore, fMRI activation maps derived from an alpha modulation time-course during the complete darkness condition exhibited a right frontal cortical network associated with attention allocation. These findings support the importance of top-down processes such as attention allocation to alpha rhythm modulation, possibly as a prerequisite to its known bottom-up processing of sensory input.

  4. A hexagonal orthogonal-oriented pyramid as a model of image representation in visual cortex

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1989-01-01

    Retinal ganglion cells represent the visual image with a spatial code, in which each cell conveys information about a small region in the image. In contrast, cells of the primary visual cortex use a hybrid space-frequency code in which each cell conveys information about a region that is local in space, spatial frequency, and orientation. A mathematical model for this transformation is described. The hexagonal orthogonal-oriented quadrature pyramid (HOP) transform, which operates on a hexagonal input lattice, uses basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The basis functions, which are generated from seven basic types through a recursive process, form an image code of the pyramid type. The seven basis functions, six bandpass and one low-pass, occupy a point and a hexagon of six nearest neighbors on a hexagonal lattice. The six bandpass basis functions consist of three with even symmetry, and three with odd symmetry. At the lowest level, the inputs are image samples. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing √7 times larger than that of the previous level, so that the number of coefficients is reduced by a factor of seven at each level. In the biological model, the input lattice is the retinal ganglion cell array. The resulting scheme provides a compact, efficient code of the image and generates receptive fields that resemble those of the primary visual cortex.
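    The factor-of-seven bookkeeping implied by this scheme can be sketched with a small, hypothetical helper. It only counts coefficients per pyramid level (each level keeps 1/7 of its input positions, each carrying the 7 basis-function coefficients, with the low-pass coefficients feeding the next level); it does not implement the HOP transform itself:

    ```python
    def hop_level_sizes(n_samples, levels=None):
        """Coefficient counts per level of a factor-7 pyramid (HOP-like).
        Each level keeps n // 7 lattice positions; each kept position carries
        7 coefficients (6 bandpass + 1 low-pass), so a level emits as many
        coefficients as it receives inputs. The low-pass outputs (1/7 of the
        input) form the next level's input lattice."""
        sizes = []
        n = n_samples
        while n >= 7 and (levels is None or len(sizes) < levels):
            positions = n // 7            # lattice spacing grows by sqrt(7)
            sizes.append(positions * 7)   # 7 coefficients per position
            n = positions                 # low-pass coefficients feed upward
        return sizes

    # With 7**4 = 2401 input samples, successive levels handle
    # 2401, 343, 49, and 7 coefficients.
    pyramid = hop_level_sizes(7 ** 4)
    ```

    The geometric reduction is what makes the code compact: total coefficient count across all levels is a convergent geometric series in the image size.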

  5. Visual just noticeable differences

    NASA Astrophysics Data System (ADS)

    Nankivil, Derek; Chen, Minghan; Wooley, C. Benjamin

    2018-02-01

    A visual just noticeable difference (VJND) is the amount of change in either an image (e.g. a photographic print) or in vision (e.g. due to a change in refractive power of a vision correction device or visually coupled optical system) that is just noticeable when compared with the prior state. Numerous theoretical and clinical studies have been performed to determine the amount of change in various visual inputs (power, spherical aberration, astigmatism, etc.) that result in a just noticeable visual change. Each of these approaches, in defining a VJND, relies on the comparison of two visual stimuli. The first stimulus is the nominal or baseline state and the second is the perturbed state that results in a VJND. Using this commonality, we converted each result to the change in the area of the modulation transfer function (AMTF) to provide a more fundamental understanding of what results in a VJND. We performed an analysis of the wavefront criteria from basic optics, the image quality metrics, and clinical studies testing various visual inputs, showing that fractional changes in AMTF resulting in one VJND range from 0.025 to 0.075. In addition, cycloplegia appears to desensitize the human visual system so that a much larger change in the retinal image is required to give a VJND. This finding may be of great import for clinical vision tests. Finally, we present applications of the VJND model for the determination of threshold ocular aberrations and manufacturing tolerances of visually coupled optical systems.
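    The AMTF criterion described above can be sketched numerically. This is a toy illustration, not the authors' analysis: the exponential MTF curves and the 0.05 criterion are assumed stand-ins (the abstract reports criteria between roughly 0.025 and 0.075):

    ```python
    import math

    def amtf(mtf, freqs):
        """Trapezoidal area under a sampled modulation transfer function."""
        return sum(0.5 * (mtf[i] + mtf[i + 1]) * (freqs[i + 1] - freqs[i])
                   for i in range(len(freqs) - 1))

    def is_noticeable(mtf_base, mtf_pert, freqs, criterion=0.05):
        """One VJND when the fractional change in AMTF exceeds the criterion."""
        a0 = amtf(mtf_base, freqs)
        return abs(amtf(mtf_pert, freqs) - a0) / a0 >= criterion

    # Toy exponential MTFs (illustrative stand-ins, not measured data).
    freqs = [0.5 * (i + 1) for i in range(60)]      # 0.5 to 30 cyc/deg
    base = [math.exp(-f / 10) for f in freqs]       # baseline state
    blurred = [math.exp(-f / 9) for f in freqs]     # clearly degraded
    slight = [math.exp(-f / 9.9) for f in freqs]    # sub-threshold change
    ```

    The point of the conversion is that disparate perturbations (power, spherical aberration, astigmatism) become comparable once expressed as a fractional change of a single scalar, the area under the MTF.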

  6. Neural organization and visual processing in the anterior optic tubercle of the honeybee brain.

    PubMed

    Mota, Theo; Yamagata, Nobuhiro; Giurfa, Martin; Gronenberg, Wulfila; Sandoz, Jean-Christophe

    2011-08-10

    The honeybee Apis mellifera represents a valuable model for studying the neural segregation and integration of visual information. Vision in honeybees has been extensively studied at the behavioral level and, to a lesser degree, at the physiological level using intracellular electrophysiological recordings of single neurons. However, our knowledge of visual processing in honeybees is still limited by the lack of functional studies of visual processing at the circuit level. Here we contribute to filling this gap by providing a neuroanatomical and neurophysiological characterization at the circuit level of a practically unstudied visual area of the bee brain, the anterior optic tubercle (AOTu). First, we analyzed the internal organization and neuronal connections of the AOTu. Second, we established a novel protocol for performing optophysiological recordings of visual circuit activity in the honeybee brain and studied the responses of AOTu interneurons during stimulation of distinct eye regions. Our neuroanatomical data show an intricate compartmentalization and connectivity of the AOTu, revealing a dorsoventral segregation of the visual input to the AOTu. Light stimuli presented in different parts of the visual field (dorsal, lateral, or ventral) induce distinct patterns of activation in AOTu output interneurons, retaining to some extent the dorsoventral input segregation revealed by our neuroanatomical data. In particular, activity patterns evoked by dorsal and ventral eye stimulation are clearly segregated into distinct AOTu subunits. Our results therefore suggest an involvement of the AOTu in the processing of dorsoventrally segregated visual information in the honeybee brain.

  7. Holistic Face Categorization in Higher Order Visual Areas of the Normal and Prosopagnosic Brain: Toward a Non-Hierarchical View of Face Perception

    PubMed Central

    Rossion, Bruno; Dricot, Laurence; Goebel, Rainer; Busigny, Thomas

    2011-01-01

    How a visual stimulus is initially categorized as a face in a network of human brain areas remains largely unclear. Hierarchical neuro-computational models of face perception assume that the visual stimulus is first decomposed in local parts in lower order visual areas. These parts would then be combined into a global representation in higher order face-sensitive areas of the occipito-temporal cortex. Here we tested this view in fMRI with visual stimuli that are categorized as faces based on their global configuration rather than their local parts (two-tone Mooney figures and Arcimboldo's facelike paintings). Compared to the same inverted visual stimuli that are not categorized as faces, these stimuli activated the right middle fusiform gyrus (“Fusiform face area”) and superior temporal sulcus (pSTS), with no significant activation in the posteriorly located inferior occipital gyrus (i.e., no “occipital face area”). This observation is strengthened by behavioral and neural evidence for normal face categorization of these stimuli in a brain-damaged prosopagnosic patient whose intact right middle fusiform gyrus and superior temporal sulcus are devoid of any potential face-sensitive inputs from the lesioned right inferior occipital cortex. Together, these observations indicate that face-preferential activation may emerge in higher order visual areas of the right hemisphere without any face-preferential inputs from lower order visual areas, supporting a non-hierarchical view of face perception in the visual cortex. PMID:21267432

  8. Biologically based machine vision: signal analysis of monopolar cells in the visual system of Musca domestica.

    PubMed

    Newton, Jenny; Barrett, Steven F; Wilcox, Michael J; Popp, Stephanie

    2002-01-01

    Machine vision for navigational purposes is a rapidly growing field. Many abilities such as object recognition and target tracking rely on vision. Autonomous vehicles must be able to navigate in dynamic environments and simultaneously locate a target position. Traditional machine vision often fails to react in real time because of large computational requirements, whereas the fly achieves complex orientation and navigation with a relatively small and simple brain. Understanding how the fly extracts visual information and how neurons encode and process information could lead us to a new approach for machine vision applications. Photoreceptors in the Musca domestica eye that share the same spatial information converge into a structure called the cartridge. The cartridge consists of the photoreceptor axon terminals and monopolar cells L1, L2, and L4. It is thought that L1 and L2 cells encode edge-related information relative to a single cartridge. These cells are thought to be equivalent to vertebrate bipolar cells, producing contrast enhancement and reduction of information sent to L4. Monopolar cell L4 is thought to perform image segmentation on the information input from L1 and L2 and also enhance edge detection. A mesh of interconnected L4 cells would correlate the output from L1 and L2 cells of adjacent cartridges and provide a parallel network for segmenting an object's edges. The focus of this research is to excite photoreceptors of the common housefly, Musca domestica, with different visual patterns. The electrical response of monopolar cells L1, L2, and L4 will be recorded using intracellular recording techniques. Signal analysis will determine the neurocircuitry to detect and segment images.
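    The contrast enhancement attributed to L1 and L2 cells can be caricatured with one-dimensional lateral inhibition across neighboring cartridges. This is a speculative toy, not the recorded fly circuitry: the subtractive form and the inhibition weight k are assumptions chosen only to show why such a stage emphasizes edges:

    ```python
    def contrast_enhance(signal, k=0.5):
        """Toy lateral inhibition: each 'cartridge' output is its own input
        minus a fraction k of its neighbours' mean, so uniform regions are
        suppressed and transitions (edges) produce over- and undershoots."""
        out = []
        for i, s in enumerate(signal):
            left = signal[i - 1] if i > 0 else s
            right = signal[i + 1] if i < len(signal) - 1 else s
            out.append(s - k * (left + right) / 2.0)
        return out

    # A luminance step: the enhanced response dips just before the edge and
    # overshoots just after it, marking the boundary for later segmentation.
    step = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
    enhanced = contrast_enhance(step)
    ```

    In a machine-vision pipeline this kind of cheap, local preprocessing is attractive for exactly the reason the abstract gives: it reduces the information passed downstream while preserving the edges needed for segmentation.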

  9. The vestibulo-ocular reflex in fourth nerve palsy: deficits and adaptation.

    PubMed

    Wong, Agnes M F; Sharpe, James A; Tweed, Douglas

    2002-08-01

    The effects of fourth nerve palsy on the vestibulo-ocular reflex (VOR) had not been systematically investigated. We used the magnetic scleral search coil technique to study the VOR in patients with unilateral fourth nerve palsy during sinusoidal head rotations in yaw, pitch and roll at different frequencies. In darkness, VOR gains are reduced during incyclotorsion, depression and abduction of the paretic eye, as anticipated from paresis of the superior oblique muscle. VOR gains during excyclotorsion, elevation and adduction of the paretic eye are also reduced, whereas gains in the non-paretic eye remain normal, indicating a selective adjustment of innervation to the paretic eye. In light, torsional visually enhanced VOR (VVOR) gains in the paretic eye remain reduced; however, visual input increases vertical and horizontal VVOR gains to normal in the paretic eye, without a conjugate increase in VVOR gains in the non-paretic eye, providing further evidence of selective adaptation in the paretic eye. Motions of the eyes after fourth nerve palsy exemplify monocular adaptation of the VOR, in response to peripheral neuromuscular deficits.

  10. Sharpening vision by adapting to flicker.

    PubMed

    Arnold, Derek H; Williams, Jeremy D; Phipps, Natasha E; Goodale, Melvyn A

    2016-11-01

    Human vision is surprisingly malleable. A static stimulus can seem to move after prolonged exposure to movement (the motion aftereffect), and exposure to tilted lines can make vertical lines seem oppositely tilted (the tilt aftereffect). The paradigm used to induce such distortions (adaptation) can provide powerful insights into the computations underlying human visual experience. Previously spatial form and stimulus dynamics were thought to be encoded independently, but here we show that adaptation to stimulus dynamics can sharpen form perception. We find that fast flicker adaptation (FFAd) shifts the tuning of face perception to higher spatial frequencies, enhances the acuity of spatial vision, allowing people to localize inputs with greater precision and to read finer-scaled text, and it selectively reduces sensitivity to coarse-scale form signals. These findings are consistent with two interrelated influences: FFAd reduces the responsiveness of magnocellular neurons (which are important for encoding dynamics, but can have poor spatial resolution), and magnocellular responses contribute coarse spatial scale information when the visual system synthesizes form signals. Consequently, when magnocellular responses are mitigated via FFAd, human form perception is transiently sharpened because "blur" signals are mitigated.

  11. Sharpening vision by adapting to flicker

    PubMed Central

    Arnold, Derek H.; Williams, Jeremy D.; Phipps, Natasha E.; Goodale, Melvyn A.

    2016-01-01

    Human vision is surprisingly malleable. A static stimulus can seem to move after prolonged exposure to movement (the motion aftereffect), and exposure to tilted lines can make vertical lines seem oppositely tilted (the tilt aftereffect). The paradigm used to induce such distortions (adaptation) can provide powerful insights into the computations underlying human visual experience. Previously spatial form and stimulus dynamics were thought to be encoded independently, but here we show that adaptation to stimulus dynamics can sharpen form perception. We find that fast flicker adaptation (FFAd) shifts the tuning of face perception to higher spatial frequencies, enhances the acuity of spatial vision—allowing people to localize inputs with greater precision and to read finer scaled text, and it selectively reduces sensitivity to coarse-scale form signals. These findings are consistent with two interrelated influences: FFAd reduces the responsiveness of magnocellular neurons (which are important for encoding dynamics, but can have poor spatial resolution), and magnocellular responses contribute coarse spatial scale information when the visual system synthesizes form signals. Consequently, when magnocellular responses are mitigated via FFAd, human form perception is transiently sharpened because “blur” signals are mitigated. PMID:27791115

  12. Recurrent Circuitry for Balancing Sleep Need and Sleep.

    PubMed

    Donlea, Jeffrey M; Pimentel, Diogo; Talbot, Clifford B; Kempf, Anissa; Omoto, Jaison J; Hartenstein, Volker; Miesenböck, Gero

    2018-01-17

    Sleep-promoting neurons in the dorsal fan-shaped body (dFB) of Drosophila are integral to sleep homeostasis, but how these cells impose sleep on the organism is unknown. We report that dFB neurons communicate via inhibitory transmitters, including allatostatin-A (AstA), with interneurons connecting the superior arch with the ellipsoid body of the central complex. These "helicon cells" express the galanin receptor homolog AstA-R1, respond to visual input, gate locomotion, and are inhibited by AstA, suggesting that dFB neurons promote rest by suppressing visually guided movement. Sleep changes caused by enhanced or diminished allatostatinergic transmission from dFB neurons and by inhibition or optogenetic stimulation of helicon cells support this notion. Helicon cells provide excitation to R2 neurons of the ellipsoid body, whose activity-dependent plasticity signals rising sleep pressure to the dFB. By virtue of this autoregulatory loop, dFB-mediated inhibition interrupts processes that incur a sleep debt, allowing restorative sleep to rebalance the books. VIDEO ABSTRACT. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Visual Detection Under Uncertainty Operates Via an Early Static, Not Late Dynamic, Non-Linearity

    PubMed Central

    Neri, Peter

    2010-01-01

    Signals in the environment are rarely specified exactly: our visual system may know what to look for (e.g., a specific face), but not its exact configuration (e.g., where in the room, or in what orientation). Uncertainty, and the ability to deal with it, is a fundamental aspect of visual processing. The MAX model is the current gold standard for describing how human vision handles uncertainty: of all possible configurations for the signal, the observer chooses the one corresponding to the template associated with the largest response. We propose an alternative model in which the MAX operation, which is a dynamic non-linearity (depends on multiple inputs from several stimulus locations) and happens after the input stimulus has been matched to the possible templates, is replaced by an early static non-linearity (depends only on one input corresponding to one stimulus location) which is applied before template matching. By exploiting an integrated set of analytical and experimental tools, we show that this model is able to account for a number of empirical observations otherwise unaccounted for by the MAX model, and is more robust with respect to the realistic limitations imposed by the available neural hardware. We then discuss how these results, currently restricted to a simple visual detection task, may extend to a wider range of problems in sensory processing. PMID:21212835
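    The two decision rules contrasted here lend themselves to a toy simulation. The sketch below (all parameters illustrative, not from the paper) implements a MAX observer, whose decision variable is a late dynamic non-linearity taken over the template responses at every candidate location, and the proposed alternative, which passes each location's input through an early static non-linearity before pooling linearly:

```python
import numpy as np

rng = np.random.default_rng(0)
n_loc, n_trials, signal = 4, 2000, 1.5   # illustrative values

# Template responses: Gaussian noise at every candidate location;
# in the "present" interval one random location also carries the signal.
absent = rng.normal(size=(n_trials, n_loc))
present = rng.normal(size=(n_trials, n_loc))
present[np.arange(n_trials), rng.integers(0, n_loc, n_trials)] += signal

def dv_max(r):
    # MAX model: late dynamic non-linearity; the decision variable is
    # the largest template response across all locations.
    return r.max(axis=1)

def dv_static(r, expo=3.0):
    # Alternative model: an early static (pointwise) non-linearity is
    # applied at each location, and the results pool linearly.
    return (np.sign(r) * np.abs(r) ** expo).sum(axis=1)

# 2AFC accuracy: fraction of trials where the signal interval wins.
acc_max = (dv_max(present) > dv_max(absent)).mean()
acc_static = (dv_static(present) > dv_static(absent)).mean()
```

    Both toy observers detect the uncertain signal well above chance; telling them apart empirically requires the reverse-correlation analyses the paper develops.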

  14. Cortical feedback signals generalise across different spatial frequencies of feedforward inputs.

    PubMed

    Revina, Yulia; Petro, Lucy S; Muckli, Lars

    2017-09-22

    Visual processing in cortex relies on feedback projections contextualising feedforward information flow. Primary visual cortex (V1) has small receptive fields and processes feedforward information at a fine-grained spatial scale, whereas higher visual areas have larger, spatially invariant receptive fields. Therefore, feedback could provide coarse information about the global scene structure or alternatively recover fine-grained structure by targeting small receptive fields in V1. We tested if feedback signals generalise across different spatial frequencies of feedforward inputs, or if they are tuned to the spatial scale of the visual scene. Using a partial occlusion paradigm, functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) we investigated whether feedback to V1 contains coarse or fine-grained information by manipulating the spatial frequency of the scene surround outside an occluded image portion. We show that feedback transmits both coarse and fine-grained information as it carries information about both low (LSF) and high spatial frequencies (HSF). Further, feedback signals containing LSF information are similar to feedback signals containing HSF information, even without a large overlap in spatial frequency bands of the HSF and LSF scenes. Lastly, we found that feedback carries similar information about the spatial frequency band across different scenes. We conclude that cortical feedback signals contain information which generalises across different spatial frequencies of feedforward inputs. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  15. From the optic tectum to the primary visual cortex: migration through evolution of the saliency map for exogenous attentional guidance.

    PubMed

    Zhaoping, Li

    2016-10-01

    Recent data have supported the hypothesis that, in primates, the primary visual cortex (V1) creates a saliency map from visual input. The exogenous guidance of attention is then realized by means of monosynaptic projections to the superior colliculus, which can select the most salient location as the target of a gaze shift. V1 is less prominent, or is even absent in lower vertebrates such as fish; whereas the superior colliculus, called optic tectum in lower vertebrates, also receives retinal input. I review the literature and propose that the saliency map has migrated from the tectum to V1 over evolution. In addition, attentional benefits manifested as cueing effects in humans should also be present in lower vertebrates. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. In Search of a Visual-cortical Describing Function: a Summary of Work in Progress

    NASA Technical Reports Server (NTRS)

    Junker, A. M.; Peio, K. J.

    1984-01-01

    The thrust of the present work is to explore the utility of using a sum of sinusoids (seven or more) to obtain an evoked response and, furthermore, to see if the response is sensitive to changes in cognitive processing. Within the field of automatic control system technology, a mathematical input/output relationship for a sinusoidally stimulated nonlinear system is defined as a describing function. Applying this technology, sum-of-sines inputs have been designed to yield describing functions for the visual-cortical response. What follows is a description of the method used to obtain visual-cortical describing functions. A number of measurement system redesigns were necessary to achieve the desired frequency resolution. Results that guided and came out of the redesigns are presented. Preliminary results of stimulus parameter effects (average intensity and depth of modulation) are also shown.
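    The describing-function measurement can be illustrated numerically. In the sketch below, the bin choices and the saturating test system are assumptions for demonstration, not the authors' stimulus design: a sum of seven sinusoids drives a memoryless non-linearity, and the describing function at each input frequency is the ratio of output to input Fourier coefficients.

```python
import numpy as np

fs, n = 256, 1024                      # sample rate (Hz) and record length: illustrative
t = np.arange(n) / fs
bins = np.array([3, 7, 13, 19, 29, 37, 53])   # seven FFT bins for the stimulus
freqs = bins * fs / n

u = sum(np.sin(2 * np.pi * f * t) for f in freqs)   # sum-of-sines input
y = np.tanh(0.5 * u)                   # stand-in saturating "system"

U, Y = np.fft.rfft(u), np.fft.rfft(y)
df = Y[bins] / U[bins]                 # describing function at the input frequencies
gain, phase = np.abs(df), np.angle(df)
```

    For this saturating system the estimated gain falls below the small-signal slope of 0.5 at every probed frequency, the kind of amplitude-dependent compression a describing function is designed to expose.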

  17. Salience in Second Language Acquisition: Physical Form, Learner Attention, and Instructional Focus

    PubMed Central

    Cintrón-Valentín, Myrna C.; Ellis, Nick C.

    2016-01-01

    We consider the role of physical form, prior experience, and form focused instruction (FFI) in adult language learning. (1) When presented with competing cues to interpretation, learners are more likely to attend to physically more salient cues in the input. (2) Learned attention is an associative learning phenomenon where prior-learned cues block those that are experienced later. (3) The low salience of morphosyntactic cues can be overcome by FFI, which leads learners to attend to cues which might otherwise be ignored. Experiment 1 used eye-tracking to investigate how language background influences learners’ attention to morphological cues, as well as the attentional processes whereby different types of FFI overcome low cue salience, learned attention and blocking. Chinese native speakers (no L1 verb-tense morphology) viewed Latin utterances combining lexical and morphological cues to temporality under control conditions (CCs) and three types of explicit FFI: verb grammar instruction (VG), verb salience with textual enhancement (VS), and verb pretraining (VP), and their use of these cues was assessed in a subsequent comprehension test. CC participants were significantly more sensitive to the adverbs than verb morphology. Instructed participants showed greater sensitivity to the verbs. These results reveal attentional processes whereby learners’ prior linguistic experience can shape their attention toward cues in the input, and whereby FFI helps learners overcome the long-term blocking of verb-tense morphology. Experiment 2 examined the role of modality of input presentation – aural or visual – in L1 English learners’ attentional focus on morphological cues and the effectiveness of different FFI manipulations. CC participants showed greater sensitivity toward the adverb cue. FFI was effective in increasing attention to verb-tense morphology; however, the processing of morphological cues was considerably more difficult under aural presentation. From visual exposure, the FFI conditions were broadly equivalent at tuning attention to the morphology, although VP resulted in balanced attention to both cues. The effectiveness of morphological salience-raising varied across modality: VS was effective under visual exposure, but not under aural exposure. From aural exposure, only VG was effective. These results demonstrate how salience in physical form, learner attention, and instructional focus all variously affect the success of L2 acquisition. PMID:27621715

  18. Molecular biology of myopia.

    PubMed

    Schaeffel, Frank; Simon, Perikles; Feldkaemper, Marita; Ohngemach, Sibylle; Williams, Robert W

    2003-09-01

    Experiments in animal models of myopia have emphasised the importance of visual input in emmetropisation but it is also evident that the development of human myopia is influenced to some degree by genetic factors. Molecular genetic approaches can help to identify both the genes involved in the control of ocular development and the potential targets for pharmacological intervention. This review covers a variety of techniques that are being used to study the molecular biology of myopia. In the first part, we describe techniques used to analyse visually induced changes in gene expression: Northern Blot, polymerase chain reaction (PCR) and real-time PCR to obtain semi-quantitative and quantitative measures of changes in transcription level of a known gene, differential display reverse transcription PCR (DD-RT-PCR) to search for new genes that are controlled by visual input, rapid amplification of 5' cDNA (5'-RACE) to extend the 5' end of sequences that are regulated by visual input, in situ hybridisation to localise the expression of a given gene in a tissue and oligonucleotide microarray assays to simultaneously test visually induced changes in thousands of transcripts in single experiments. In the second part, we describe techniques that are used to localise regions in the genome that contain genes that are involved in the control of eye growth and refractive errors in mice and humans. These include quantitative trait loci (QTL) mapping, exploiting experimental test crosses of mice and transmission disequilibrium tests (TDT) in humans to find chromosomal intervals that harbour genes involved in myopia development. We review several successful applications of this battery of techniques in myopia research.

  19. The Future of Access Technology for Blind and Visually Impaired People.

    ERIC Educational Resources Information Center

    Schreier, E. M.

    1990-01-01

    This article describes potential use of new technological products and services by blind/visually impaired people. Items discussed include computer input devices, public telephones, automatic teller machines, airline and rail arrival/departure displays, ticketing machines, information retrieval systems, order-entry terminals, optical character…

  20. Entanglement enhancement in multimode integrated circuits

    NASA Astrophysics Data System (ADS)

    Léger, Zacharie M.; Brodutch, Aharon; Helmy, Amr S.

    2018-06-01

    The faithful distribution of entanglement in continuous-variable systems is essential to many quantum information protocols. As such, entanglement distillation and enhancement schemes are a cornerstone of many applications. The photon subtraction scheme offers enhancement with a relatively simple setup and has been studied in various scenarios. Motivated by recent advances in integrated optics, particularly the ability to build stable multimode interferometers with squeezed input states, a multimodal extension to the enhancement via photon subtraction protocol is studied. States generated with multiple squeezed input states, rather than a single input source, are shown to be more sensitive to the enhancement protocol, leading to increased entanglement at the output. Numerical results show the gain in entanglement is not monotonic with the number of modes or the degree of squeezing in the additional modes. Consequently, the advantage due to having multiple squeezed input states can be maximized when the number of modes is still relatively small (e.g., four). The requirement for additional squeezing is within the current realm of implementation, making this scheme achievable with present technologies.

  1. Quantitative myocardial perfusion from static cardiac and dynamic arterial CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Branch, Kelley R.; Alessio, Adam M.

    2018-05-01

    Quantitative myocardial blood flow (MBF) estimation by dynamic contrast enhanced cardiac computed tomography (CT) requires multi-frame acquisition of contrast transit through the blood pool and myocardium to inform the arterial input and tissue response functions. Both the input and the tissue response functions for the entire myocardium are sampled with each acquisition. However, the long breath holds and frequent sampling can result in significant motion artifacts and relatively high radiation dose. To address these limitations, we propose and evaluate a new static cardiac and dynamic arterial (SCDA) quantitative MBF approach where (1) the input function is well sampled using either prediction from pre-scan timing bolus data or measured from dynamic thin slice ‘bolus tracking’ acquisitions, and (2) the whole-heart tissue response data is limited to one contrast enhanced CT acquisition. A perfusion model uses the dynamic arterial input function to generate a family of possible myocardial contrast enhancement curves corresponding to a range of MBF values. Combined with the timing of the single whole-heart acquisition, these curves generate a lookup table relating myocardial contrast enhancement to quantitative MBF. We tested the SCDA approach in 28 patients who underwent a full dynamic CT protocol both at rest and vasodilator stress conditions. Using the measured input function plus single (enhanced CT only) or double (enhanced and contrast-free baseline CTs) myocardial acquisitions yielded MBF estimates with root mean square (RMS) errors of 1.2 ml/min/g and 0.35 ml/min/g, and radiation dose reductions of 90% and 83%, respectively. The prediction of the input function based on timing bolus data and the static acquisition had an RMS error of 26.0% compared to the measured input function, which led to MBF estimation errors more than threefold higher than when using the measured input function. SCDA presents a new, simplified approach for quantitative perfusion imaging with an acquisition strategy offering substantial radiation dose and computational complexity savings over dynamic CT.
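    The lookup-table step can be sketched in a few lines. Everything below is an illustration under assumed models (a toy gamma-variate arterial input function and a one-compartment tissue model), not the paper's implementation: generate the family of tissue enhancement curves over a grid of candidate MBF values, sample each at the single whole-heart acquisition time, and invert the resulting monotonic table.

```python
import numpy as np

dt = 0.5                                    # s, sampling interval (assumed)
t = np.arange(0, 60, dt)
aif = (t / 8) ** 3 * np.exp(-t / 8)         # toy gamma-variate AIF, arbitrary units

def tissue_curve(mbf, v=0.3):
    # One-compartment model: tissue(t) = (MBF/60) * (AIF conv exp(-k t)),
    # with washout rate k = MBF / (60 v); v is an assumed distribution volume.
    k = mbf / (60.0 * v)
    resid = np.exp(-k * t)
    return (mbf / 60.0) * dt * np.convolve(aif, resid)[: len(t)]

t_acq = 20.0                                # single whole-heart acquisition time (s)
i_acq = int(t_acq / dt)

# Lookup table: enhancement at t_acq for a grid of candidate MBF values.
flows = np.linspace(0.3, 3.0, 200)          # ml/min/g, notional range
table = np.array([tissue_curve(f)[i_acq] for f in flows])

# Inversion: a measured enhancement maps back to an MBF estimate.
measured = tissue_curve(1.8)[i_acq]
mbf_est = np.interp(measured, table, flows)
```

    Because enhancement at an early acquisition time increases monotonically with flow in this model, np.interp recovers the simulated MBF from the single-time-point measurement.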

  2. Computer-aided diagnosis based on enhancement of degraded fundus photographs.

    PubMed

    Jin, Kai; Zhou, Mei; Wang, Shaoze; Lou, Lixia; Xu, Yufeng; Ye, Juan; Qian, Dahong

    2018-05-01

    Retinal imaging is an important and effective tool for detecting retinal diseases. However, degraded images caused by the aberrations of the eye can disguise lesions, so that a diseased eye can be mistakenly diagnosed as normal. In this work, we propose a new image enhancement method to improve the quality of degraded images. A new method is used to enhance degraded-quality fundus images. In this method, the image is converted from the input RGB colour space to LAB colour space and then each normalized component is enhanced using contrast-limited adaptive histogram equalization. Human visual system (HVS)-based fundus image quality assessment, combined with diagnosis by experts, is used to evaluate the enhancement. The study included 191 degraded-quality fundus photographs of 143 subjects with optic media opacity. Objective quality assessment of image enhancement (range: 0-1) indicated that our method improved colour retinal image quality from an average of 0.0773 (variance 0.0801) to an average of 0.3973 (variance 0.0756). Following enhancement, areas under the curve (AUC) were 0.996 for the glaucoma classifier, 0.989 for the diabetic retinopathy (DR) classifier, 0.975 for the age-related macular degeneration (AMD) classifier and 0.979 for the other retinal diseases classifier. The relatively simple method for enhancing degraded-quality fundus images achieves superior image enhancement, as demonstrated in a qualitative HVS-based image quality assessment. This retinal image enhancement may, therefore, be employed to assist ophthalmologists in more efficient screening of retinal diseases and the development of computer-aided diagnosis. © 2017 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
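    The core equalization step is compact enough to sketch. The snippet below is a simplified, single-channel, global contrast-limited histogram equalization, not the paper's full pipeline (which converts RGB to LAB and applies tiled CLAHE per component): clip the histogram at a limit, redistribute the excess uniformly, and remap intensities through the resulting CDF.

```python
import numpy as np

def clip_limited_equalize(channel, clip_limit=0.01):
    """Global contrast-limited histogram equalization of a uint8 channel.

    clip_limit caps each histogram bin at this fraction of the pixel
    count; the clipped excess is spread uniformly over all 256 bins.
    (Simplified sketch; real CLAHE additionally tiles the image and
    interpolates between per-tile mappings.)
    """
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    limit = clip_limit * channel.size
    excess = np.clip(hist - limit, 0.0, None).sum()
    hist = np.minimum(hist, limit) + excess / 256.0
    cdf = np.cumsum(hist)
    lut = np.round(255.0 * (cdf - cdf[0]) / (cdf[-1] - cdf[0])).astype(np.uint8)
    return lut[channel]

# A synthetic low-contrast image: all values squeezed into [100, 150].
rng = np.random.default_rng(1)
img = rng.integers(100, 151, size=(64, 64)).astype(np.uint8)
out = clip_limited_equalize(img)
```

    Applied to the low-contrast test image, the mapping stretches the occupied intensity range considerably while the clip limit keeps any single bin from dominating the transfer curve.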

  3. Statistical learning and auditory processing in children with music training: An ERP study.

    PubMed

    Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne

    2017-07-01

    The question whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without music training across a range of auditory and cognitive measures, including the ability to implicitly detect statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9 and 11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. Our data suggest that individual differences in musical skills are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
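    The statistical-learning measure in such studies rests on transitional probabilities: within an embedded triplet, each element predicts the next with high probability, while transitions across triplet boundaries are unpredictable. A minimal sketch with made-up syllables (not the study's stimuli):

```python
import numpy as np
from collections import defaultdict

triplets = [("tu", "pi", "ro"), ("go", "la", "bu"), ("da", "ko", "ti")]
rng = np.random.default_rng(2)

# Concatenate 300 randomly chosen triplets into a continuous stream.
stream = [s for _ in range(300) for s in triplets[rng.integers(0, 3)]]

# Count first-order transitions and estimate transitional probabilities.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(stream, stream[1:]):
    counts[a][b] += 1

def tp(a, b):
    total = sum(counts[a].values())
    return counts[a][b] / total if total else 0.0

within = tp("tu", "pi")   # within-triplet transition: deterministic here
across = tp("ro", "go")   # across a triplet boundary: roughly 1 in 3
```

    A learner (or model) that segments the stream has internalized exactly this asymmetry between within-word and between-word transitional probabilities.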

  4. Conditions for the Effectiveness of Multiple Visual Representations in Enhancing STEM Learning

    ERIC Educational Resources Information Center

    Rau, Martina A.

    2017-01-01

    Visual representations play a critical role in enhancing science, technology, engineering, and mathematics (STEM) learning. Educational psychology research shows that adding visual representations to text can enhance students' learning of content knowledge, compared to text-only. But should students learn with a single type of visual…

  5. Contextual modulation of primary visual cortex by auditory signals.

    PubMed

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame.This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  6. Contextual modulation of primary visual cortex by auditory signals

    PubMed Central

    Paton, A. T.

    2017-01-01

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044015

  7. Unconstrained face detection and recognition based on RGB-D camera for the visually impaired

    NASA Astrophysics Data System (ADS)

    Zhao, Xiangdong; Wang, Kaiwei; Yang, Kailun; Hu, Weijian

    2017-02-01

    It is highly important for visually impaired people (VIP) to be aware of the human beings around them, so correctly recognizing people in VIP-assisting apparatus provides great convenience. However, in classical face recognition technology, the faces used in training and prediction are usually frontal, and acquiring face images requires subjects to get close to the camera so that a frontal face and adequate illumination are guaranteed. Meanwhile, face labels are defined manually rather than automatically; most of the time, labels belonging to different classes need to be input one by one. These constraints hinder assistive applications for VIP in practice. In this article, a face recognition system for unconstrained environments is proposed. Specifically, it does not require the frontal pose or uniform illumination demanded by previous algorithms. The contributions of this work lie in three aspects. First, real-time frontal-face synthesis is implemented, and the synthesized frontal faces help to increase the recognition rate, as the experimental results confirm. Secondly, an RGB-D camera plays a significant role in the system: both color and depth information are utilized to achieve real-time face tracking, which not only raises the detection rate but also makes it possible to label faces automatically. Finally, neural networks are used to train the face recognition system, with Principal Component Analysis (PCA) applied to pre-refine the input data. The system is expected to help VIP become familiar with others and to recognize people once it has been sufficiently trained.

  8. MODFLOW-2000, the U.S. Geological Survey modular ground-water model : user guide to the LMT6 package, the linkage with MT3DMS for multi-species mass transport modeling

    USGS Publications Warehouse

    Zheng, Chunmiao; Hill, Mary Catherine; Hsieh, Paul A.

    2001-01-01

    MODFLOW-2000, the newest version of MODFLOW, is a computer program that numerically solves the three-dimensional ground-water flow equation for a porous medium using a finite-difference method. MT3DMS, the successor to MT3D, is a computer program for modeling multi-species solute transport in three-dimensional ground-water systems using multiple solution techniques, including the finite-difference method, the method of characteristics (MOC), and the total-variation-diminishing (TVD) method. This report documents a new version of the Link-MT3DMS Package, which enables MODFLOW-2000 to produce the information needed by MT3DMS, and also discusses new visualization software for MT3DMS. Unlike the Link-MT3D Packages that coordinated previous versions of MODFLOW and MT3D, the new Link-MT3DMS Package requires an input file that, among other things, provides enhanced support for additional MODFLOW sink/source packages and allows list-directed (free) format for the flow-transport link file produced by the flow model. The report contains four parts: (a) documentation of the Link-MT3DMS Package Version 6 for MODFLOW-2000; (b) discussion of several issues related to simulation setup and input data preparation for running MT3DMS with MODFLOW-2000; (c) description of two test example problems, with comparison to results obtained using another MODFLOW-based transport program; and (d) overview of post-simulation visualization and animation using the U.S. Geological Survey's Model Viewer.

  9. Visual Memories Bypass Normalization.

    PubMed

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores: neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
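    The canonical computation at issue is divisive normalization, in which each unit's driven response is divided by the pooled activity of its neighbours. A minimal sketch of the textbook form (generic notation and parameter values, not taken from this paper):

```python
import numpy as np

def normalize(drives, sigma=1.0, n=2.0):
    # Divisive normalization: R_i = d_i^n / (sigma^n + sum_j d_j^n)
    d = np.asarray(drives, dtype=float) ** n
    return d / (sigma ** n + d.sum())

# The signature of normalization: adding a second stimulus suppresses
# the response to the first, despite identical drive to that unit.
alone = normalize([8.0])[0]        # 64/65, about 0.985
paired = normalize([8.0, 8.0])[0]  # 64/129, about 0.496
```

    In the study's terms, perceptual representations showed this suppression when contrasts were pitted against each other, whereas working memory representations did not.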

  10. Visual Memories Bypass Normalization

    PubMed Central

    Bloem, Ilona M.; Watanabe, Yurika L.; Kibbe, Melissa M.; Ling, Sam

    2018-01-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores—neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation. PMID:29596038

  11. Automated objective characterization of visual field defects in 3D

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor)

    2006-01-01

    A method and apparatus for electronically performing a visual field test for a patient. A visual field test pattern is displayed to the patient on an electronic display device and the patient's responses to the visual field test pattern are recorded. A visual field representation is generated from the patient's responses. The visual field representation is then used as an input into a variety of automated diagnostic processes. In one process, the visual field representation is used to generate a statistical description of the rapidity of change of a patient's visual field at the boundary of a visual field defect. In another process, the area of a visual field defect is calculated using the visual field representation. In another process, the visual field representation is used to generate a statistical description of the volume of a patient's visual field defect.
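    Two of the automated measures described (the defect area, and the rapidity of change of sensitivity at the defect boundary) are easy to compute from a gridded visual field representation. The sketch below assumes conventions not specified in the patent abstract: a 2-D sensitivity grid in dB, a fixed defect threshold, and square test-point spacing.

```python
import numpy as np

def defect_stats(field_db, threshold=20.0, spacing_deg=6.0):
    """Defect area and mean boundary gradient of a visual field.

    field_db: 2-D array of sensitivities in dB; points below `threshold`
    count as defective; spacing_deg is the test-grid spacing in degrees.
    (All of these conventions are assumptions for illustration.)
    """
    defect = field_db < threshold
    area = float(defect.sum()) * spacing_deg ** 2     # deg^2
    gy, gx = np.gradient(field_db, spacing_deg)       # dB per degree
    grad = np.hypot(gx, gy)
    # Boundary points: defective locations with a non-defective neighbour.
    pad = np.pad(defect, 1, constant_values=False)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1]
                & pad[1:-1, :-2] & pad[1:-1, 2:])
    boundary = defect & ~interior
    slope = float(grad[boundary].mean()) if boundary.any() else 0.0
    return area, slope

# Synthetic field: uniform 30 dB with a 3x3 absolute defect.
field = np.full((8, 8), 30.0)
field[2:5, 2:5] = 5.0
area, slope = defect_stats(field)
```

    The statistical descriptions in the patent would then summarize the boundary gradient (rapidity of change) alongside the area and, with a depth dimension, the defect volume.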

  12. Supporting Knowledge Integration in Chemistry with a Visualization-Enhanced Inquiry Unit

    NASA Astrophysics Data System (ADS)

    Chiu, Jennifer L.; Linn, Marcia C.

    2014-02-01

    This paper describes the design and impact of an inquiry-oriented online curriculum that takes advantage of dynamic molecular visualizations to improve students' understanding of chemical reactions. The visualization-enhanced unit uses research-based guidelines following the knowledge integration framework to help students develop coherent understanding by connecting and refining existing and new ideas. The inquiry unit supports students to develop connections among molecular, observable, and symbolic representations of chemical reactions. Design-based research included a pilot study, a study comparing the visualization-enhanced inquiry unit to typical instruction, and a course-long comparison study featuring a delayed posttest. Students participating in the visualization-enhanced unit outperformed students receiving typical instruction and further consolidated their understanding on the delayed posttest. Students who used the visualization-enhanced unit formed more connections among concepts than students with typical textbook and lecture-based instruction. Item analysis revealed the types of connections students made when studying the curriculum and suggested how these connections enabled students to consolidate their understanding as they continued in the chemistry course. Results demonstrate that visualization-enhanced inquiry designed for knowledge integration can improve connections between observable and atomic-level phenomena and serve students well as they study subsequent topics in chemistry.

  13. Sensory gain control (amplification) as a mechanism of selective attention: electrophysiological and neuroimaging evidence.

    PubMed Central

    Hillyard, S A; Vogel, E K; Luck, S J

    1998-01-01

    Both physiological and behavioral studies have suggested that stimulus-driven neural activity in the sensory pathways can be modulated in amplitude during selective attention. Recordings of event-related brain potentials indicate that such sensory gain control or amplification processes play an important role in visual-spatial attention. Combined event-related brain potential and neuroimaging experiments provide strong evidence that attentional gain control operates at an early stage of visual processing in extrastriate cortical areas. These data support early selection theories of attention and provide a basis for distinguishing between separate mechanisms of attentional suppression (of unattended inputs) and attentional facilitation (of attended inputs). PMID:9770220

  14. Transformation priming helps to disambiguate sudden changes of sensory inputs.

    PubMed

    Pastukhov, Alexander; Vivian-Griffiths, Solveiga; Braun, Jochen

    2015-11-01

    Retinal input is riddled with abrupt transients due to self-motion, changes in illumination, object-motion, etc. Our visual system must correctly interpret each of these changes to keep visual perception consistent and sensitive. This poses an enormous challenge, as many transients are highly ambiguous in that they are consistent with many alternative physical transformations. Here we investigated inter-trial effects in three situations with sudden and ambiguous transients, each presenting two alternative appearances (rotation-reversing structure-from-motion, polarity-reversing shape-from-shading, and streaming-bouncing object collisions). In every situation, we observed priming of transformations as the outcome perceived in earlier trials tended to repeat in subsequent trials and this repetition was contingent on perceptual experience. The observed priming was specific to transformations and did not originate in priming of perceptual states preceding a transient. Moreover, transformation priming was independent of attention and specific to low level stimulus attributes. In summary, we show how "transformation priors" and experience-driven updating of such priors help to disambiguate sudden changes of sensory inputs. We discuss how dynamic transformation priors can be instantiated as "transition energies" in an "energy landscape" model of visual perception. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Airflow and optic flow mediate antennal positioning in flying honeybees

    PubMed Central

    Roy Khurana, Taruni; Sane, Sanjay P

    2016-01-01

    To maintain their speeds during navigation, insects rely on feedback from their visual and mechanosensory modalities. Although optic flow plays an essential role in speed determination, it is less reliable under conditions of low light or sparse landmarks. Under such conditions, insects rely on feedback from antennal mechanosensors but it is not clear how these inputs combine to elicit flight-related antennal behaviours. We here show that antennal movements of the honeybee, Apis mellifera, are governed by combined visual and antennal mechanosensory inputs. Frontal airflow, as experienced during forward flight, causes antennae to actively move forward as a sigmoidal function of absolute airspeed values. However, corresponding front-to-back optic flow causes antennae to move backward, as a linear function of relative optic flow, opposite the airspeed response. When combined, these inputs maintain antennal position in a state of dynamic equilibrium. DOI: http://dx.doi.org/10.7554/eLife.14449.001 PMID:27097104

  16. Synaptology of physiologically identified ganglion cells in the cat retina: a comparison of retinal X- and Y-cells.

    PubMed

    Weber, A J; Stanford, L R

    1994-05-15

    It has long been known that a number of functionally different types of ganglion cells exist in the cat retina, and that each responds differently to visual stimulation. To determine whether the characteristic response properties of different retinal ganglion cell types might reflect differences in the number and distribution of their bipolar and amacrine cell inputs, we compared the percentages and distributions of the synaptic inputs from bipolar and amacrine cells to the entire dendritic arbors of physiologically characterized retinal X- and Y-cells. Sixty-two percent of the synaptic input to the Y-cell was from amacrine cell terminals, while the X-cells received approximately equal amounts of input from amacrine and bipolar cells. We found no significant difference in the distributions of bipolar or amacrine cell inputs to X- and Y-cells, or ON-center and OFF-center cells, either as a function of dendritic branch order or distance from the origin of the dendritic arbor. While, on the basis of these data, we cannot exclude the possibility that the difference in the proportion of bipolar and amacrine cell input contributes to the functional differences between X- and Y-cells, the magnitude of this difference, and the similarity in the distributions of the input from the two afferent cell types, suggest that mechanisms other than a simple predominance of input from amacrine or bipolar cells underlie the differences in their response properties. More likely, perhaps, is that the specific response features of X- and Y-cells originate in differences in the visual responses of the bipolar and amacrine cells that provide their input, or in the complex synaptic arrangements found among amacrine and bipolar cell terminals and the dendrites of specific types of retinal ganglion cells.

  17. The Role of Left Occipitotemporal Cortex in Reading: Reconciling Stimulus, Task, and Lexicality Effects

    PubMed Central

    Humphries, Colin; Desai, Rutvik H.; Seidenberg, Mark S.; Osmon, David C.; Stengel, Ben C.; Binder, Jeffrey R.

    2013-01-01

    Although the left posterior occipitotemporal sulcus (pOTS) has been called a visual word form area, debate persists over the selectivity of this region for reading relative to general nonorthographic visual object processing. We used high-resolution functional magnetic resonance imaging to study left pOTS responses to combinatorial orthographic and object shape information. Participants performed naming and visual discrimination tasks designed to encourage or suppress phonological encoding. During the naming task, all participants showed subregions within left pOTS that were more sensitive to combinatorial orthographic information than to object information. This difference disappeared, however, when phonological processing demands were removed. Responses were stronger to pseudowords than to words, but this effect also disappeared when phonological processing demands were removed. Subregions within the left pOTS are preferentially activated when visual input must be mapped to a phonological representation (i.e., a name) and particularly when component parts of the visual input must be mapped to corresponding phonological elements (consonant or vowel phonemes). Results indicate a specialized role for subregions within the left pOTS in the isomorphic mapping of familiar combinatorial visual patterns to phonological forms. This process distinguishes reading from picture naming and accounts for a wide range of previously reported stimulus and task effects in left pOTS. PMID:22505661

  18. Multisensory integration and the concert experience: An overview of how visual stimuli can affect what we hear

    NASA Astrophysics Data System (ADS)

    Hyde, Jerald R.

    2004-05-01

    It is clear to those who "listen" to concert halls and evaluate their degree of acoustical success that it is quite difficult to separate the acoustical response at a given seat from the multi-modal perception of the whole event. Objective concert hall data have been collected for the purpose of finding a link with their related subjective evaluation and ultimately with the architectural correlates which produce the sound field. This exercise, while important, tends to miss the point that a concert or opera event engages all the senses, of which the sound field and visual stimuli are both major contributors to the experience. Objective acoustical factors point to visual input as being significant in the perception of "acoustical intimacy" and in the perception of loudness versus distance in large halls. This paper will review the evidence of visual input as a factor in what we "hear" and introduce concepts of perceptual constancy, distance perception, static and dynamic visual stimuli, and the general process of the psychology of the integrated experience. A survey of acousticians on their opinions about the auditory-visual aspects of the concert hall experience will be presented. [Work supported in part by the Veneklasen Research Foundation and Veneklasen Associates.]

  19. Characterization of Visual Scanning Patterns in Air Traffic Control

    PubMed Central

    McClung, Sarah N.; Kang, Ziho

    2016-01-01

    Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process. PMID:27239190
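
    The abstract does not spell out its filtering concepts, but one plausible first step toward "simpler and more manageable forms" is collapsing runs of consecutive fixations on the same aircraft. A hedged sketch of that idea (an assumption, not the paper's exact procedure):

```python
def collapse_refixations(scanpath):
    """Collapse runs of consecutive fixations on the same target.

    scanpath: sequence of target labels in fixation order, e.g. "AABBA".
    Returns the simplified scanpath with immediate repeats removed.
    One plausible filtering step; the paper's filtering intensities
    are more elaborate.
    """
    out = []
    for target in scanpath:
        if not out or out[-1] != target:
            out.append(target)
    return "".join(out)

simplified = collapse_refixations("AAABBAACC")  # -> "ABAC"
```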

  20. Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.

    PubMed

    Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A

    2004-11-09

    Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. We tested the hypotheses that FXS individuals 1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and 2) have deficits in integrative neocortical mechanisms necessary for perception of complex stimuli. Psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared to developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near-normal perception for first-order form stimuli but not second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic function of the parietal lobe are affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli with normal processing for simple (i.e., first-order) form stimuli suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in neurologic mechanisms necessary for integrating all early visual input.

  2. A Neurobehavioral Model of Flexible Spatial Language Behaviors

    ERIC Educational Resources Information Center

    Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schoner, Gregor

    2012-01-01

    We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-world visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that…

  3. Proprioceptive versus Visual Control in Autistic Children.

    ERIC Educational Resources Information Center

    Masterton, B. A.; Biederman, G. B.

    1983-01-01

    The autistic children's presumed preference for proximal over distal sensory input was studied by requiring that "autistic," retarded, and "normal" children (7-15 years old) adapt to lateral displacement of the visual field. Only autistic Ss demonstrated transfer of adaptation to the nonadapted hand, indicating reliance on proprioception rather…

  4. Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization

    PubMed Central

    Chiu, Chung-Cheng; Ting, Chih-Chung

    2016-01-01

    Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of details in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It refers to a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on the adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. It also alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412
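
    CegaHE adjusts the gaps between adjacent output gray levels of the HE mapping; the abstract does not give its adjustment equation, so the sketch below shows only the standard HE baseline that CegaHE (and VCEA) start from:

```python
import numpy as np

def histogram_equalize(img):
    """Plain histogram equalization for an 8-bit grayscale image.

    This is the unmodified baseline; CegaHE's contribution is to adjust
    the gaps this mapping creates between adjacent gray values.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)[0][0]]  # CDF value of darkest occupied bin
    n = img.size
    # Classic HE mapping: spread the cumulative histogram over 0..255.
    lut = np.clip(np.round((cdf - cdf_min) / max(n - cdf_min, 1) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A nearly flat image gets stretched to the full gray range, which is
# exactly the over-enhancement behavior the paper sets out to tame.
flat = np.full((4, 4), 100, dtype=np.uint8)
flat[0, 0] = 101
out = histogram_equalize(flat)
```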

  5. The Effect of Input-Based Instruction Type on the Acquisition of Spanish Accusative Clitics

    ERIC Educational Resources Information Center

    White, Justin

    2015-01-01

    The purpose of this paper is to compare structured input (SI) with other input-based instructional treatments. The input-based instructional types include: input flood (IF), text enhancement (TE), SI activities, and focused input (FI; SI without implicit negative feedback). Participants included 145 adult learners enrolled in an intermediate…

  6. Modeling and Analysis of Information Product Maps

    ERIC Educational Resources Information Center

    Heien, Christopher Harris

    2012-01-01

    Information Product Maps are visual diagrams used to represent the inputs, processing, and outputs of data within an Information Manufacturing System. A data unit, drawn as an edge, symbolizes a grouping of raw data as it travels through this system. Processes, drawn as vertices, transform each data unit input into various forms prior to delivery…

  7. A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot-Lau grating interferometry.

    PubMed

    Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Baronowski, Heidrun; Kottler, Christian

    2014-03-21

    This paper introduces a new image denoising, fusion and enhancement framework for combining and optimal visualization of x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot-Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image to obtain a final image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for the processing of AC, DPC and DFC allows the most relevant features of all three images to be combined in one image while reducing the noise and enhancing adaptively the relevant image features. The newly developed framework may be used in technical and medical applications.
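
    Step (i) of the framework, adaptive Wiener denoising, can be sketched with locally estimated statistics as below. The window size and global noise-variance estimate are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_wiener(img, size=3, noise_var=None):
    """Adaptive (locally estimated) Wiener filter.

    Smooths strongly where local variance is near the noise level and
    preserves structure where local variance is high.
    """
    img = img.astype(float)
    local_mean = uniform_filter(img, size)
    local_sqr = uniform_filter(img * img, size)
    local_var = np.maximum(local_sqr - local_mean ** 2, 0.0)
    if noise_var is None:
        noise_var = local_var.mean()  # crude global noise estimate (assumed)
    # Shrink each pixel toward its local mean in proportion to local SNR.
    gain = np.maximum(local_var - noise_var, 0.0) / np.maximum(local_var, 1e-12)
    return local_mean + gain * (img - local_mean)
```

    The fusion steps (ii) and (iii) would then combine the three denoised channels in a shift-invariant wavelet domain; that part is omitted here.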

  8. Improving Balance Function Using Low Levels of Electrical Stimulation of the Balance Organs

    NASA Technical Reports Server (NTRS)

    Bloomberg, Jacob; Reschke, Millard; Mulavara, Ajitkumar; Wood, Scott; Serrador, Jorge; Fiedler, Matthew; Kofman, Igor; Peters, Brian T.; Cohen, Helen

    2012-01-01

    Crewmembers returning from long-duration space flight face significant challenges due to the microgravity-induced inappropriate adaptations in balance/ sensorimotor function. The Neuroscience Laboratory at JSC is developing a method based on stochastic resonance to enhance the brain's ability to detect signals from the balance organs of the inner ear and use them for rapid improvement in balance skill, especially when combined with balance training exercises. This method involves a stimulus delivery system that is wearable/portable, providing imperceptible electrical stimulation to the balance organs of the human body. Stochastic resonance (SR) is a phenomenon whereby the response of a nonlinear system to a weak periodic input signal is optimized by the presence of a particular non-zero level of noise. This phenomenon of SR is based on the concept of maximizing the flow of information through a system by a non-zero level of noise. Application of imperceptible SR noise coupled with sensory input in humans has been shown to improve motor, cardiovascular, visual, hearing, and balance functions. SR increases contrast sensitivity and luminance detection; lowers the absolute threshold for tone detection in normal hearing individuals; improves homeostatic function in the human blood pressure regulatory system; improves noise-enhanced muscle spindle function; and improves detection of weak tactile stimuli using mechanical or electrical stimulation. SR noise has been shown to improve postural control when applied as mechanical noise to the soles of the feet, or when applied as electrical noise at the knee and to the back muscles.
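
    The core SR effect (noise letting a subthreshold signal cross a detection threshold) can be shown in a toy simulation. All parameters here are illustrative; this is not a model of the vestibular stimulation system itself:

```python
import numpy as np

def threshold_detections(amplitude, noise_sd, threshold=1.0, seed=0, n=5000):
    """Count threshold crossings of a subthreshold periodic signal.

    With zero noise, a 0.8-amplitude sine never crosses the 1.0 threshold;
    a moderate dose of noise lets the signal's peaks through, which is the
    essence of stochastic resonance.
    """
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 20 * 2 * np.pi, n)      # 20 signal cycles
    signal = amplitude * np.sin(t)
    noisy = signal + noise_sd * rng.standard_normal(n)
    return int(np.count_nonzero(noisy > threshold))

silent = threshold_detections(0.8, 0.0)   # subthreshold alone: 0 crossings
boosted = threshold_detections(0.8, 0.3)  # noise reveals the signal
```

    Too much noise would eventually swamp the signal, which is why SR is tuned to a particular non-zero noise level.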

  9. Visual Contrast Enhancement Algorithm Based on Histogram Equalization

    PubMed Central

    Ting, Chih-Chung; Wu, Bing-Fei; Chung, Meng-Liang; Chiu, Chung-Cheng; Wu, Ya-Ching

    2015-01-01

    Image enhancement techniques primarily improve the contrast of an image to lend it a better appearance. One of the popular enhancement methods is histogram equalization (HE) because of its simplicity and effectiveness. However, it is rarely applied to consumer electronics products because it can cause excessive contrast enhancement and feature loss problems. These problems make the images processed by HE look unnatural and introduce unwanted artifacts in them. In this study, a visual contrast enhancement algorithm (VCEA) based on HE is proposed. VCEA considers the requirements of the human visual perception in order to address the drawbacks of HE. It effectively solves the excessive contrast enhancement problem by adjusting the spaces between two adjacent gray values of the HE histogram. In addition, VCEA reduces the effects of the feature loss problem by using the obtained spaces. Furthermore, VCEA enhances the detailed textures of an image to generate an enhanced image with better visual quality. Experimental results show that images obtained by applying VCEA have higher contrast and are more suited to human visual perception than those processed by HE and other HE-based methods. PMID:26184219

  10. Backward masked fearful faces enhance contralateral occipital cortical activity for visual targets within the spotlight of attention

    PubMed Central

    Reinke, Karen S.; LaMontagne, Pamela J.; Habib, Reza

    2011-01-01

    Spatial attention has been argued to be adaptive by enhancing the processing of visual stimuli within the ‘spotlight of attention’. We previously reported that crude threat cues (backward masked fearful faces) facilitate spatial attention through a network of brain regions consisting of the amygdala, anterior cingulate and contralateral visual cortex. However, results from previous functional magnetic resonance imaging (fMRI) dot-probe studies have been inconclusive regarding a fearful face-elicited contralateral modulation of visual targets. Here, we tested the hypothesis that the capture of spatial attention by crude threat cues would facilitate processing of subsequently presented visual stimuli within the masked fearful face-elicited ‘spotlight of attention’ in the contralateral visual cortex. Participants performed a backward masked fearful face dot-probe task while brain activity was measured with fMRI. Masked fearful face left visual field trials enhanced activity for spatially congruent targets in the right superior occipital gyrus, fusiform gyrus and lateral occipital complex, while masked fearful face right visual field trials enhanced activity in the left middle occipital gyrus. These data indicate that crude threat elicited spatial attention enhances the processing of subsequent visual stimuli in contralateral occipital cortex, which may occur by lowering neural activation thresholds in this retinotopic location. PMID:20702500

  11. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
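
    The look-up-table idea at the heart of the remapper can be sketched in a few lines: each output pixel stores the source coordinates it should be filled from. The mirror transform below is just a sample entry; the patent's collective (many-to-one) and interpolative (one-to-many) processors are not reproduced here:

```python
import numpy as np

def build_flip_lut(height, width):
    """A sample transformation LUT: horizontal mirror.

    The remapper stores such per-output-pixel (row, col) source
    coordinates; any operator-selectable warp fits the same table format.
    """
    rows, cols = np.indices((height, width))
    return rows, width - 1 - cols

def remap(img, lut):
    """Apply a precomputed LUT: output[y, x] = input[lut_y[y, x], lut_x[y, x]]."""
    src_rows, src_cols = lut
    return img[src_rows, src_cols]

img = np.arange(12).reshape(3, 4)
mirrored = remap(img, build_flip_lut(3, 4))  # left-right flipped copy
```

    Because the table is precomputed, applying it is one gather per frame, which is what makes video-rate remapping feasible.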

  12. Visual Predictions in the Orbitofrontal Cortex Rely on Associative Content

    PubMed Central

    Chaumon, Maximilien; Kveraga, Kestutis; Barrett, Lisa Feldman; Bar, Moshe

    2014-01-01

    Predicting upcoming events from incomplete information is an essential brain function. The orbitofrontal cortex (OFC) plays a critical role in this process by facilitating recognition of sensory inputs via predictive feedback to sensory cortices. In the visual domain, the OFC is engaged by low spatial frequency (LSF) and magnocellular-biased inputs, but beyond this, we know little about the information content required to activate it. Is the OFC automatically engaged to analyze any LSF information for meaning? Or is it engaged only when LSF information matches preexisting memory associations? We tested these hypotheses and show that only LSF information that could be linked to memory associations engages the OFC. Specifically, LSF stimuli activated the OFC in 2 distinct medial and lateral regions only if they resembled known visual objects. More identifiable objects increased activity in the medial OFC, known for its function in affective responses. Furthermore, these objects also increased the connectivity of the lateral OFC with the ventral visual cortex, a crucial region for object identification. At the interface between sensory, memory, and affective processing, the OFC thus appears to be attuned to the associative content of visual information and to play a central role in visuo-affective prediction. PMID:23771980

  13. Serial dependence promotes object stability during occlusion

    PubMed Central

    Liberman, Alina; Zhang, Kathy; Whitney, David

    2016-01-01

    Object identities somehow appear stable and continuous over time despite eye movements, disruptions in visibility, and constantly changing visual input. Recent results have demonstrated that the perception of orientation, numerosity, and facial identity is systematically biased (i.e., pulled) toward visual input from the recent past. The spatial region over which current orientations or face identities are pulled by previous orientations or identities, respectively, is known as the continuity field, which is temporally tuned over the past several seconds (Fischer & Whitney, 2014). This perceptual pull could contribute to the visual stability of objects over short time periods, but does it also address how perceptual stability occurs during visual discontinuities? Here, we tested whether the continuity field helps maintain perceived object identity during occlusion. Specifically, we found that the perception of an oriented Gabor that emerged from behind an occluder was significantly pulled toward the random (and unrelated) orientation of the Gabor that was seen entering the occluder. Importantly, this serial dependence was stronger for predictable, continuously moving trajectories, compared to unpredictable ones or static displacements. This result suggests that our visual system takes advantage of expectations about a stable world, helping to maintain perceived object continuity despite interrupted visibility. PMID:28006066

  14. Octopus vulgaris uses visual information to determine the location of its arm.

    PubMed

    Gutnick, Tamar; Byrne, Ruth A; Hochner, Binyamin; Kuba, Michael

    2011-03-22

    Octopuses are intelligent, soft-bodied animals with keen senses that perform reliably in a variety of visual and tactile learning tasks. However, researchers have found them disappointing in that they consistently fail in operant tasks that require them to combine central nervous system reward information with visual and peripheral knowledge of the location of their arms. Wells claimed that in order to filter and integrate an abundance of multisensory inputs that might inform the animal of the position of a single arm, octopuses would need an exceptional computing mechanism, and "There is no evidence that such a system exists in Octopus, or in any other soft bodied animal." Recent electrophysiological experiments, which found no clear somatotopic organization in the higher motor centers, support this claim. We developed a three-choice maze that required an octopus to use a single arm to reach a visually marked goal compartment. Using this operant task, we show for the first time that Octopus vulgaris is capable of guiding a single arm in a complex movement to a location. Thus, we claim that octopuses can combine peripheral arm location information with visual input to control goal-directed complex movements. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Embodied attention and word learning by toddlers

    PubMed Central

    Yu, Chen; Smith, Linda B.

    2013-01-01

    Many theories of early word learning begin with the uncertainty inherent to learning a word from its co-occurrence with a visual scene. However, the relevant visual scene for infant word learning is neither from the adult theorist’s view nor the mature partner’s view, but is rather from the learner’s personal view. Here we show that when 18-month-old infants interacted with objects in play with their parents, they created moments in which a single object was visually dominant. If parents named the object during these moments of bottom-up selectivity, later forced-choice tests showed that infants learned the name, but did not when naming occurred during a less visually selective moment. The momentary visual input for parents and toddlers was captured via head cameras placed low on each participant’s forehead as parents played with and named objects for their infant. Frame-by-frame analyses of the head camera images at and around naming moments were conducted to determine the visual properties at input that were associated with learning. The analyses indicated that learning occurred when bottom-up visual information was clean and uncluttered. The sensory-motor behaviors of infants and parents were also analyzed to determine how their actions on the objects may have created these optimal visual moments for learning. The results are discussed with respect to early word learning, embodied attention, and the social role of parents in early word learning. PMID:22878116

  16. Primary Visual Cortex as a Saliency Map: A Parameter-Free Prediction and Its Test by Behavioral Data

    PubMed Central

    Zhaoping, Li; Zhe, Li

    2015-01-01

    It has been hypothesized that neural activities in the primary visual cortex (V1) represent a saliency map of the visual field to exogenously guide attention. This hypothesis has so far provided only qualitative predictions and their confirmations. We report this hypothesis’ first quantitative prediction, derived without free parameters, and its confirmation by human behavioral data. The hypothesis provides a direct link between V1 neural responses to a visual location and the saliency of that location to guide attention exogenously. In a visual input containing many bars, one of them saliently different from all the other bars which are identical to each other, saliency at the singleton’s location can be measured by the shortness of the reaction time in a visual search for singletons. The hypothesis predicts quantitatively the whole distribution of the reaction times to find a singleton unique in color, orientation, and motion direction from the reaction times to find other types of singletons. The prediction matches human reaction time data. A requirement for this successful prediction is a data-motivated assumption that V1 lacks neurons tuned simultaneously to color, orientation, and motion direction of visual inputs. Since evidence suggests that extrastriate cortices do have such neurons, we discuss the possibility that the extrastriate cortices play no role in guiding exogenous attention so that they can be devoted to other functions like visual decoding and endogenous attention. PMID:26441341
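
    The paper's prediction combines single-feature reaction times without free parameters. One standard way to express such a prediction is a race model: the triple-feature singleton is found as soon as the fastest of the three feature signals wins. The resampling sketch below is an illustrative assumption, not the paper's exact procedure:

```python
import numpy as np

def predict_triple_rts(rt_color, rt_orient, rt_motion, n=10000, seed=0):
    """Predict RTs to a color+orientation+motion singleton from
    single-feature RT samples under a race model.

    Each predicted RT is the minimum of one independently drawn RT per
    feature, reflecting the assumption that whichever feature signal
    arrives first drives the saccade to the singleton.
    """
    rng = np.random.default_rng(seed)
    draws = np.stack([
        rng.choice(rt_color, n),
        rng.choice(rt_orient, n),
        rng.choice(rt_motion, n),
    ])
    return draws.min(axis=0)

# Toy single-feature RT samples (ms); real data would come from the search task.
pred = predict_triple_rts([400, 450, 500], [420, 480, 520], [390, 440, 600])
```

    Comparing such a predicted distribution against measured triple-singleton RTs is the kind of parameter-free test the paper reports.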

  17. Depth image enhancement using perceptual texture priors

    NASA Astrophysics Data System (ADS)

    Bang, Duhyeon; Shim, Hyunjung

    2015-03-01

    A depth camera is widely used in various applications because it provides a depth image of the scene in real time. However, due to limited power consumption, depth cameras suffer from severe noise and cannot provide high-quality 3D data. Although a smoothness prior is often employed to suppress the depth noise, it discards geometric details, degrading the distance resolution and hindering realism in 3D content. In this paper, we propose a perceptual depth image enhancement technique that automatically recovers the depth details of various textures, using a statistical framework inspired by the human mechanism of perceiving surface details through texture priors. We construct a database composed of high-quality normals. Based on recent studies in human visual perception (HVP), we select pattern density as the primary feature for classifying textures. Based on the classification results, we match and substitute the noisy input normals with high-quality normals from the database. As a result, our method provides a high-quality depth image that preserves surface details. We expect our work to be effective in enhancing the details of depth images from 3D sensors and in providing a high-fidelity virtual reality experience.

  18. "Visual" Cortex of Congenitally Blind Adults Responds to Syntactic Movement.

    PubMed

    Lane, Connor; Kanjlia, Shipra; Omaki, Akira; Bedny, Marina

    2015-09-16

    Human cortex is comprised of specialized networks that support functions, such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity offer unique insights into this question. In congenitally blind individuals, "visual" cortex responds to auditory and tactile stimuli. Remarkably, recent evidence suggests that occipital areas participate in language processing. We asked whether in blindness, occipital cortices: (1) develop domain-specific responses to language and (2) respond to a highly specialized aspect of language-syntactic movement. Nineteen congenitally blind and 18 sighted participants took part in two fMRI experiments. We report that in congenitally blind individuals, but not in sighted controls, "visual" cortex is more active during sentence comprehension than during a sequence memory task with nonwords, or a symbolic math task. This suggests that areas of occipital cortex become selective for language, relative to other similar higher-cognitive tasks. Crucially, we find that these occipital areas respond more to sentences with syntactic movement but do not respond to the difficulty of math equations. We conclude that regions within the visual cortex of blind adults are involved in syntactic processing. Our findings suggest that the cognitive function of human cortical areas is largely determined by input during development. Human cortex is made up of specialized regions that perform different functions, such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity show that cortical areas can change function from one sensory modality to another. Here we demonstrate that input during development can alter cortical function even more dramatically. In blindness a subset of "visual" areas becomes specialized for language processing. 
Crucially, we find that the same "visual" areas respond to a highly specialized and uniquely human aspect of language-syntactic movement. These data suggest that human cortex has broad functional capacity during development, and input plays a major role in determining functional specialization. Copyright © 2015 the authors 0270-6474/15/3512859-10$15.00/0.

  19. Target detection in GPR data using joint low-rank and sparsity constraints

    NASA Astrophysics Data System (ADS)

    Bouzerdoum, Abdesselam; Tivive, Fok Hing Chi; Abeynayake, Canicious

    2016-05-01

    In ground penetrating radars, background clutter, which comprises the signals backscattered from the rough, uneven ground surface and the background noise, impairs the visualization of buried objects and hinders subsurface inspection. In this paper, a clutter mitigation method is proposed for target detection. The removal of background clutter is formulated as a constrained optimization problem to obtain a low-rank matrix and a sparse matrix. The low-rank matrix captures the ground surface reflections and the background noise, whereas the sparse matrix contains the target reflections. An optimization method based on the split-Bregman algorithm is developed to estimate these two matrices from the input GPR data. Evaluated on real radar data, the proposed method achieves promising results in removing the background clutter and enhancing the target signature.
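    The low-rank-plus-sparse decomposition can be sketched with a simple robust-PCA-style solver. Note this sketch uses plain alternating singular-value and soft thresholding rather than the paper's split-Bregman algorithm, and the synthetic B-scan, thresholds, and target positions are illustrative assumptions:

```python
import numpy as np

def soft(x, t):
    """Elementwise soft thresholding (the proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lowrank_sparse(D, lam=0.1, n_iter=200):
    """Split D into low-rank L (ground clutter) plus sparse S (targets)
    by alternating singular-value thresholding and soft thresholding."""
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # Low-rank step: threshold the singular values of the residual.
        U, sig, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = U @ np.diag(soft(sig, 1.0)) @ Vt
        # Sparse step: soft-threshold the remaining residual.
        S = soft(D - L, lam)
    return L, S

# Synthetic B-scan: a rank-1 "ground bounce" plus two strong target samples.
clutter = np.outer(np.ones(30), 5.0 * np.sin(np.linspace(0, 3, 40)))
targets = np.zeros((30, 40))
targets[15, 20] = 10.0
targets[16, 21] = 8.0
D = clutter + targets
L, S = lowrank_sparse(D)
```

    The smooth ground reflection is cheap to represent with a few singular values, while the localized target reflections are cheap to represent in the l1 term, so they separate into L and S respectively.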

  20. Aircraft geometry verification with enhanced computer generated displays

    NASA Technical Reports Server (NTRS)

    Cozzolongo, J. V.

    1982-01-01

    A method for visual verification of aerodynamic geometries using computer generated, color shaded images is described. The mathematical models representing aircraft geometries are created for use in theoretical aerodynamic analyses and in computer aided manufacturing. The aerodynamic shapes are defined using parametric bi-cubic splined patches. This mathematical representation is then used as input to an algorithm that generates a color shaded image of the geometry. A discussion of the techniques used in the mathematical representation of the geometry and in the rendering of the color shaded display is presented. The results include examples of color shaded displays, which are contrasted with wire frame type displays. The examples also show the use of mapped surface pressures in terms of color shaded images of V/STOL fighter/attack aircraft and advanced turboprop aircraft.
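    The parametric bi-cubic patch representation can be sketched as follows. This uses the Bezier (Bernstein-basis) form, one common choice of bi-cubic patch, and the control-point layout is an illustrative assumption rather than the report's actual geometry format:

```python
import numpy as np

def bernstein3(t):
    """The four cubic Bernstein basis polynomials at parameter t."""
    return np.array([(1 - t) ** 3,
                     3 * t * (1 - t) ** 2,
                     3 * t ** 2 * (1 - t),
                     t ** 3])

def patch_point(P, u, v):
    """Evaluate a bi-cubic Bezier patch at (u, v) in [0, 1]^2.
    P is a 4 x 4 x 3 array of control points; the surface point is the
    Bernstein-weighted sum of all 16 control points."""
    return np.einsum('i,j,ijk->k', bernstein3(u), bernstein3(v), P)

# A flat illustrative patch: control points on a regular grid at z = 0.
u_grid, v_grid = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4),
                             indexing='ij')
P = np.dstack([u_grid, v_grid, np.zeros((4, 4))])
corner = patch_point(P, 0.0, 0.0)
center = patch_point(P, 0.5, 0.5)
```

    A shaded rendering then evaluates patch_point over a fine (u, v) grid and derives surface normals from the patch's partial derivatives to drive the color shading.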

  1. Neural Dynamics Underlying Target Detection in the Human Brain

    PubMed Central

    Bansal, Arjun K.; Madhavan, Radhika; Agam, Yigal; Golby, Alexandra; Madsen, Joseph R.

    2014-01-01

    Sensory signals must be interpreted in the context of goals and tasks. To detect a target in an image, the brain compares input signals and goals to elicit the correct behavior. We examined how target detection modulates visual recognition signals by recording intracranial field potential responses from 776 electrodes in 10 epileptic human subjects. We observed reliable differences in the physiological responses to stimuli when a cued target was present versus absent. Goal-related modulation was particularly strong in the inferior temporal and fusiform gyri, two areas important for object recognition. Target modulation started after 250 ms post stimulus, considerably after the onset of visual recognition signals. While broadband signals exhibited increased or decreased power, gamma frequency power showed predominantly increases during target presence. These observations support models where task goals interact with sensory inputs via top-down signals that influence the highest echelons of visual processing after the onset of selective responses. PMID:24553944

  2. Learning the Gestalt rule of collinearity from object motion.

    PubMed

    Prodöhl, Carsten; Würtz, Rolf P; von der Malsburg, Christoph

    2003-08-01

    The Gestalt principle of collinearity (and curvilinearity) is widely regarded as being mediated by the long-range connection structure in primary visual cortex. We review the neurophysiological and psychophysical literature to argue that these connections are developed from visual experience after birth, relying on coherent object motion. We then present a neural network model that learns these connections in an unsupervised Hebbian fashion with input from real camera sequences. The model uses spatiotemporal retinal filtering, which is very sensitive to changes in the visual input. We show that it is crucial for successful learning to use the correlation of the transient responses instead of the sustained ones. As a consequence, learning works best with video sequences of moving objects. The model addresses a special case of the fundamental question of what represents the necessary a priori knowledge the brain is equipped with at birth so that the self-organized process of structuring by experience can be successful.

  3. Parametric embedding for class visualization.

    PubMed

    Iwata, Tomoharu; Saito, Kazumi; Ueda, Naonori; Stromsten, Sean; Griffiths, Thomas L; Tenenbaum, Joshua B

    2007-09-01

    We propose a new method, parametric embedding (PE), that embeds objects with class structure into a low-dimensional visualization space. PE takes as input a set of class-conditional probabilities for given data points and tries to preserve the structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a Gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations, since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.
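    The PE objective can be sketched directly from this description: match the input class posteriors to the posteriors induced by equal-covariance Gaussians at class centers in the embedding space. The unit-variance components and the toy posteriors below are illustrative assumptions:

```python
import numpy as np

def pe_objective(P, X, M):
    """Parametric-embedding objective: sum of KL divergences between the
    input class posteriors P (n x c) and the posteriors induced in the
    embedding space by unit-variance Gaussians at class centers M (c x d),
    for embedded point coordinates X (n x d)."""
    d2 = ((X[:, None, :] - M[None, :, :]) ** 2).sum(-1)  # squared distances
    Q = np.exp(-0.5 * d2)
    Q /= Q.sum(1, keepdims=True)                         # embedding posteriors
    eps = 1e-12
    return np.sum(P * (np.log(P + eps) - np.log(Q + eps)))

# Two points, two classes: each point strongly belongs to one class.
P = np.array([[0.9, 0.1], [0.1, 0.9]])
M = np.array([[-1.0, 0.0], [1.0, 0.0]])       # class centers in 2-D
X_good = np.array([[-1.0, 0.0], [1.0, 0.0]])  # points near their centers
X_bad = np.array([[1.0, 0.0], [-1.0, 0.0]])   # points swapped
```

    Because Q involves only distances from each point to the c class centers, each evaluation scales with (number of objects) x (number of classes), which is the computational advantage noted in the abstract.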

  4. Deep learning of orthographic representations in baboons.

    PubMed

    Hannagan, Thomas; Ziegler, Johannes C; Dufau, Stéphane; Fagot, Joël; Grainger, Jonathan

    2014-01-01

    What is the origin of our ability to learn orthographic knowledge? We use deep convolutional networks to emulate the primate's ventral visual stream and explore the recent finding that baboons can be trained to discriminate English words from nonwords. The networks were exposed to the exact same sequence of stimuli and reinforcement signals as the baboons in the experiment, and learned to map real visual inputs (pixels) of letter strings onto binary word/nonword responses. We show that the networks' highest levels of representations were indeed sensitive to letter combinations as postulated in our previous research. The model also captured the key empirical findings, such as generalization to novel words, along with some intriguing inter-individual differences. The present work shows the merits of deep learning networks that can simulate the whole processing chain all the way from the visual input to the response while allowing researchers to analyze the complex representations that emerge during the learning process.

  5. Thalamic projections to visual and visuomotor areas (V6 and V6A) in the Rostral Bank of the parieto-occipital sulcus of the Macaque.

    PubMed

    Gamberini, Michela; Bakola, Sophia; Passarelli, Lauretta; Burman, Kathleen J; Rosa, Marcello G P; Fattori, Patrizia; Galletti, Claudio

    2016-04-01

    The medial posterior parietal cortex of the primate brain includes different functional areas, which have been defined based on the functional properties, cyto- and myeloarchitectural criteria, and cortico-cortical connections. Here, we describe the thalamic projections to two of these areas (V6 and V6A), based on 14 retrograde neuronal tracer injections in 11 hemispheres of 9 Macaca fascicularis. The injections were placed either by direct visualisation or using electrophysiological guidance, and the location of injection sites was determined post mortem based on cyto- and myeloarchitectural criteria. We found that the majority of the thalamic afferents to the visual area V6 originate in subdivisions of the lateral and inferior pulvinar nuclei, with weaker inputs originating from the central densocellular, paracentral, lateral posterior, lateral geniculate, ventral anterior and mediodorsal nuclei. In contrast, injections in both the dorsal and ventral parts of the visuomotor area V6A revealed strong inputs from the lateral posterior and medial pulvinar nuclei, as well as smaller inputs from the ventrolateral complex and from the central densocellular, paracentral, and mediodorsal nuclei. These projection patterns are in line with the functional properties of injected areas: "dorsal stream" extrastriate area V6 receives information from visuotopically organised subdivisions of the thalamus; whereas visuomotor area V6A, which is involved in the sensory guidance of arm movement, receives its primary afferents from thalamic nuclei that provide high-order somatic and visual input.

  6. Influence of both cutaneous input from the foot soles and visual information on the control of postural stability in dyslexic children.

    PubMed

    Goulème, Nathalie; Villeneuve, Philippe; Gérard, Christophe-Loïc; Bucci, Maria Pia

    2017-07-01

    Dyslexic children show impaired postural stability. The aim of our study was to test the influence of foot-sole and visual information on the postural control of dyslexic children, compared to non-dyslexic children. Postural stability was evaluated with the TechnoConcept® platform in twenty-four dyslexic children (mean age: 9.3±0.29 years) and in twenty-four non-dyslexic children, gender- and age-matched, in two postural conditions (with and without foam: a 4-mm foam was put under their feet or not) and in two visual conditions (eyes open and eyes closed). We measured the surface area, the length and the mean velocity of the center of pressure (CoP). Moreover, we calculated the Romberg Quotient (RQ). Our results showed that the surface area, length and mean velocity of the CoP were significantly greater in the dyslexic children compared to the non-dyslexic children, particularly with foam and eyes closed. Furthermore, the RQ was significantly smaller in the dyslexic children and significantly greater without foam than with foam. All these findings suggest that dyslexic children are not able to compensate with other available inputs when sensory inputs are less informative (with foam, or eyes closed), which results in poor postural stability. We suggest that impaired cerebellar integration of all the sensory inputs is responsible for the postural deficits observed in dyslexic children. Copyright © 2017 Elsevier B.V. All rights reserved.
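    The Romberg Quotient mentioned above is conventionally the ratio of sway with eyes closed to sway with eyes open; the study does not spell out its formula here, so the x100 convention below is an assumption (a common one in posturography):

```python
def romberg_quotient(surface_eyes_closed, surface_eyes_open):
    """Romberg Quotient from CoP sway surfaces (mm^2): values well above
    100 indicate strong reliance on visual input for postural control."""
    return 100.0 * surface_eyes_closed / surface_eyes_open

# Illustrative sway surfaces: closing the eyes doubles the CoP surface.
rq = romberg_quotient(surface_eyes_closed=300.0, surface_eyes_open=150.0)
```

    A smaller RQ, as reported for the dyslexic children, means closing the eyes changes sway relatively little, i.e. vision is contributing less to stabilization.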

  7. Computer vision-based method for classification of wheat grains using artificial neural network.

    PubMed

    Sabanci, Kadir; Kayabasi, Ahmet; Toktas, Abdurrahim

    2017-06-01

    A simplified computer vision-based application using artificial neural network (ANN) depending on multilayer perceptron (MLP) for accurately classifying wheat grains into bread or durum is presented. The images of 100 bread and 100 durum wheat grains are taken via a high-resolution camera and subjected to pre-processing. The main visual features of four dimensions, three colors and five textures are acquired using image-processing techniques (IPTs). A total of 21 visual features are reproduced from the 12 main features to diversify the input population for training and testing the ANN model. The data sets of visual features are considered as input parameters of the ANN model. The ANN with four different input data subsets is modelled to classify the wheat grains into bread or durum. The ANN model is trained with 180 grains and its accuracy tested with 20 grains from a total of 200 wheat grains. Seven input parameters that are most effective on the classifying results are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN model are compared in terms of accuracy rate. The best result is achieved with a mean absolute error (MAE) of 9.8 × 10⁻⁶ by the simplified ANN model. This shows that the proposed classifier based on computer vision can be successfully exploited to automatically classify a variety of grains. © 2016 Society of Chemical Industry.
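    An MLP of the kind described can be sketched from scratch with one hidden layer. The synthetic Gaussian feature clusters, network size, and training schedule below are illustrative assumptions standing in for the measured visual features; only the use of seven inputs echoes the simplified model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for seven selected visual features: two classes
# ("bread" vs "durum") drawn from shifted Gaussian clusters.
n_per = 100
X = np.vstack([rng.normal(0.0, 1.0, (n_per, 7)),
               rng.normal(2.0, 1.0, (n_per, 7))])
y = np.repeat([0.0, 1.0], n_per)

# One-hidden-layer MLP trained with plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (7, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(500):
    h = np.tanh(X @ W1 + b1)            # hidden layer
    p = sigmoid(h @ W2 + b2).ravel()    # class probability
    g = (p - y)[:, None] / len(y)       # cross-entropy gradient wrt logits
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1 - h ** 2)        # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

acc = ((p > 0.5) == y).mean()
```

    In the paper's setting, X would hold the IPT-extracted feature vectors and the 180/20 train/test split would replace this resubstitution accuracy.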

  8. Perceptual learning improves visual performance in juvenile amblyopia.

    PubMed

    Li, Roger W; Young, Karen G; Hoenig, Pia; Levi, Dennis M

    2005-09-01

    To determine whether practicing a position-discrimination task improves visual performance in children with amblyopia and to determine the mechanism(s) of improvement. Five children (age range, 7-10 years) with amblyopia practiced a positional acuity task in which they had to judge which of three pairs of lines was misaligned. Positional noise was produced by distributing the individual patches of each line segment according to a Gaussian probability function. Observers were trained at three noise levels (including 0), with each observer performing between 3000 and 4000 responses in 7 to 10 sessions. Trial-by-trial feedback was provided. Four of the five observers showed significant improvement in positional acuity. In those four observers, on average, positional acuity with no noise improved by approximately 32% and with high noise by approximately 26%. A position-averaging model was used to parse the improvement into an increase in efficiency or a decrease in equivalent input noise. Two observers showed increased efficiency (51% and 117% improvements) with no significant change in equivalent input noise across sessions. The other two observers showed both a decrease in equivalent input noise (18% and 29%) and an increase in efficiency (17% and 71%). All five observers showed substantial improvement in Snellen acuity (approximately 26%) after practice. Perceptual learning can improve visual performance in amblyopic children. The improvement can be parsed into two important factors: decreased equivalent input noise and increased efficiency. Perceptual learning techniques may add an effective new method to the armamentarium of amblyopia treatments.
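    The two learning factors can be separated with the standard equivalent-noise (linear amplifier) form, threshold² = (σ_ext² + σ_eq²) / efficiency. This is a common simplification of such models; the position-averaging model may differ in detail, and the numbers below are illustrative rather than the observers' fitted values:

```python
import numpy as np

def threshold(sigma_ext, sigma_eq, efficiency):
    """Predicted positional threshold under the equivalent-noise model:
    threshold^2 = (sigma_ext^2 + sigma_eq^2) / efficiency."""
    return np.sqrt((sigma_ext ** 2 + sigma_eq ** 2) / efficiency)

# One observer pattern from the abstract: efficiency roughly doubles
# across training while equivalent input noise stays unchanged.
before = threshold(sigma_ext=0.0, sigma_eq=2.0, efficiency=0.2)
after = threshold(sigma_ext=0.0, sigma_eq=2.0, efficiency=0.4)
```

    Fitting this form at several external-noise levels is what lets the improvement be attributed to increased efficiency (lower thresholds at all noise levels) versus decreased equivalent input noise (lower thresholds mainly at low external noise).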

  9. Using Visual Organizers to Enhance EFL Instruction

    ERIC Educational Resources Information Center

    Kang, Shumin

    2004-01-01

    Visual organizers are visual frameworks such as figures, diagrams, charts, etc. used to present structural knowledge spatially in a given area with the intention of enhancing comprehension and learning. Visual organizers are effective in terms of helping to elicit, explain, and communicate information because they can clarify complex concepts into…

  10. Illuminant-adaptive color reproduction for mobile display

    NASA Astrophysics Data System (ADS)

    Kim, Jong-Man; Park, Kee-Hyon; Kwon, Oh-Seol; Cho, Yang-Ho; Ha, Yeong-Ho

    2006-01-01

    This paper proposes an illuminant-adaptive reproduction method for mobile displays that accounts for light adaptation and flare conditions. Mobile displays, such as PDAs and cellular phones, are viewed under various lighting conditions. In particular, images displayed in daylight are perceived as quite dark due to the light adaptation of the human visual system, as the luminance of a mobile display is considerably lower than that of an outdoor environment. In addition, flare phenomena shrink the color gamut of a mobile display by raising the luminance of dark areas and de-saturating the chroma. Therefore, this paper presents an enhancement method composed of lightness enhancement and chroma compensation. First, the ambient light intensity is measured using a lux sensor, and the flare is then calculated from the reflection ratio of the display device and the ambient light intensity. The relative cone response is a nonlinear function of the input luminance and also changes with the ambient light intensity. Thus, to improve the perceived image, the displayed luminance is enhanced by lightness linearization: the image's luminance is transformed so that the response to the input luminance is linearized according to the ambient light intensity. Next, the displayed image is compensated for the physically reduced chroma resulting from flare. The reduced chroma value is calculated from the flare at each intensity. The chroma compensation, which aims to maintain the original image's chroma, is applied differently for each hue plane, as flare affects each hue plane differently; the enhanced chroma is also kept within the gamut boundary. Based on experimental observations, the outdoor luminance intensity generally ranges from 1,000 lux to 30,000 lux. Thus, for outdoor environments, i.e. greater than 1,000 lux, this study presents a color reproduction method based on an inverse cone response curve and the flare condition. Consequently, the proposed algorithm improves the quality of the perceived image adaptively for outdoor environments.
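    The flare term and its effect on displayed contrast can be sketched as follows. The Lambertian reflection model (L = rE/π) and the panel numbers are illustrative assumptions; the paper computes flare from the measured reflection ratio of the actual display:

```python
import math

def flare_luminance(ambient_lux, reflectance=0.04):
    """Flare added by ambient light reflecting off the screen,
    assuming a Lambertian panel: L = r * E / pi  (cd/m^2)."""
    return reflectance * ambient_lux / math.pi

def effective_contrast(l_white, l_black, ambient_lux, reflectance=0.04):
    """Displayed contrast ratio once flare is added to both levels."""
    f = flare_luminance(ambient_lux, reflectance)
    return (l_white + f) / (l_black + f)

# An illustrative mobile panel: 300 cd/m^2 white level, 0.5 cd/m^2 black.
indoor = effective_contrast(300.0, 0.5, ambient_lux=100)
outdoor = effective_contrast(300.0, 0.5, ambient_lux=30000)
```

    The collapse of the contrast ratio at 30,000 lux is exactly the condition the lightness-linearization and chroma-compensation steps are designed to counteract.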

  11. A comparison of visual statistics for the image enhancement of FORESITE aerial images with those of major image classes

    NASA Astrophysics Data System (ADS)

    Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.

    2006-05-01

    Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images, and for major classes of imaging: terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends relating to degree of atmospheric visibility attenuation, and its impact on the limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.
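    The exact Visual Contrast Measure is internal to the VS process and not defined in this record; as a hedged stand-in, a simple block-wise contrast statistic illustrates the behavior being quantified, namely that haze compresses dynamic range and lowers the score:

```python
import numpy as np

def visual_contrast(img, block=8):
    """A simple regional contrast statistic (an assumed proxy for the
    Visual Contrast Measure, not its actual definition): tile the image
    into blocks and report the mean local standard deviation, so hazy,
    low-contrast images score low."""
    h, w = img.shape
    h -= h % block; w -= w % block
    tiles = img[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.std(axis=(1, 3)).mean()

rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, (64, 64))     # a synthetic clear-air frame
hazy = 0.2 * scene + 0.8 * 180            # haze compresses the dynamic range
```

    Computed before and after Retinex-based enhancement, such a metric quantifies both the impairment of the original frame and the gain delivered by the processing.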

  12. A Comparison of Visual Statistics for the Image Enhancement of FORESITE Aerial Images with Those of Major Image Classes

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.

    2006-01-01

    Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images, and for major classes of imaging: terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends relating to degree of atmospheric visibility attenuation, and its impact on limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.

  13. Visual processing in anorexia nervosa and body dysmorphic disorder: similarities, differences, and future research directions

    PubMed Central

    Madsen, Sarah K.; Bohon, Cara; Feusner, Jamie D.

    2013-01-01

    Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are psychiatric disorders that involve distortion of the experience of one’s physical appearance. In AN, individuals believe that they are overweight, perceive their body as “fat,” and are preoccupied with maintaining a low body weight. In BDD, individuals are preoccupied with misperceived defects in physical appearance, most often of the face. Distorted visual perception may contribute to these cardinal symptoms, and may be a common underlying phenotype. This review surveys the current literature on visual processing in AN and BDD, addressing lower- to higher-order stages of visual information processing and perception. We focus on peer-reviewed studies of AN and BDD that address ophthalmologic abnormalities, basic neural processing of visual input, integration of visual input with other systems, neuropsychological tests of visual processing, and representations of whole percepts (such as images of faces, bodies, and other objects). The literature suggests a pattern in both groups of over-attention to detail, reduced processing of global features, and a tendency to focus on symptom-specific details in their own images (body parts in AN, facial features in BDD), with cognitive strategy at least partially mediating the abnormalities. Visuospatial abnormalities were also evident when viewing images of others and for non-appearance related stimuli. Unfortunately no study has directly compared AN and BDD, and most studies were not designed to disentangle disease-related emotional responses from lower-order visual processing. We make recommendations for future studies to improve the understanding of visual processing abnormalities in AN and BDD. PMID:23810196

  14. Enhancement of vision by monocular deprivation in adult mice.

    PubMed

    Prusky, Glen T; Alam, Nazia M; Douglas, Robert M

    2006-11-08

    Plasticity of vision mediated through binocular interactions has been reported in mammals only during a "critical" period in juvenile life, wherein monocular deprivation (MD) causes an enduring loss of visual acuity (amblyopia) selectively through the deprived eye. Here, we report a different form of interocular plasticity of vision in adult mice in which MD leads to an enhancement of the optokinetic response (OKR) selectively through the nondeprived eye. Over 5 d of MD, the spatial frequency sensitivity of the OKR increased gradually, reaching a plateau of approximately 36% above pre-deprivation baseline. Eye opening initiated a gradual decline, but sensitivity was maintained above pre-deprivation baseline for 5-6 d. Enhanced function was restricted to the monocular visual field, notwithstanding the dependence of the plasticity on binocular interactions. Activity in visual cortex ipsilateral to the deprived eye was necessary for the characteristic induction of the enhancement, and activity in visual cortex contralateral to the deprived eye was necessary for its maintenance after MD. The plasticity also displayed distinct learning-like properties: Active testing experience was required to attain maximal enhancement and for enhancement to persist after MD, and the duration of enhanced sensitivity after MD was extended by increasing the length of MD, and by repeating MD. These data show that the adult mouse visual system maintains a form of experience-dependent plasticity in which the visual cortex can modulate the normal function of subcortical visual pathways.

  15. Effects of contour enhancement on low-vision preference and visual search.

    PubMed

    Satgunam, Premnandhini; Woods, Russell L; Luo, Gang; Bronstad, P Matthew; Reynolds, Zachary; Ramachandra, Chaithanya; Mel, Bartlett W; Peli, Eli

    2012-09-01

    To determine whether image enhancement improves visual search performance and whether enhanced images are also preferred by subjects with vision impairment. Subjects (n = 24) with vision impairment (vision: 20/52 to 20/240) completed visual search and preference tasks for 150 static images that were enhanced to increase the visual saliency of object contours. Subjects were divided into two groups and were shown three enhancement levels. The original images and the medium enhancement were shown to both groups; the high enhancement was shown to group 1, and the low enhancement to group 2. For search, subjects pointed to an object that matched a search target displayed at the top left of the screen. An "integrated search performance" measure (area under the curve of cumulative correct response rate over search time) quantified performance. For preference, subjects indicated the preferred side when viewing the same image with different enhancement levels on side-by-side high-definition televisions. Contour enhancement did not improve performance in the visual search task. Group 1 subjects significantly (p < 0.001) rejected the high enhancement and showed no preference for the medium enhancement over the original images. Group 2 subjects significantly preferred (p < 0.001) both the medium and the low enhancement levels over the original. Contrast sensitivity was correlated with both preference and performance; subjects with worse contrast sensitivity performed worse in the search task (ρ = 0.77, p < 0.001) and preferred more enhancement (ρ = -0.47, p = 0.02). No correlation between visual search performance and enhancement preference was found. However, a small group of subjects (n = 6) in a narrow range of mid-contrast sensitivity performed better with the enhancement, and most of them (n = 5) also preferred it. Preferences for image enhancement can be dissociated from search performance in people with vision impairment. Further investigations are needed to study the relationship between preference and performance in the narrow range of mid-contrast sensitivity where a beneficial effect of enhancement may exist.

  16. Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.

    PubMed

    Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe

    2017-09-01

    Visual neuroprostheses are still limited, and simulated prosthetic vision (SPV) is used to evaluate potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirement on visual neuroprosthetic characteristics to restore various functions such as reading, objects and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current arrays of electrodes is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, the viewing distance was limited to 3, 6, or 9 m. In the second strategy, the rendering was not based on the brightness of the image pixels, but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environments were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment.
These results show that low resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate information regarding the environment. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
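The control rendering described in this record (resizing the image onto the electrode array according to the average brightness of the pixels) amounts to block-average downsampling. A minimal sketch follows; the 15 × 18 grid comes from the abstract, while the function name and input dimensions are illustrative assumptions:

```python
import numpy as np

def control_rendering(image, grid=(15, 18)):
    """Downsample a grayscale image to an electrode grid by average brightness.

    Each simulated electrode's activation is the mean brightness of the
    image block it covers (a sketch of the control rendering, not the
    authors' implementation).
    """
    h, w = image.shape
    gh, gw = grid
    # Crop so the image tiles evenly into gh x gw blocks.
    image = image[: h - h % gh, : w - w % gw]
    bh, bw = image.shape[0] // gh, image.shape[1] // gw
    # Reshape into blocks and average within each block.
    blocks = image.reshape(gh, bh, gw, bw)
    return blocks.mean(axis=(1, 3))

# A 150 x 180 horizontal brightness ramp reduces to a 15 x 18 phosphene map.
phosphenes = control_rendering(np.tile(np.arange(180, dtype=float), (150, 1)))
print(phosphenes.shape)  # (15, 18)
```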

  17. The impact of visual sequencing of graphic symbols on the sentence construction output of children who have acquired language.

    PubMed

    Alant, Erna; du Plooy, Amelia; Dada, Shakila

    2007-01-01

Although the sequence of graphic or pictorial symbols displayed on a communication board can have an impact on the language output of children, very little research has been conducted to describe this. Research in this area is particularly relevant for prioritising the importance of specific visual and graphic features in providing more effective and user-friendly access to communication boards. This study is concerned with understanding the impact of specific sequences of graphic symbol input on the graphic and spoken output of children who have acquired language. Forty participants were divided into two comparable groups. Each group was exposed to graphic symbol input with a certain word order sequence. The structure of input was either in the typical English word order sequence Subject-Verb-Object (SVO) or in the word order sequence Subject-Object-Verb (SOV). Both input groups had to answer six questions by using graphic output as well as speech. The findings indicated that there are significant differences in the PCS graphic output patterns of children who are exposed to graphic input in the SOV and SVO sequences. Furthermore, the output produced in the graphic mode differed considerably from the output produced in the spoken mode. Clinical implications of these findings are discussed.

  18. Working with and Visualizing Big Data Efficiently with Python for the DARPA XDATA Program

    DTIC Science & Technology

    2017-08-01

The same function can be used with scalar inputs, input arrays of the same shape, or even input arrays of differing dimensionality in some cases. Supported dataset operations include: math operations on values; split-apply-combine, similar to group-by operations in databases; and join, which combines two datasets using common columns. For Numba, work continues to increase SIMD performance, with support for fast-math flags and improved support for AVX, Intel's large vector instruction set.
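The behavior this report fragment describes (one function accepting scalars, same-shape arrays, or arrays of differing dimensionality) is NumPy-style broadcasting. A minimal sketch, with an illustrative function name not taken from the report:

```python
import numpy as np

def scaled_shift(x, scale, shift):
    # Works for scalars, same-shape arrays, or arrays of different
    # dimensionality, thanks to NumPy's broadcasting rules.
    return x * scale + shift

print(scaled_shift(2.0, 3.0, 1.0))                       # scalar inputs -> 7.0
print(scaled_shift(np.ones((2, 3)), 2.0, 0.0))           # (2,3) array with scalars
print(scaled_shift(np.ones((2, 3)), np.arange(3), 0.0))  # (2,3) with (3,): row-wise scaling
```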

  19. Cross-modal enhancement of speech detection in young and older adults: does signal content matter?

    PubMed

    Tye-Murray, Nancy; Spehar, Brent; Myerson, Joel; Sommers, Mitchell S; Hale, Sandra

    2011-01-01

    The purpose of the present study was to examine the effects of age and visual content on cross-modal enhancement of auditory speech detection. Visual content consisted of three clearly distinct types of visual information: an unaltered video clip of a talker's face, a low-contrast version of the same clip, and a mouth-like Lissajous figure. It was hypothesized that both young and older adults would exhibit reduced enhancement as visual content diverged from the original clip of the talker's face, but that the decrease would be greater for older participants. Nineteen young adults and 19 older adults were asked to detect a single spoken syllable (/ba/) in speech-shaped noise, and the level of the signal was adaptively varied to establish the signal-to-noise ratio (SNR) at threshold. There was an auditory-only baseline condition and three audiovisual conditions in which the syllable was accompanied by one of the three visual signals (the unaltered clip of the talker's face, the low-contrast version of that clip, or the Lissajous figure). For each audiovisual condition, the SNR at threshold was compared with the SNR at threshold for the auditory-only condition to measure the amount of cross-modal enhancement. Young adults exhibited significant cross-modal enhancement with all three types of visual stimuli, with the greatest amount of enhancement observed for the unaltered clip of the talker's face. Older adults, in contrast, exhibited significant cross-modal enhancement only with the unaltered face. Results of this study suggest that visual signal content affects cross-modal enhancement of speech detection in both young and older adults. They also support a hypothesized age-related deficit in processing low-contrast visual speech stimuli, even in older adults with normal contrast sensitivity.

  20. The locus of impairment in English developmental letter position dyslexia

    PubMed Central

    Kezilas, Yvette; Kohnen, Saskia; McKague, Meredith; Castles, Anne

    2014-01-01

    Many children with reading difficulties display phonological deficits and struggle to acquire non-lexical reading skills. However, not all children with reading difficulties have these problems, such as children with selective letter position dyslexia (LPD), who make excessive migration errors (such as reading slime as “smile”). Previous research has explored three possible loci for the deficit – the phonological output buffer, the orthographic input lexicon, and the orthographic-visual analysis stage of reading. While there is compelling evidence against a phonological output buffer and orthographic input lexicon deficit account of English LPD, the evidence in support of an orthographic-visual analysis deficit is currently limited. In this multiple single-case study with three English-speaking children with developmental LPD, we aimed to both replicate and extend previous findings regarding the locus of impairment in English LPD. First, we ruled out a phonological output buffer and an orthographic input lexicon deficit by administering tasks that directly assess phonological processing and lexical guessing. We then went on to directly assess whether or not children with LPD have an orthographic-visual analysis deficit by modifying two tasks that have previously been used to localize processing at this level: a same-different decision task and a non-word reading task. The results from these tasks indicate that LPD is most likely caused by a deficit specific to the coding of letter positions at the orthographic-visual analysis stage of reading. These findings provide further evidence for the heterogeneity of dyslexia and its underlying causes. PMID:24917802

  1. Color opponent receptive fields self-organize in a biophysical model of visual cortex via spike-timing dependent plasticity

    PubMed Central

    Eguchi, Akihiro; Neymotin, Samuel A.; Stringer, Simon M.

    2014-01-01

    Although many computational models have been proposed to explain orientation maps in primary visual cortex (V1), it is not yet known how similar clusters of color-selective neurons in macaque V1/V2 are connected and develop. In this work, we address the problem of understanding the cortical processing of color information with a possible mechanism of the development of the patchy distribution of color selectivity via computational modeling. Each color input is decomposed into a red, green, and blue representation and transmitted to the visual cortex via a simulated optic nerve in a luminance channel and red–green and blue–yellow opponent color channels. Our model of the early visual system consists of multiple topographically-arranged layers of excitatory and inhibitory neurons, with sparse intra-layer connectivity and feed-forward connectivity between layers. Layers are arranged based on anatomy of early visual pathways, and include a retina, lateral geniculate nucleus, and layered neocortex. Each neuron in the V1 output layer makes synaptic connections to neighboring neurons and receives the three types of signals in the different channels from the corresponding photoreceptor position. Synaptic weights are randomized and learned using spike-timing-dependent plasticity (STDP). After training with natural images, the neurons display heightened sensitivity to specific colors. Information-theoretic analysis reveals mutual information between particular stimuli and responses, and that the information reaches a maximum with fewer neurons in the higher layers, indicating that estimations of the input colors can be done using the output of fewer cells in the later stages of cortical processing. In addition, cells with similar color receptive fields form clusters. Analysis of spiking activity reveals increased firing synchrony between neurons when particular color inputs are presented or removed (ON-cell/OFF-cell). PMID:24659956
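The learning rule this record relies on, spike-timing-dependent plasticity, has a standard pair-based form: pre-before-post spike pairs potentiate a synapse, post-before-pre pairs depress it, with exponentially decaying magnitude. The sketch below shows that textbook form; the parameter values are illustrative, not those of the model:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) potentiates; post-before-pre depresses.
    The effect decays exponentially with |dt| on timescale tau.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

print(stdp_dw(10.0) > 0)   # True: potentiation
print(stdp_dw(-10.0) < 0)  # True: depression
```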

  2. Psychophysical and neuroimaging responses to moving stimuli in a patient with the Riddoch phenomenon due to bilateral visual cortex lesions.

    PubMed

    Arcaro, Michael J; Thaler, Lore; Quinlan, Derek J; Monaco, Simona; Khan, Sarah; Valyear, Kenneth F; Goebel, Rainer; Dutton, Gordon N; Goodale, Melvyn A; Kastner, Sabine; Culham, Jody C

    2018-05-09

Patients with injury to early visual cortex or its inputs can display the Riddoch phenomenon: preserved awareness for moving but not stationary stimuli. We provide a detailed case report of a patient with the Riddoch phenomenon, MC. MC has extensive bilateral lesions to occipitotemporal cortex that include most early visual cortex, and she is completely blind in visual field perimetry testing with static targets. Nevertheless, she shows a remarkably robust preserved ability to perceive motion, enabling her to navigate through cluttered environments and perform actions like catching moving balls. Comparison of MC's structural magnetic resonance imaging (MRI) data to a probabilistic atlas based on controls reveals that MC's lesions encompass the posterior, lateral, and ventral early visual cortex bilaterally (V1, V2, V3A/B, LO1/2, TO1/2, hV4 and VO1 in both hemispheres) as well as more extensive damage to right parietal (inferior parietal lobule) and left ventral occipitotemporal cortex (VO1, PHC1/2). She shows some sparing of anterior occipital cortex, which may account for her ability to see moving targets beyond ~15 degrees eccentricity during perimetry. Most strikingly, functional and structural MRI revealed robust and reliable spared functionality of the middle temporal motion complex (MT+) bilaterally. Moreover, consistent with her preserved ability to discriminate motion direction in psychophysical testing, MC also shows direction-selective adaptation in MT+. A variety of tests did not enable us to discern whether input to MT+ was driven by her spared anterior occipital cortex or subcortical inputs. Nevertheless, MC shows rich motion perception despite profoundly impaired static and form vision, combined with clear preservation of activation in MT+, thus supporting the role of MT+ in the Riddoch phenomenon. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Throwing out the rules: anticipatory alpha-band oscillatory attention mechanisms during task-set reconfigurations.

    PubMed

    Foxe, John J; Murphy, Jeremy W; De Sanctis, Pierfilippo

    2014-06-01

    We assessed the role of alpha-band oscillatory activity during a task-switching design that required participants to switch between an auditory and a visual task, while task-relevant audiovisual inputs were simultaneously presented. Instructional cues informed participants which task to perform on a given trial and we assessed alpha-band power in the short 1.35-s period intervening between the cue and the task-imperative stimuli, on the premise that attentional biasing mechanisms would be deployed to resolve competition between the auditory and visual inputs. Prior work had shown that alpha-band activity was differentially deployed depending on the modality of the cued task. Here, we asked whether this activity would, in turn, be differentially deployed depending on whether participants had just made a switch of task or were being asked to simply repeat the task. It is well established that performance speed and accuracy are poorer on switch than on repeat trials. Here, however, the use of instructional cues completely mitigated these classic switch-costs. Measures of alpha-band synchronisation and desynchronisation showed that there was indeed greater and earlier differential deployment of alpha-band activity on switch vs. repeat trials. Contrary to our hypothesis, this differential effect was entirely due to changes in the amount of desynchronisation observed during switch and repeat trials of the visual task, with more desynchronisation over both posterior and frontal scalp regions during switch-visual trials. These data imply that particularly vigorous, and essentially fully effective, anticipatory biasing mechanisms resolved the competition between competing auditory and visual inputs when a rapid switch of task was required. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  4. Robot Sequencing and Visualization Program (RSVP)

    NASA Technical Reports Server (NTRS)

Cooper, Brian K.; Maxwell, Scott A.; Hartman, Frank R.; Wright, John R.; Yen, Jeng; Toole, Nicholas T.; Gorjian, Zareh; Morrison, Jack C.

    2013-01-01

    The Robot Sequencing and Visualization Program (RSVP) is being used in the Mars Science Laboratory (MSL) mission for downlink data visualization and command sequence generation. RSVP reads and writes downlink data products from the operations data server (ODS) and writes uplink data products to the ODS. The primary users of RSVP are members of the Rover Planner team (part of the Integrated Planning and Execution Team (IPE)), who use it to perform traversability/articulation analyses, take activity plan input from the Science and Mission Planning teams, and create a set of rover sequences to be sent to the rover every sol. The primary inputs to RSVP are downlink data products and activity plans in the ODS database. The primary outputs are command sequences to be placed in the ODS for further processing prior to uplink to each rover. RSVP is composed of two main subsystems. The first, called the Robot Sequence Editor (RoSE), understands the MSL activity and command dictionaries and takes care of converting incoming activity level inputs into command sequences. The Rover Planners use the RoSE component of RSVP to put together command sequences and to view and manage command level resources like time, power, temperature, etc. (via a transparent realtime connection to SEQGEN). The second component of RSVP is called HyperDrive, a set of high-fidelity computer graphics displays of the Martian surface in 3D and in stereo. The Rover Planners can explore the environment around the rover, create commands related to motion of all kinds, and see the simulated result of those commands via its underlying tight coupling with flight navigation, motor, and arm software. This software is the evolutionary replacement for the Rover Sequencing and Visualization software used to create command sequences (and visualize the Martian surface) for the Mars Exploration Rover mission.

  5. Do the Contents of Visual Working Memory Automatically Influence Attentional Selection during Visual Search?

    ERIC Educational Resources Information Center

    Woodman, Geoffrey F.; Luck, Steven J.

    2007-01-01

    In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by…

  6. Designing between Pedagogies and Cultures: Audio-Visual Chinese Language Resources for Australian Schools

    ERIC Educational Resources Information Center

    Yuan, Yifeng; Shen, Huizhong

    2016-01-01

    This design-based study examines the creation and development of audio-visual Chinese language teaching and learning materials for Australian schools by incorporating users' feedback and content writers' input that emerged in the designing process. Data were collected from workshop feedback of two groups of Chinese-language teachers from primary…

  7. Learning to Look for Language: Development of Joint Attention in Young Deaf Children

    ERIC Educational Resources Information Center

    Lieberman, Amy M.; Hatrak, Marla; Mayberry, Rachel I.

    2014-01-01

    Joint attention between hearing children and their caregivers is typically achieved when the adult provides spoken, auditory linguistic input that relates to the child's current visual focus of attention. Deaf children interacting through sign language must learn to continually switch visual attention between people and objects in order to achieve…

  8. Relationship between Odor Identification and Visual Distractors in Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Kumazaki, Hirokazu; Kikuchi, Mitsuru; Yoshimura, Yuko; Miyao, Masutomo; Okada, Ken-ichi; Mimura, Masaru; Minabe, Yoshio

    2018-01-01

    Understanding the nature of olfactory abnormalities is crucial for optimal interventions in children with autism spectrum disorders (ASD). However, previous studies that have investigated odor identification in children with ASD have produced inconsistent results. The ability to correctly identify an odor relies heavily on visual inputs in the…

  9. Central Cross-Talk in Task Switching : Evidence from Manipulating Input-Output Modality Compatibility

    ERIC Educational Resources Information Center

    Stephan, Denise Nadine; Koch, Iring

    2010-01-01

    Two experiments examined the role of compatibility of input and output (I-O) modality mappings in task switching. We define I-O modality compatibility in terms of similarity of stimulus modality and modality of response-related sensory consequences. Experiment 1 included switching between 2 compatible tasks (auditory-vocal vs. visual-manual) and…

  10. Low-cost USB interface for operant research using Arduino and Visual Basic.

    PubMed

    Escobar, Rogelio; Pérez-Herrera, Carlos A

    2015-03-01

This note describes the design of a low-cost interface using Arduino microcontroller boards and Visual Basic programming for operant conditioning research. The board executes one program in the Arduino programming language that polls the state of the inputs and generates outputs in an operant chamber. This program communicates through a USB port with another program written in Visual Basic 2010 Express Edition running on a laptop, desktop, netbook computer, or even a tablet equipped with the Windows operating system. The Visual Basic program controls schedules of reinforcement and records real-time data. A single Arduino board can be used to control a total of 52 input/output lines, and multiple Arduino boards can be used to control multiple operant chambers. An external power supply and a series of micro relays are required to control the 28-V DC devices commonly used in operant chambers. Instructions for downloading and using the programs to generate simple and concurrent schedules of reinforcement are provided. Testing suggests that the interface is reliable, accurate, and could serve as an inexpensive alternative to commercial equipment. © Society for the Experimental Analysis of Behavior.
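The host-side schedule logic the note attributes to the Visual Basic program can be sketched in outline: the microcontroller reports responses over serial, and the host program decides when to deliver reinforcement. This fixed-ratio example is an illustrative sketch, not the authors' code, and all names are assumptions:

```python
class FixedRatioSchedule:
    """Minimal fixed-ratio (FR) schedule: reinforce every `ratio`-th response.

    In the interface described above, each response would arrive as a
    serial message from the Arduino; here we just call register_press().
    """

    def __init__(self, ratio):
        self.ratio = ratio
        self.presses = 0

    def register_press(self):
        """Record one response; return True when reinforcement is due."""
        self.presses += 1
        if self.presses >= self.ratio:
            self.presses = 0  # reset the response counter after reinforcement
            return True
        return False

fr5 = FixedRatioSchedule(5)
outcomes = [fr5.register_press() for _ in range(10)]
print(outcomes.count(True))  # 2 reinforcers earned in 10 presses on FR-5
```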

  11. Visual tracking using neuromorphic asynchronous event-based cameras.

    PubMed

    Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad

    2015-04-01

This letter presents a novel, computationally efficient, and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometries, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusion that classical frame-based techniques handle poorly.
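The per-event, frame-free processing style this record describes can be illustrated with a far simpler estimator than the paper's geometric-transform tracker: each incoming event immediately nudges a position estimate toward the event location (an exponential moving average). This is only a sketch of the event-driven update idea; all names, the event tuple layout, and the parameter values are illustrative assumptions:

```python
def track_events(events, alpha=0.1, start=(0.0, 0.0)):
    """Event-driven position tracking: update the estimate per event,
    with no frames or fixed sampling rate (the hallmark of event cameras).
    """
    x, y = start
    for ex, ey, _t in events:  # events arrive as (x, y, timestamp) tuples
        # Each event pulls the estimate a fraction alpha toward itself.
        x += alpha * (ex - x)
        y += alpha * (ey - y)
    return x, y

# Events clustered around (10, 20) pull the estimate toward that point.
events = [(10.0, 20.0, t) for t in range(50)]
x, y = track_events(events)
print(round(x), round(y))  # 10 20
```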

  12. TMS of the occipital cortex induces tactile sensations in the fingers of blind Braille readers.

    PubMed

    Ptito, M; Fumal, A; de Noordhout, A Martens; Schoenen, J; Gjedde, A; Kupers, R

    2008-01-01

    Various non-visual inputs produce cross-modal responses in the visual cortex of early blind subjects. In order to determine the qualitative experience associated with these occipital activations, we systematically stimulated the entire occipital cortex using single pulse transcranial magnetic stimulation (TMS) in early blind subjects and in blindfolded seeing controls. Whereas blindfolded seeing controls reported only phosphenes following occipital cortex stimulation, some of the blind subjects reported tactile sensations in the fingers that were somatotopically organized onto the visual cortex. The number of cortical sites inducing tactile sensations appeared to be related to the number of hours of Braille reading per day, Braille reading speed and dexterity. These data, taken in conjunction with previous anatomical, behavioural and functional imaging results, suggest the presence of a polysynaptic cortical pathway between the somatosensory cortex and the visual cortex in early blind subjects. These results also add new evidence that the activity of the occipital lobe in the blind takes its qualitative expression from the character of its new input source, therefore supporting the cortical deference hypothesis.

  13. Visual analytics of inherently noisy crowdsourced data on ultra high resolution displays

    NASA Astrophysics Data System (ADS)

    Huynh, Andrew; Ponto, Kevin; Lin, Albert Yu-Min; Kuester, Falko

The increasing prevalence of distributed human microtasking, or crowdsourcing, has followed the exponential increase in data collection capabilities. The large scale and distributed nature of these microtasks produces overwhelming amounts of information that is inherently noisy due to the nature of human input. Furthermore, these inputs create a constantly changing dataset, with additional information added on a daily basis. Methods to quickly visualize, filter, and understand this information under temporal and geospatial constraints are key to the success of crowdsourcing. This paper presents novel methods to visually analyze geospatial data collected through crowdsourcing on top of remote sensing satellite imagery. An ultra-high-resolution tiled display system is used to explore the relationship between human and satellite remote sensing data at scale. A case study is provided that evaluates the presented technique in the context of an archaeological field expedition. A team in the field communicated in real time with, and was guided by, researchers in the remote visual analytics laboratory, swiftly sifting through incoming crowdsourced data to identify target locations flagged as viable archaeological sites.

  14. Visual adaptation enhances action sound discrimination.

    PubMed

    Barraclough, Nick E; Page, Steve A; Keefe, Bruce D

    2017-01-01

    Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between 2 subsequently presented hand action sounds. Discrimination was most enhanced when the visual action "matched" the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.

  15. Experience-enabled enhancement of adult visual cortex function.

    PubMed

    Tschetter, Wayne W; Alam, Nazia M; Yee, Christopher W; Gorz, Mario; Douglas, Robert M; Sagdullaev, Botir; Prusky, Glen T

    2013-03-20

    We previously reported in adult mice that visuomotor experience during monocular deprivation (MD) augmented enhancement of visual-cortex-dependent behavior through the non-deprived eye (NDE) during deprivation, and enabled enhanced function to persist after MD. We investigated the physiological substrates of this experience-enabled form of adult cortical plasticity by measuring visual behavior and visually evoked potentials (VEPs) in binocular visual cortex of the same mice before, during, and after MD. MD on its own potentiated VEPs contralateral to the NDE during MD and shifted ocular dominance (OD) in favor of the NDE in both hemispheres. Whereas we expected visuomotor experience during MD to augment these effects, instead enhanced responses contralateral to the NDE, and the OD shift ipsilateral to the NDE were attenuated. However, in the same animals, we measured NMDA receptor-dependent VEP potentiation ipsilateral to the NDE during MD, which persisted after MD. The results indicate that visuomotor experience during adult MD leads to enduring enhancement of behavioral function, not simply by amplifying MD-induced changes in cortical OD, but through an independent process of increasing NDE drive in ipsilateral visual cortex. Because the plasticity is resident in the mature visual cortex and selectively effects gain of visual behavior through experiential means, it may have the therapeutic potential to target and non-invasively treat eye- or visual-field-specific cortical impairment.

  16. Emotion and the Cardiovascular System: Postulated Role of Inputs From the Medial Prefrontal Cortex to the Dorsolateral Periaqueductal Gray.

    PubMed

    Dampney, Roger

    2018-01-01

    The midbrain periaqueductal gray (PAG) plays a major role in generating different types of behavioral responses to emotional stressors. This review focuses on the role of the dorsolateral (dl) portion of the PAG, which on the basis of anatomical and functional studies, appears to have a unique and distinctive role in generating behavioral, cardiovascular and respiratory responses to real and perceived emotional stressors. In particular, the dlPAG, but not other parts of the PAG, receives direct inputs from the primary auditory cortex and from the secondary visual cortex. In addition, there are strong direct inputs to the dlPAG, but not other parts of the PAG, from regions within the medial prefrontal cortex that in primates correspond to cortical areas 10 m, 25 and 32. I first summarise the evidence that the inputs to the dlPAG arising from visual, auditory and olfactory signals trigger defensive behavioral responses supported by appropriate cardiovascular and respiratory effects, when such signals indicate the presence of a real external threat, such as the presence of a predator. I then consider the functional roles of the direct inputs from the medial prefrontal cortex, and propose the hypothesis that these inputs are activated by perceived threats, that are generated as a consequence of complex cognitive processes. I further propose that the inputs from areas 10 m, 25 and 32 are activated under different circumstances. The input from cortical area 10 m is of special interest, because this cortical area exists only in primates and is much larger in the brain of humans than in all other primates.

  18. Differences in peripheral sensory input to the olfactory bulb between male and female mice

    NASA Astrophysics Data System (ADS)

    Kass, Marley D.; Czarnecki, Lindsey A.; Moberly, Andrew H.; McGann, John P.

    2017-04-01

Female mammals generally have a better sense of smell than males, but the biological basis of this difference is unknown. Here, we demonstrate sexually dimorphic neural coding of odorants by olfactory sensory neurons (OSNs), the primary sensory neurons that physically contact odor molecules in the nose and provide the initial sensory input to the brain's olfactory bulb. We performed in vivo optical neurophysiology to visualize odorant-evoked OSN synaptic output into olfactory bulb glomeruli in unmanipulated (gonad-intact) adult mice of both sexes, and found that in females odorant presentation evoked more rapid OSN signaling over a broader range of OSNs than in males. These spatiotemporal differences enhanced the contrast between the neural representations of chemically related odorants in females compared to males during stimulus presentation. Removing circulating sex hormones makes these signals slower and less discriminable in females, while in males they become faster and more discriminable, suggesting opposite roles for gonadal hormones in influencing male and female olfactory function. These results demonstrate that the famous sex difference in olfactory abilities likely originates in the primary sensory neurons, and suggest that hormonal modulation of the peripheral olfactory system could underlie differences in how males and females experience the olfactory world.

  19. Membrane Resonance Enables Stable and Robust Gamma Oscillations

    PubMed Central

    Moca, Vasile V.; Nikolić, Danko; Singer, Wolf; Mureşan, Raul C.

    2014-01-01

    Neuronal mechanisms underlying beta/gamma oscillations (20–80 Hz) are not completely understood. Here, we show that in vivo beta/gamma oscillations in the cat visual cortex sometimes exhibit remarkably stable frequency even when inputs fluctuate dramatically. Enhanced frequency stability is associated with stronger oscillations measured in individual units and larger power in the local field potential. Simulations of neuronal circuitry demonstrate that membrane properties of inhibitory interneurons strongly determine the characteristics of emergent oscillations. Exploration of networks containing either integrator or resonator inhibitory interneurons revealed that: (i) Resonance, as opposed to integration, promotes robust oscillations with large power and stable frequency via a mechanism called RING (Resonance INduced Gamma); resonance favors synchronization by reducing phase delays between interneurons and imposes bounds on oscillation cycle duration; (ii) Stability of frequency and robustness of the oscillation also depend on the relative timing of excitatory and inhibitory volleys within the oscillation cycle; (iii) RING can reproduce characteristics of both Pyramidal INterneuron Gamma (PING) and INterneuron Gamma (ING), transcending such classifications; (iv) In RING, robust gamma oscillations are promoted by slow inputs but impaired by fast ones. These results suggest that interneuronal membrane resonance can be an important ingredient for the generation of robust gamma oscillations with stable frequency. PMID:23042733
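    The membrane resonance at the heart of the RING mechanism can be illustrated with a minimal sketch that is not the authors' model: a damped-oscillator membrane (in the spirit of resonate-and-fire neurons) driven sinusoidally, whose subthreshold response peaks when the drive frequency matches the membrane's preferred frequency. The resonant frequency (40 Hz), damping, and integration step are illustrative assumptions, not parameters from the paper.

    ```python
    import numpy as np

    def resonator_response(f_in, f_res=40.0, damping=-20.0, dt=1e-4, T=1.0):
        """Steady-state response amplitude of a damped-oscillator membrane
        (complex state z, eigenvalue damping + i*2*pi*f_res) driven by a
        sinusoidal input at f_in Hz, integrated with forward Euler."""
        omega = 2.0 * np.pi * f_res
        z = 0.0 + 0.0j
        amps = []
        for n in range(int(T / dt)):
            t = n * dt
            drive = np.sin(2.0 * np.pi * f_in * t)
            z += dt * ((damping + 1j * omega) * z + drive)
            if t > T / 2:          # discard the initial transient
                amps.append(abs(z))
        return max(amps)

    # Response is largest near the membrane's resonant frequency (40 Hz)
    # and falls off for slower (10 Hz) or faster (80 Hz) drive.
    a10, a40, a80 = (resonator_response(f) for f in (10.0, 40.0, 80.0))
    ```

    In this toy setting, the frequency-selective amplification of inputs near the membrane's preferred band is what lets a network of such resonator interneurons lock onto a narrow gamma frequency despite fluctuating drive.
    
    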

  20. Processing Stages Underlying Word Recognition in the Anteroventral Temporal Lobe

    PubMed Central

    Halgren, Eric; Wang, Chunmao; Schomer, Donald L.; Knake, Susanne; Marinkovic, Ksenija; Wu, Julian; Ulbert, Istvan

    2006-01-01

    The anteroventral temporal lobe integrates visual, lexical, semantic and mnestic aspects of word-processing, through its reciprocal connections with the ventral visual stream, language areas, and the hippocampal formation. We used linear microelectrode arrays to probe population synaptic currents and neuronal firing in different cortical layers of the anteroventral temporal lobe, during semantic judgments with implicit priming, and overt word recognition. Since different extrinsic and associative inputs preferentially target different cortical layers, this method can help reveal the sequence and nature of local processing stages at a higher resolution than was previously possible. The initial response in inferotemporal and perirhinal cortices is a brief current sink beginning at ~120ms, and peaking at ~170ms. Localization of this initial sink to middle layers suggests that it represents feedforward input from lower visual areas, and simultaneously increased firing implies that it represents excitatory synaptic currents. Until ~800ms, the main focus of transmembrane current sinks alternates between middle and superficial layers, with the superficial focus becoming increasingly dominant after ~550ms. Since superficial layers are the target of local and feedback associative inputs, this suggests an alternation in predominant synaptic input between feedforward and feedback modes. Word repetition does not affect the initial perirhinal and inferotemporal middle layer sink, but does decrease later activity. Entorhinal activity begins later (~200ms), with greater apparent excitatory postsynaptic currents and multiunit activity in neocortically-projecting than hippocampal-projecting layers. In contrast to perirhinal and inferotemporal responses, entorhinal responses are larger to repeated words during memory retrieval. 
These results identify a sequence of physiological activation, beginning with a sharp activation from lower level visual areas carrying specific information to middle layers. This is followed by feedback and associative interactions involving upper cortical layers, which are abbreviated to repeated words. Following bottom-up and associative stages, top-down recollective processes may be driven by entorhinal cortex. Word processing involves a systematic sequence of fast feedforward information transfer from visual areas to anteroventral temporal cortex, followed by prolonged interactions of this feedforward information with local associations, and feedback mnestic information from the medial temporal lobe. PMID:16488158
